Accepted for Publication in the Astrophysical Journal

1 Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210
2 Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
3 CNRS, IRAP, 9 av. du Colonel Roche, BP 44346, F-31028 Toulouse cedex 4, France
4 Université de Toulouse, UPS-OMP, IRAP, F-31028 Toulouse cedex 4, France
5 Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstraße 12-14, 69120 Heidelberg, Germany
6 Max-Planck-Institut für extraterrestrische Physik, Giessenbachstraße 1, 85748 Garching, Germany
7 Institut für theoretische Astrophysik, Zentrum für Astronomie der Universität Heidelberg, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
8 Princeton University Observatory, Peyton Hall, Princeton, NJ 08544-1001
9 Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile
10 Centro de Astrofísica y Tecnologías Afines (CATA), Camino del Observatorio 1515, Las Condes, Santiago, Chile
11 Visiting Astronomer, Observatories of the Carnegie Institution for Science, 813 Santa Barbara St, Pasadena, CA 91101, USA
12 Department of Astronomy, Laboratory for Millimeter-wave Astronomy, and Joint Space Institute, University of Maryland, College Park, Maryland 20742, USA
13 Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
14 Observatorio Astronómico Nacional (IGN), C/ Alfonso XII 3, 28014 Madrid, Spain
15 Instituto de Radioastronomía Milimétrica (IRAM), Av. Divina Pastora 7, Nucleo Central, E-18012 Granada, Spain
16 European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching, Germany
17 Institut de Radioastronomie Millimétrique (IRAM), 300 Rue de la Piscine, F-38406 Saint Martin d'Hères, France
18 Observatoire de Paris, 61 Avenue de l'Observatoire, F-75014 Paris, France
19 Center for Cosmology & Astro-Particle Physics, The Ohio State University, Columbus, OH 43210

We compare the structure of molecular gas at 40 pc resolution to the ability of gas to form stars across the disk of the spiral galaxy M51. We break the PAWS survey into 370 pc and 1.1 kpc resolution elements, and within each we estimate the molecular gas depletion time (τ_Dep^mol), the star formation efficiency per free-fall time (ϵ_ff), and the mass-weighted cloud-scale (40 pc) properties of the molecular gas: surface density, Σ, line width, σ, and b ≡ Σ/σ^2 ∝ α_vir^-1, a parameter that traces the boundedness of the gas. We show that the cloud-scale surface density appears to be a reasonable proxy for the mean volume density. Applying this, we find a typical star formation efficiency per free-fall time, ∼0.3-0.36%, lower than adopted in many models and found for local clouds. Moreover, the efficiency per free-fall time anti-correlates with both Σ and σ, in some tension with turbulent star formation models. The best predictor of the rate of star formation per unit gas mass in our analysis is b ≡ Σ/σ^2, tracing the strength of self-gravity, with τ_Dep^mol ∝ b^-0.9. The sense of the correlation is that gas with stronger self-gravity (higher b) forms stars at a higher rate (lower τ_Dep^mol). The different regions of the galaxy mostly overlap in τ_Dep^mol as a function of b, so that low b explains the surprisingly high τ_Dep^mol found towards the inner spiral arms by Meidt et al. (2013).

§ INTRODUCTION

In the local universe, star formation occurs in molecular gas.
The recent star formation rate (SFR) correlates better with tracers of molecular gas than with tracers of atomic gas <cit.>, even though atomic gas represents the dominant mass reservoir of the interstellar medium (ISM) in galaxies at z=0 <cit.>. But even within the molecular ISM of a galaxy, only a small fraction of the gas participates in star formation at any given time <cit.>, and the properties of molecular gas vary among galaxies and among regions within galaxies <cit.>. The SFR per unit molecular gas mass should depend on these properties: e.g., the density, turbulence, and balance of potential and kinetic energy. As a result, we expect star formation to proceed at different specific (per unit gas mass) rates in different environments.

Observations indeed indicate that the SFR per unit molecular gas mass varies across the local galaxy population <cit.>. High stellar mass, early-type galaxies show comparatively low SFRs per unit H_2 mass <cit.>. Starbursts, especially galaxy-wide bursts induced by major galaxy mergers, have a high SFR per unit H_2 mass <cit.>. So do some galaxy centers <cit.>. Low stellar mass, low metallicity, late-type galaxies exhibit a high SFR per unit CO emission <cit.>. Although the translation of CO emission into H_2 mass remains uncertain in these systems <cit.>, several works argue that the SFR per H_2 mass is indeed higher there <cit.>. Within galaxies, dynamical effects can both enhance <cit.> and suppress <cit.> the SFR-per-H_2. As our ability to observe the molecular ISM across diverse environments improves, the list of observed variations in the SFR per unit H_2 mass continues to grow.

Though driven by large-scale environmental factors, the observed SFR-per-H_2 variations must have their immediate origins in the properties of the clouds that host star formation. That is, in an environment with a high SFR per unit gas mass, we expect the configuration and small-scale physical properties of the molecular ISM to be more conducive to star formation.

Recent theoretical work exploring variations in SFR-per-H_2 has focused on the properties of turbulent molecular clouds. In such models, the mean density, gravitational boundedness, and Mach number of a cloud determine its normalized rate of star formation <cit.>. These properties set the density structure of the cloud and the balance between kinetic and potential energy, determining the fraction of the gas in a directly star-forming, self-gravitating component. In such models, the gravitational free-fall time, τ_ff ∝ ρ^-0.5, often emerges as the characteristic timescale for star formation at many scales <cit.>, albeit with a low efficiency per τ_ff <cit.>.

Observations and theory suggest that the turbulent motions in molecular clouds are driven at about the scale of an individual cloud <cit.>, making this the relevant scale for many of the models referenced above. Current millimeter-wave telescopes can observe the structure of molecular gas at these scales across large areas of galaxies. This opens up the prospect of measuring how the cloud-scale structure of the cold ISM relates to the ability of gas to form stars in different galactic environments.

In this paper, we carry out such a study targeting M51. Our key data set is the PdBI Arcsecond Whirlpool Survey[This work is based on observations carried out with the IRAM NOEMA Interferometer and the IRAM 30-m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).] <cit.>.
PAWS mapped CO emission from the inner 9 × 6 kpc of M51 at 40 pc resolution <cit.>. From PAWS, we know the structure of the turbulent ISM at the scale of an individual giant molecular cloud <cit.>. Combining this information with infrared maps from Herschel and Spitzer <cit.>, we measure how the cloud-scale structure of the ISM relates to M51's ability to form stars.

This analysis builds on studies by <cit.>, <cit.>, and <cit.>, which showed that the cloud-scale ISM structure in M51 depends on environment. We also follow <cit.>, <cit.>, <cit.>, and <cit.>, who compared gas and star formation in M51 and came to apparently contradictory conclusions regarding whether star formation proceeds more quickly or more slowly in the highest density regions. In particular, we follow <cit.>, who also compared PAWS to infrared (IR) data, focusing on the impact of dynamics on the ability of gas to form stars.

We use the methodology described by <cit.>. In this approach, we calculate the molecular gas depletion time, τ_Dep^mol ≡ M_mol/SFR, averaged over a moderate-sized area, θ = 370-1100 pc, and compare this to the mass-weighted 40 pc surface density, line width, and self-gravity (virial parameter) within the larger beam. This approach captures both ensemble averages and local physical conditions. We expect that τ_Dep^mol becomes well-defined only after averaging over an ensemble of star-forming regions in different evolutionary states <cit.>. Meanwhile, the beam-by-beam 40 pc structural measurements from PAWS allow us to test expectations from turbulent theories. By taking the mass-weighted average within each larger beam, we preserve the small-scale structural information in the PAWS map.

§ METHODS

Table: Cloud-Scale Structure, IR, and CO in M51

R.A.       Dec.      Beam  r_gal^a  Σ_mol^b       Σ_SFR^c             <Σ_40pc>     <σ_40pc>   <b_40pc>                   f_arm^d  f_ia^d  f_ctr^d
(°)        (°)       (″)   (kpc)    (M_⊙ pc^-2)   (M_⊙ yr^-1 kpc^-2)  (M_⊙ pc^-2)  (km s^-1)  (M_⊙ pc^-2/(km s^-1)^2)
(1)        (2)       (3)   (4)      (5)           (6)                 (7)          (8)        (9)                        (10)     (11)    (12)
202.46964  47.19517  30    0.0      194.1         0.1928              380.8        10.4       3.53                       0.01     0.00    0.98
202.46718  47.19806  30    0.4      204.3         0.1909              445.5        10.5       4.03                       0.02     0.02    0.97
202.47209  47.19228  30    0.4      179.2         0.1688              353.2         9.9       3.57                       0.03     0.01    0.96
202.46718  47.19228  30    0.5      200.6         0.1698              412.8        11.1       3.37                       0.07     0.00    0.92
202.47209  47.19806  30    0.5      178.0         0.1749              368.2         9.8       3.81                       0.05     0.01    0.94
202.46474  47.19517  30    0.5      223.7         0.1897              477.4        11.5       3.60                       0.03     0.01    0.96
202.47453  47.19517  30    0.5      180.7         0.1775              365.5        10.2       3.52                       0.02     0.00    0.97
202.46964  47.18940  30    0.8      170.8         0.1339              367.6        10.4       3.40                       0.15     0.02    0.83
202.46964  47.20094  30    0.8      165.1         0.1493              394.9         9.6       4.33                       0.10     0.04    0.86
202.47699  47.19228  30    0.8      161.7         0.1482              383.6        10.3       3.62                       0.02     0.02    0.96

a: Galactocentric radius for a thin disk and the orientation parameters quoted in Section <ref>.
b: Molecular mass linearly translated from CO surface brightness using α_CO = 4.35 M_⊙ pc^-2 (K km s^-1)^-1.
c: Here, the SFR is a linear transformation of the TIR emission. See Section <ref>.
d: Fraction of the CO flux in the beam that arises from arm, interarm, or central regions, as defined following <cit.>.
The full version of this table is available as online material.
The following uncertainties apply: (a) uncertainty in the distance, ∼10%, linearly affects r_gal; (b) for Σ_mol, a ∼10% gain uncertainty applies at both resolutions, and the statistical noise is on average 2.25 M_⊙ pc^-2 at 10″ resolution and 0.5 M_⊙ pc^-2 at 30″ resolution; (c) for Σ_SFR, calibration uncertainties are of order 5-10%, the multiband TIR estimates from <cit.> are uncertain by ∼0.08 dex, the translation from 70 μm to TIR scatters by an additional ∼0.05 dex, and the statistical noise is ∼2.5 × 10^6 L_⊙ kpc^-2 (≈ 4 × 10^-4 M_⊙ yr^-1 kpc^-2) at 30″ and ∼9.4 × 10^6 L_⊙ kpc^-2 (∼1.3 × 10^-3 M_⊙ yr^-1 kpc^-2) at 10″ resolution; (d) from our Monte Carlo calculation, typical statistical uncertainties in <Σ_40pc>, <σ_40pc>, and <b_40pc> are 1.5%, 2%, and 3% at 30″ resolution and 4%, 5%, and 6% at 10″ resolution. The covariance in the uncertainties at both resolutions is about 0.7 between <Σ_40pc> and <σ_40pc>, -0.4 between <Σ_40pc> and <b_40pc>, and -0.9 between <σ_40pc> and <b_40pc>. The ∼10% gain uncertainty also applies to <Σ_40pc> and <b_40pc>. These uncertainties do not account for the translation to physical quantities.

Selection criteria: at 30″, we include all lines of sight where at least 50% of the beam lies within the PAWS field. At 10″, we include all lines of sight where at least 95% of the beam lies within the PAWS field, Σ_mol > 5 M_⊙ pc^-2, and Σ_SFR > 7.5 × 10^-3 M_⊙ yr^-1 kpc^-2.

We wish to measure how small-scale ISM structure relates to the ability of gas to form stars in M51. To do this, we require region-by-region estimates of the recent star formation rate, the molecular gas reservoir, and the structure of the molecular gas on the scale of an individual cloud. Using these, we correlate the cloud-scale structure of the molecular gas with the star formation rate per unit gas mass, expressed as a gas depletion time, τ_Dep^mol.

We estimate these quantities and conduct a correlation analysis at 30″ and 10″ resolution. These correspond to linear resolutions of ∼1100 pc and ∼370 pc at our adopted distance of 7.6 Mpc to M51 <cit.>. At 30″ resolution, we are able to include more IR bands in our SFR estimate. At 10″ resolution, we are better able to resolve the dynamical features that drive the differences within the M51 cloud population <cit.>. At resolutions finer than 10″, we cannot include infrared (IR) emission, our main SFR indicator (see appendix).

The choice of a few-hundred-pc to kpc scale also ensures that within a resolution element we average over many individual star-forming regions. This allows us to avoid most effects related to the time evolution of individual regions <cit.>, and so to better access the time-averaged behavior of the ISM. The evolutionary effects revealed at high resolution are explored in E. Schinnerer, A. Hughes et al. (in preparation) and M. Chevance, J. M. D. Kruijssen et al. (in preparation).

In practice, we record the properties of M51 at each point in a hexagonally-packed, half-beam-spaced grid <cit.> (see the sketch following the Data subsection below). Figure <ref> shows the individual sampling points for these two grids, overlaid on the Herschel 70 μm map <cit.>.

§.§ Data

<cit.> and <cit.> present PAWS, which mapped CO emission from the central region of M51 at 1.16″ × 0.97″ ∼ 1.06″ ∼ 40 pc resolution with ∼5 km s^-1 velocity resolution. PAWS includes short- and zero-spacing information. <cit.> also summarize the multiwavelength data available for M51, with references (see their Table 2).

We also use broad-band maps of IR emission from Herschel and Spitzer. These were obtained as part of the Spitzer Infrared Nearby Galaxy Survey <cit.> and the Herschel Very Nearby Galaxies Survey <cit.>.
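For concreteness, the sketch below generates a hexagonally-packed, half-beam-spaced set of sampling points of the kind described above. It assumes flat-sky coordinates and an arbitrary rectangular extent; it is an illustration under our own assumptions, not the PAWS sampling code.

```python
import numpy as np

def hex_grid(center_x, center_y, spacing, nx=20, ny=20):
    """Generate a hexagonally packed grid of sampling points.

    For half-beam sampling, set `spacing` to half the beam FWHM
    (e.g. 15 arcsec for the 30 arcsec working resolution).
    """
    points = []
    dy = spacing * np.sqrt(3.0) / 2.0   # vertical offset between rows
    for j in range(-ny, ny + 1):
        # odd rows shift by half a spacing, giving the hexagonal packing
        x_off = 0.5 * spacing if (j % 2) else 0.0
        for i in range(-nx, nx + 1):
            points.append((center_x + i * spacing + x_off,
                           center_y + j * dy))
    return np.array(points)

# Example: half-beam (15") spacing for the 30" working resolution
grid = hex_grid(0.0, 0.0, spacing=15.0)
print(grid.shape)
```

Half-beam spacing oversamples each beam by roughly a factor of four, which is why the significance tests quoted later account for an oversampling factor of 4.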
§.§ Measurements

Integrated CO Intensity: At 30″ resolution, we use the PAWS single-dish map <cit.> to measure the integrated CO intensity. At 10″, we convolve the combined interferometer and single-dish cube to a coarser 10″ resolution and measure the integrated intensity from this degraded map. As discussed by <cit.>, the deconvolution of the hybrid 30m+PdBI map recovers 99% of the flux of the galaxy observed with the IRAM 30-m telescope.

To collapse the 30″ and 10″ cubes to integrated intensity measurements, we sum over a broad velocity window from -70 to +70 km s^-1 about the local mean velocity. The signal-to-noise in CO is very high, so including some empty bandwidth is not a concern. We estimate the associated uncertainty by measuring the rms noise of the convolved line cube in the signal-free region. The statistical uncertainty in the integrated intensity is then the per-channel intensity noise, summed in quadrature across all channels in the velocity integration window and multiplied by the velocity width of a channel.

Total Infrared Surface Brightness: We convolve the IR data to have Gaussian beams using the kernels of <cit.>. Then, using the formulae of <cit.>, we combine Spitzer 24 μm and 70 μm intensities with Herschel 160 μm and 250 μm intensities to estimate a total infrared (TIR) surface brightness, Σ_TIR, for each resolution element. This is our basic measure of star formation activity throughout this paper.

At 10″ resolution, we can only use the Herschel 70 μm data. We calculate the coefficient to translate I_70 to Σ_TIR by comparing the two quantities at 30″ resolution, where we know Σ_TIR from the four-band calculation following <cit.>. In the PAWS field, the ratio Σ_TIR/I_70 varies modestly as a function of radius, presumably reflecting a radial change in dust temperature. We find:

Σ_TIR / I_70 = 10^6 × { f(r_gal)  if r_gal < 2.5 kpc ;  2.96  if r_gal ≥ 2.5 kpc }

where

f(r) = 1.93 + 0.01 r + 0.28 r^2 - 0.048 r^3 .

Here r_gal refers to the deprojected galactocentric radius, and f(r) is a polynomial fit to the ratio Σ_TIR/I_70 as a function of r inside r_gal = 2.5 kpc. Σ_TIR has units of L_⊙ kpc^-2, I_70 has units of MJy sr^-1, and r_gal has units of kpc. Outside r_gal ∼ 2.5 kpc, the ratio appears flat. The appendix compares SFRs derived from I_70 using this approach to those using Σ_TIR at θ = 30″. The two show a median ratio of unity, less than 10% scatter, and no clear systematics across the PAWS field. At 30″, we measure the rms scatter in Σ_TIR from the low-intensity regions of the map to be ∼2.5 × 10^6 L_⊙ kpc^-2. At 10″, using only the 70 μm data, the rms scatter is higher, ∼8.5 × 10^6 L_⊙ kpc^-2.

Cloud-Scale Properties: We measure the intensity-weighted cloud-scale properties of the gas in each beam following <cit.>. In brief, we begin with the native 40 pc resolution PAWS cube. We recenter each spectrum about the local mean velocity. Next, we weight each spectrum by the integrated intensity along the line of sight and convolve from 1″ ≈ 40 pc resolution (our "measurement scale") to 10″ ≈ 370 pc or 30″ ≈ 1.1 kpc (our "averaging scales"). From these intensity-weighted, stacked spectra, we measure the integrated intensity and line width of the gas. This cross-scale weighted averaging resembles that of <cit.>.

Because of the intensity (∼mass, for fixed α_CO) weighting, this approach captures the high-resolution structure of the emission within each larger averaging beam. <cit.> demonstrated that the results match those from mass-weighted averages of cloud catalogs well, but with far fewer assumptions.
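The sketch below illustrates the core of this shuffle-and-stack procedure in simplified form: it operates on a list of already-extracted spectra rather than a full cube and omits the spatial convolution, so it shows the logic rather than reproducing the <cit.> implementation. Function names and the equivalent-width line-width estimator are our own choices.

```python
import numpy as np

def shuffle_spectrum(spec, v_axis, v_center):
    """Resample a spectrum so that its local mean velocity lands at v = 0."""
    return np.interp(v_axis, v_axis - v_center, spec, left=0.0, right=0.0)

def stack_and_measure(spectra, v_axis):
    """Intensity-weighted stack of already-shuffled spectra.

    spectra : (n_sightlines, n_channels) array in K
    v_axis  : common velocity axis in km/s (uniform spacing)

    Returns the mass-weighted mean integrated intensity [K km/s] and an
    equivalent-width based rms line width [km/s] for the stack.
    """
    dv = abs(v_axis[1] - v_axis[0])
    ii = spectra.sum(axis=1) * dv        # per-sightline integrated intensity
    w = ii / ii.sum()                    # intensity (~mass) weights
    stack = (spectra * w[:, None]).sum(axis=0)
    ii_stack = stack.sum() * dv          # = sum(I_i^2)/sum(I_i): mass-weighted mean
    # equivalent-width line width: sigma = I / (sqrt(2 pi) T_peak) for a Gaussian
    sigma = ii_stack / (np.sqrt(2.0 * np.pi) * stack.max())
    return ii_stack, sigma
```

Because each spectrum enters the stack weighted by its own integrated intensity, the stacked integrated intensity is the mass-weighted mean of the per-sightline intensities, which is what makes the resulting measurements mass-weighted averages.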
We write the resulting measurements as, e.g., <Σ_40pc>. This is read as "the mass-weighted average 40 pc surface density within a larger beam[More rigorously, following <cit.> we would also indicate the size of that larger beam (the "averaging scale") when quoting <Σ_40pc>. In this paper the plots, discussion, and tables make it clear whether <Σ_40pc> refers to an averaging scale of 370 pc (<Σ_40pc>_370pc) or 1.1 kpc (<Σ_40pc>_1.1kpc).]." We focus on three such measurements:

* The cloud-scale molecular gas surface density, <Σ_40pc>. This is a linear translation of the integrated intensity, <Σ_40pc> = α_CO <I_40pc>, where α_CO is our adopted CO-to-H_2 conversion factor. If the line-of-sight length of the gas distribution, h, is known or assumed, then <Σ_40pc> can be used to estimate the volume density of the gas on 40 pc × h scales, <ρ_40pc>. From this, one can estimate the gravitational free-fall time, τ_ff. We show in Section <ref> that for published Milky Way and M51 cloud catalogs, Σ_mol and ρ do correlate well.

* The rms line width of CO, <σ_40pc>, measured from the "equivalent width" and corrected for channelization and channel-to-channel correlation following <cit.>. For a given temperature, and when the line width is purely turbulent in nature, this corresponds to the turbulent Mach number, ℳ. <σ_40pc> may also contain a contribution from bulk motions unresolved at the 40 pc resolution of PAWS <cit.>. Thermal contributions to the line width are expected to be small.

* The dynamical state of the gas, as traced by the ratio <b_40pc>, with b ≡ Σ_mol/σ^2. This "boundedness parameter" also relies on an adopted CO-to-H_2 conversion factor. Within a fixed length scale, b is proportional to UE/KE, the ratio of potential energy (UE) to kinetic energy (KE). This is the inverse of the virial parameter, b^-1 ∝ α_vir ≈ 2 KE/UE. When b is high, the gas should be more gravitationally bound. Also, within a fixed length scale, b ∝ τ_cross^2/τ_ff^2, where τ_ff ∝ 1/√(M/R^3) is the free-fall time and τ_cross ∼ R/σ is the crossing time for the measured velocity dispersion. This ratio has been highlighted by <cit.> as a key driver of the star formation efficiency per free-fall time.

Note that in this paper we focus on b ≡ Σ_mol/σ^2. This differs from the B ≡ I_CO/σ^2 discussed in <cit.> by a factor of α_CO, so that b = α_CO B. While B has the advantage of being directly computed from observable quantities, b ∝ α_vir^-1 is more closely linked to the physical state of the gas.

We estimate uncertainties in <Σ_40pc>, <σ_40pc>, and <b_40pc> using a Monte Carlo approach. We measure the noise in the stacked, shuffled, intensity-weighted spectra from the signal-free region. Then we realize 100 versions of each spectrum, adding random noise to the real spectrum. For each case, we remeasure <Σ_40pc>, <σ_40pc>, and <b_40pc>. We compare these to our measurements without added noise, which we take to be the true values for the purposes of this exercise. The rms offset between the simulated noisy data and the true values yields our estimate of the noise. This approach is ad hoc, but it yields realistic statistical uncertainties and captures the covariance among the uncertainties on <Σ_40pc>, <σ_40pc>, and <b_40pc>.
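A compressed sketch of this Monte Carlo follows. The function names and the equivalent-width estimator are ours, standing in for the full measurement chain described above; the structure (add noise, remeasure, take rms offsets and error correlations) mirrors the text.

```python
import numpy as np

def mc_uncertainties(stack, v_axis, rms_noise, alpha_co=4.35, n_mc=100, seed=1):
    """Monte Carlo errors for <Sigma>, <sigma>, and <b> from a stacked spectrum."""
    rng = np.random.default_rng(seed)
    dv = abs(v_axis[1] - v_axis[0])

    def measure(spec):
        ii = spec.sum() * dv                            # K km/s
        sig = ii / (np.sqrt(2.0 * np.pi) * spec.max())  # equivalent-width sigma
        Sigma = alpha_co * ii                           # Msun/pc^2
        return np.array([Sigma, sig, Sigma / sig**2])   # (Sigma, sigma, b)

    truth = measure(stack)   # noise-free values, taken as "true" here
    sims = np.array([measure(stack + rng.normal(0.0, rms_noise, stack.size))
                     for _ in range(n_mc)])
    err = np.sqrt(((sims - truth) ** 2).mean(axis=0))   # rms offsets
    corr = np.corrcoef((sims - truth).T)                # error covariance structure
    return err, corr
```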
§.§ Conversion to Physical Parameters

We report our results in terms of simple transformations of observable quantities into physical parameters.

Galactocentric Coordinates: Following <cit.>, we assume an inclination i = 22° <cit.> and a position angle P.A. = 172° <cit.>, with the galaxy center at α_2000 = 13^h 29^m 52.7^s, δ_2000 = +47° 11′ 43″ <cit.>. We adopt the 7.6 Mpc distance of <cit.> and <cit.>.

CO-to-H_2: We estimate H_2 mass from CO emission using a CO-to-H_2 conversion factor α_CO = 4.35 M_⊙ pc^-2 (K km s^-1)^-1, which includes helium. This is a standard value for the Galaxy <cit.>. In the appendix, we show that a dust-based approach following <cit.> and <cit.> suggests approximately this value. <cit.> came to the same conclusion via a multi-line CO analysis of the spiral arms. B. Groves et al. (in preparation) show that this value applies with only weak variations across the disk of M51a using several independent methods. Other values have been suggested for M51, mostly lower than Galactic by a factor of ≈2 based on dust observations <cit.>. We discuss the impact of a lower conversion factor in the text.

Σ_TIR to Σ_SFR: When relevant, we recast the TIR surface brightness as an SFR surface density using the conversion of <cit.>, which assumes a <cit.> initial mass function and reduces to

Σ_SFR / (M_⊙ yr^-1 kpc^-2) ≈ 1.5 × 10^-10 Σ_TIR / (L_⊙ kpc^-2) .

A large body of work explores the subtleties of SFR estimation, often in M51 <cit.>. Our focus in this paper is new diagnostics of the molecular medium. Given the overwhelming extinction in the inner region of M51, we adopt the simple, widely accepted SFR diagnostic of TIR surface brightness. As a check, the appendix shows the impact of several alternative SFR prescriptions on our inferred molecular gas depletion time at 30″ resolution. These matter mainly to the overall normalization. By using the TIR emission, it is likely that we somewhat overestimate Σ_SFR. One of our key findings is that ϵ_ff is low (Section <ref>); this result would be even stronger if we used a tracer that yields lower Σ_SFR. The systematic trends appear weak and, when present, go opposite the sense needed to yield a fixed ϵ_ff. We intend to revisit this assumption in more detail in future work, ideally using an extinction-robust SFR tracer with high angular resolution to measure the SFR on the scale of individual clouds.

§.§ Mapping to Dynamical Region

Galactic dynamics relate to molecular gas structure and star formation in M51 <cit.>. With this in mind, we separate our correlation analysis by dynamical region (Section <ref>). To do this, we use the dynamical region masks created by <cit.>. We use their simplified region definition, which breaks the PAWS field into "arm," "interarm," and "central" regions. For each 10″ or 30″ sampling point, we convolve the PAWS integrated CO intensity map, multiplied by the mask for each separate region, to the working resolution. Then we note the fraction of the flux in each beam coming from each dynamical region. When most of the CO flux in a beam comes from one dynamical environment, we associate the results for that beam with that environment (see the sketch at the end of this subsection).

Note that the three-region version of the <cit.> mask may still group together physically distinct environments. We treat the upstream and downstream interarm regions together <cit.>, and the "center" groups together the star-forming central molecular ring and the nucleus, which is more quiescent and potentially contaminated by the active galactic nucleus <cit.>.
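In sketch form, this bookkeeping looks as follows. The Gaussian-smoothing shortcut stands in for a proper beam convolution, and the map and mask names are placeholders of our own:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_flux_fractions(co_map, region_masks, beam_fwhm_pix):
    """Fraction of CO flux from each dynamical region within the working beam.

    co_map       : 2-D integrated CO intensity map (native resolution)
    region_masks : dict {name: boolean mask}, e.g. 'arm', 'interarm', 'center'
    beam_fwhm_pix: FWHM of a Gaussian stand-in for the working beam, in pixels
    """
    sig = beam_fwhm_pix / 2.355
    total = gaussian_filter(np.nan_to_num(co_map), sig)
    total = np.where(total > 0, total, np.nan)
    return {name: gaussian_filter(np.nan_to_num(co_map * mask), sig) / total
            for name, mask in region_masks.items()}

# A sampling point is then assigned to the region with the largest fraction:
# region = max(fracs, key=lambda k: fracs[k][j, i])
```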
§ RESULTS

Figure <ref> shows the scaling between TIR surface brightness, tracing Σ_SFR, and CO intensity, tracing Σ_mol. The left panel shows all of M51 at 30″ ≈ 1.1 kpc resolution. The right panel includes only data from the PAWS field, plotting the 10″ resolution measurements in blue, the 30″ measurements in red, and the 13″ aperture measurements of <cit.> in green.

Over the whole of M51 (gray points), our data imply a molecular gas depletion time τ_Dep^mol ≈ 1.5 Gyr with ∼0.2 dex scatter. In the PAWS field (red points), the numbers are about the same: τ_Dep^mol ≈ 1.7 Gyr with ∼0.1 dex scatter. The median at 370 pc resolution remains τ_Dep^mol = 1.6 Gyr, but with a larger scatter of ∼0.3 dex.

This resembles the τ_Dep^mol ≈ 1-2 Gyr found at the same resolution for a larger sample of similar nearby disk galaxies by <cit.>. That study includes M51, but using different data. Our results agree qualitatively with their specific results for M51, including the presence of high IR-to-CO (low τ_Dep^mol) regions in the inner galaxy. Here, the CO data have much higher signal-to-noise and we use only IR data to trace recent star formation. The right panel of Figure <ref> shows that our data also agree to first order with the measurements by <cit.>. They targeted star-forming peaks with a different measurement strategy, 13″ aperture photometry, and used yet another CO map <cit.> and approach to Σ_SFR, combining Paschen α and 24 μm emission.

Over the full area of M51 (left panel), the scaling between IR and CO exhibits a somewhat "bowed" shape moving from outside the PAWS field (the gray points at low Σ_mol) to the inner disk (red points at high Σ_mol). That is, the slope of the relation is slightly sublinear at low Σ_mol and superlinear at high Σ_mol.

This curvature, which can be seen in the running mean (black-and-white squares) in the left panel, helps explain why different studies targeting M51 have come to apparently contradictory conclusions regarding the slope of the SFR-gas scaling relation <cit.>. Those studying the inner part of the galaxy, especially at higher resolution using interferometers, see the superlinear slope evident at high surface densities. Those excluding the inner regions <cit.> and targeting a wider area find a modestly sublinear slope. That is, given the curved shape of the relation in the left panel of Figure <ref>, we do not expect a single power law to fit all of M51. Note that this does not explain all of the scatter in the M51 literature; methodological differences, including fitting and sampling strategy, have also played a role <cit.>.

The right panel of Figure <ref> shows that at higher resolution, the IR surface brightness scatters more at fixed Σ_mol, a result that has been measured before <cit.>. The dependence of scatter on scale may be attributed to the evolution of individual star-forming regions <cit.>, and the 0.3 dex scatter at ∼370 pc resolution appears consistent with the scatter expected from evolution in the <cit.> model. Our 370 pc measurements may be more stochastic than the 1.1 kpc calculations, but they also allow us to better isolate the physical conditions relevant to star formation. We capture more variation in local cloud populations and are better able to separate the galaxy into distinct regions. Below, we find a larger range of ISM structure at 370 pc than at 1.1 kpc, as well as stronger correlations between environment and ISM structure and distinct results for different dynamical regions. At 370 pc resolution, we do observe substantial variation in Σ_TIR at a given Σ_mol, including a wide range of Σ_TIR at high Σ_mol ≳ 100 M_⊙ pc^-2.
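As a concrete illustration, binned depletion-time statistics like those quoted in the next paragraph follow directly from maps of Σ_mol and Σ_SFR; the function and binning choices below are ours, not from the paper's pipeline:

```python
import numpy as np

def tdep_stats(sigma_mol, sigma_sfr, lo, hi):
    """Median depletion time (Gyr) and rms log scatter (dex) in a Sigma_mol bin.

    sigma_mol in Msun/pc^2, sigma_sfr in Msun/yr/kpc^2.
    """
    tdep_gyr = (sigma_mol * 1e6) / sigma_sfr / 1e9  # Msun/kpc^2 over SFR -> Gyr
    sel = (sigma_mol > lo) & (sigma_mol < hi) & np.isfinite(tdep_gyr)
    logt = np.log10(tdep_gyr[sel])
    return 10 ** np.median(logt), logt.std()
```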
For Σ_mol = 30-100 M_⊙ pc^-2, the median τ_Dep^mol ≈ 2 Gyr with 0.25 dex scatter. For Σ_mol > 100 M_⊙ pc^-2, the median τ_Dep^mol drops to 1.6 Gyr, but now with 0.37 dex scatter. High IR-to-CO ratios (low τ_Dep^mol) are preferentially found at high surface densities, which has helped drive the superlinear power-law scalings reported for Σ_SFR vs. Σ_mol in M51 <cit.>. But there are also many lines of sight with high Σ_mol and relatively weak IR emission. These unexpectedly gas-rich, but relatively IR-weak, regions were highlighted by <cit.>, who argued that in these regions streaming motions suppress the collapse of gas.

Are these region-to-region variations in τ_Dep^mol driven by changes in the local structure of the gas? In the rest of this section, we explore this idea by comparing τ_Dep^mol to the local mean 40 pc cloud-scale surface density, velocity dispersion, and gravitational boundedness.

§.§ Cloud-Scale Surface Density and τ_Dep^mol

All other things being equal, high surface density gas should be denser, with a shorter collapse time, τ_ff. Do the variations in τ_Dep^mol in Figure <ref> arise from changes in the cloud-scale gas density across the galaxy? Figure <ref> tests this expectation, plotting τ_Dep^mol as a function of <Σ_40pc>, the mass-weighted cloud-scale surface density in each beam. Table <ref> quantifies what we see in the figure, reporting rank correlation coefficients between <Σ_40pc> and τ_Dep^mol for different ranges of <Σ_40pc>.

We do find a weak anti-correlation between τ_Dep^mol and <Σ_40pc> over the range <Σ_40pc> ≈ 100-350 M_⊙ pc^-2. Treating <Σ_40pc> as the independent variable yields τ_Dep^mol ∝ <Σ_40pc>^-α with α = 0.25-0.35 over this range. The rank correlation coefficient over this range is only -0.14, but it is still statistically significant.

Our simplest expectation would be τ_Dep^mol ∝ <Σ_40pc>^-0.5. This would be expected if ρ ∝ <Σ_40pc> (which appears reasonable; see Section <ref>) and stars formed from gas with a fixed efficiency per τ_ff. Gray lines in the figure illustrate this slope, which is steeper than the relation that we find. So over <Σ_40pc> ∼ 100-350 M_⊙ pc^-2, denser (or at least higher <Σ_40pc>) gas does appear to form stars at a higher normalized rate, but the efficiency per free-fall time decreases (weakly) as <Σ_40pc> increases.

At higher <Σ_40pc> > 350 M_⊙ pc^-2, τ_Dep^mol increases with increasing <Σ_40pc>, though with large scatter. This leads to the unexpected result pointed out by <cit.> that some of the least efficient star-forming regions in M51 have high cloud-scale molecular gas surface densities. We show below that although these regions have high surface densities, they also appear to be less gravitationally bound (higher α_vir; Section <ref>).

Table: Rank Correlations Relating Cloud-Scale Structure to τ_Dep^mol and ϵ_ff

Quantity                           vs. τ_Dep^mol   vs. τ_Dep^mol   vs. ϵ_ff (fixed h)  vs. ϵ_ff (h_dyn)  vs. ϵ_ff (fixed h)  vs. ϵ_ff (h_dyn)
                                   at θ=30″        at θ=10″        at θ=30″            at θ=30″          at θ=10″            at θ=10″

<Σ_40pc> … all data                -0.14 (0.304)   +0.02 (0.630)   -0.59 (0.000)       -0.78 (0.000)     -0.47 (0.000)       -0.61 (0.000)
<Σ_40pc> … 100 < <Σ_40pc> < 350    -0.17 (0.307)   -0.14 (0.016)   -0.51 (0.001)       -0.67 (0.000)     -0.15 (0.006)       -0.30 (0.000)
<Σ_40pc> … <Σ_40pc> > 350          +0.35 (0.202)   +0.20 (0.089)   -0.51 (0.039)       -0.62 (0.010)     -0.38 (0.002)       -0.45 (0.000)
<σ_40pc> … all data                +0.14 (0.278)   +0.26 (0.000)   -0.75 (0.000)       -0.78 (0.000)     -0.61 (0.000)       -0.60 (0.000)
<σ_40pc> … 100 < <Σ_40pc> < 350    +0.19 (0.254)   +0.25 (0.000)   -0.69 (0.000)       -0.62 (0.000)     -0.42 (0.000)       -0.33 (0.000)
<σ_40pc> … <Σ_40pc> > 350          +0.84 (0.000)   +0.64 (0.000)   -0.89 (0.000)       -0.84 (0.000)     -0.72 (0.000)       -0.66 (0.000)
<b_40pc> … all data                -0.67 (0.000)   -0.42 (0.000)   +0.15 (0.270)       -0.19 (0.145)     +0.13 (0.004)       -0.13 (0.005)
<b_40pc> … 100 < <Σ_40pc> < 350    -0.61 (0.001)   -0.49 (0.000)   +0.30 (0.064)       -0.07 (0.672)     +0.36 (0.000)       +0.09 (0.124)
<b_40pc> … <Σ_40pc> > 350          -0.79 (0.000)   -0.64 (0.000)   +0.73 (0.001)       +0.56 (0.031)     +0.59 (0.000)       +0.46 (0.000)

Surface density ranges are in M_⊙ pc^-2. Parenthetical values report the fraction of 1,000 random re-pairings (accounting for an oversampling factor of 4) that exceed the rank correlation of the true data; they can be read as Monte Carlo p-values. ϵ_ff with "fixed h" assumes a fixed line-of-sight depth of 100 pc; ϵ_ff with h_dyn uses Equation <ref>.

§.§ Efficiency per Free-fall Time

Given a distribution of gas along the line of sight, <Σ_40pc> traces <ρ_40pc>, the volume density of the gas averaged over the θ = 40 pc beam. In turn, <ρ_40pc> determines the gravitational free-fall time, τ_ff. Contrasting τ_ff with the measured τ_Dep^mol yields the efficiency per free-fall time, ϵ_ff. An approximately fixed ϵ_ff is argued to hold across scale and system by, e.g., <cit.>. More generally, τ_ff is taken as the governing timescale for star formation, even when ϵ_ff is low.

For gas with a depth h along the line of sight,

<ρ_40pc> = <Σ_40pc> / h ,

τ_ff = √( 3π / (32 G <ρ_40pc>) ) = 81 Myr ( <Σ_40pc> / h_100 )^-0.5 ,

ϵ_ff = τ_ff / τ_Dep^mol ,

where <Σ_40pc> is in units of M_⊙ pc^-2 and h_100 is the depth of the molecular gas layer along the line of sight normalized to a fiducial value of 100 pc. ϵ_ff is the efficiency per free-fall time, obtained by contrasting τ_Dep^mol with τ_ff.

The gray diagonal lines in Figure <ref> show τ_Dep^mol ∝ Σ_mol^-0.5. If h_100 remains fixed, then each of these lines corresponds to a fixed ϵ_ff. In Figure <ref>, we show the distribution of ϵ_ff implied by our measurements. We plot results for both working resolutions and show values for a fixed h = 100 pc (top) and h ∝ <b_40pc>^-1 (bottom; see the explanation in Section <ref>). We also illustrate the range of ϵ_ff measured by several Milky Way studies.

We find values of ϵ_ff that are low both in the absolute sense and relative to theoretical expectations and Milky Way values. We also find ϵ_ff to vary as a function of environment and the local cloud population. Before discussing this in detail, we motivate our adopted h (Section <ref>) and demonstrate that <Σ_40pc> should indeed be a good predictor of <ρ_40pc> (Section <ref>).

§.§.§ What Line-of-Sight Depth to Use?

The depth of the gas layer along the line of sight, h, affects <ρ_40pc> and so ϵ_ff. We do not observe h, but we can make a reasonable estimate. The most common approach is to measure the radius of a GMC on the sky and then assume spherical symmetry. In cloud catalogs for the Milky Way <cit.> and M51 <cit.>, most CO luminosity arises from clouds with radii ∼40-60 pc. The left panel in Figure <ref> shows the distribution of CO luminosity as a function of cloud radius for these three catalogs.
The figure shows similar distributions for the <cit.> M51 catalog and the inner (r_gal < 8.5 kpc) Milky Way portion of the recent <cit.> catalog. In both cases, 68% of the luminosity comes from clouds with ∼30 pc < R < 95 pc, with the mid-point for CO emission at R ∼ 60 pc. The <cit.> re-analysis of the <cit.> Milky Way clouds (their "A1" case) yields slightly smaller cloud sizes, ∼20 pc < R < 65 pc, with median R ∼ 40 pc.

Estimates of the thickness of the molecular gas layer in both the Milky Way and M51 yield a similar value. <cit.> compile estimates of the thickness of the molecular gas disk in the Milky Way (their Figure 6). They find 90-120 pc (FWHM) within the Solar Circle. For M51, <cit.> assumed the molecular gas to be in hydrostatic equilibrium. Following <cit.>, they calculated a mean FWHM thickness ≈ 94 pc for the compact portion of the CO disk. If we consider the average density within the FWHM ≈ 90 pc, then the corresponding h to use in Equations <ref> is h ≈ 90/0.68 = 132 pc.

Thus both estimates of the thickness of the molecular disk and GMC catalogs support our adopted h ∼ 100 pc. Because τ_ff ∝ h^0.5, modest variations in h do not have a large impact on ϵ_ff. Still, we test the impact of varying h by considering the case where the dynamical state of the clouds (i.e., the virial parameter) is fixed. Then M ∝ Rσ^2 and R ∝ σ^2/Σ = b^-1 (up to constants). The same result applies for gas in a thin disk with only self-gravity. In this case:

h_dyn ∝ <b_40pc>^-1 ,

normalized so that h_dyn = 100 pc for typical gas in the PAWS field. Note that in this situation, where b reflects a changing line-of-sight depth and not a changing dynamical state, h_dyn^-1 and (at fixed <Σ_40pc>) <ρ_40pc> are both linearly proportional to <b_40pc>. Then we would expect a similar relation of τ_Dep^mol to both variables. Below we show that this is not the case, and our best estimate is that b in fact reflects a changing dynamical state, not a changing line-of-sight depth. Thus, we consider the case of fixed h = 100 pc to represent our basic result, and use Equation <ref> to check the robustness of our conclusions.

§.§.§ Cloud Surface and Volume Density

The free-fall time depends on the volume density, <ρ_40pc>, but we observe the surface density, <Σ_40pc>. Although it has not been emphasized, these quantities correlate well in current GMC catalogs. For the Milky Way and M51 catalogs mentioned above, the right panel in Figure <ref> shows the volume density of each cloud, ρ = M/((4/3)πR^3), as a function of its surface density, Σ = M/(πR^2). Surface and volume density correlate well, with rank correlation coefficients of 0.90 <cit.>, 0.51 <cit.>, and 0.72 <cit.>.

Our mean inferred value of ϵ_ff does not depend on the assumption that <Σ_40pc> maps perfectly to <ρ_40pc>. Because τ_ff depends weakly on h, it only matters that our adopted line-of-sight depth be roughly correct. But Figure <ref> argues that a stronger case holds: the highly observable cloud-scale surface density appears to be a reasonable proxy for the physically important, but hard to directly access, volume density. More work on this topic is needed, but the right panel in Figure <ref> offers an encouraging sign for extragalactic studies. Cloud-scale mapping of CO surface brightness appears to offer a useful path to probe the mean volume density.

§.§.§ Low Efficiency Per Free-fall Time

In Figure <ref>, ϵ_ff varies between 10^-3 and 10^-2. For both treatments of h, the median ϵ_ff is 3.6 × 10^-3 with 0.3 dex scatter at θ = 10″ resolution, and 3.0 × 10^-3 with 0.11 dex scatter at θ = 30″ resolution.

These values of ϵ_ff are low in the absolute sense, with only ∼0.1-1% of the gas converted to stars per collapse time.
They are also low relative to some expectations from theory and previous work on the Milky Way, though they agree with previous indirect extragalactic estimates of ϵ_ff.

Comparison to Estimates at Large Scales: Our ϵ_ff ≈ 0.3% agrees with the calculation by <cit.>, who compared τ_ff for Galactic GMCs to a typical τ_Dep^mol for nearby disk galaxies. In a similar vein, our median ϵ_ff is only a factor of ∼2 lower than the estimate by <cit.> of a Milky Way disk-averaged ϵ_ff ≈ 0.6%.

Observations comparing dense gas, CO, and recent star formation also suggest a low ϵ_ff. <cit.> and <cit.> observed dense gas tracers, CO, and recent star formation in nearby star-forming galaxies. The combination of these three measurements is sensitive to the density of the gas and to the star formation per unit gas; thus it depends on ϵ_ff, though in a model-dependent way. <cit.> and <cit.> argued that a low ϵ_ff ≈ 0.2% appears to be required for their observations to match the turbulent cloud models of <cit.>.

Theoretical Values: Our ∼0.3% is lower than the ϵ_ff ≈ 1% expected at the outer scale of turbulence by <cit.> and <cit.>. Our values are about half of the ϵ_ff ≈ 0.5% noted by <cit.>. They are also lower than the values commonly adopted in numerical simulations of galaxies <cit.> or found by simulations of individual star-forming regions <cit.>. We note that many of these predictions also depend on the virial parameter <cit.>, with the Mach number, magnetic support, and type of turbulence also playing a role <cit.>. In these cases, matching our observations may be primarily an issue of re-tuning these parameters, though some of them are also constrained by our data (see below).

Comparison to Milky Way Results: Our measured ϵ_ff is significantly lower than the mean ϵ_ff ≈ 1.5% found by <cit.> for local clouds, and the median ϵ_ff ≈ 1.8% found by <cit.> based on the <cit.> Milky Way GMC catalog and WMAP-based SFRs. It is also much lower than the median ϵ_ff ≈ 9.5% found by <cit.> for the GMCs associated with the brightest ∼32 star-forming complexes in the Milky Way.

In the case of <cit.>, this discrepancy is expected. Those clouds were selected based on their association with active star formation, and they may have among the highest SFR/M_gas in the Milky Way. Similarly, the cross-matching of <cit.> recovers ∼80% of the ionizing photon flux in their star-forming complexes but only ∼10% of the GMC mass in the <cit.> catalog. Our observations average over the entire life cycle of the clouds present in a large averaging beam, and so can be expected to include the balance of GMC flux. While this has the advantage of better accessing the time-averaged behavior of the gas, it also means that we cannot construct a measurement analogous to those of <cit.> and <cit.>. In the near future, with a 1″ resolution, extinction-robust SFR tracer, we would be able to associate individual clouds with star-forming complexes, and so potentially access the same dynamical evolution of clouds that leads to the high ϵ_ff in the <cit.> and <cit.> results.

Any similar bias towards star-forming clouds in the <cit.> sample is less clear, but the discrepancy between our "top-down" view and the local cloud measurements of <cit.> has also been noted before <cit.>. In detail, <cit.> find a ∼5 times shorter τ_Dep^mol for their clouds than we see for large parts of M51. They also find a ∼4 times shorter τ_ff. One plausible explanation for the discrepancy is that <cit.> focus on the part of a cloud with A_V > 2 mag (≈ 20 M_⊙ pc^-2).
Including a massive extended envelope or diffuse component might bring both τ_Dep^mol and τ_ff into closer agreement with our measured values.

§.§.§ Possible Systematic Effects

We argue that most of the discrepancy with Milky Way results can be understood in terms of the scales sampled and selection effects. However, several systematic uncertainties could affect our measurement, including our star formation rate estimate, the adopted CO-to-H_2 conversion factor, and the line-of-sight depth.

Star Formation Rate: On average, we would need to be underestimating the SFR of M51 by a factor of 5 to bring our measurements into agreement with the local clouds of <cit.>. Meanwhile, in the appendix we show that many likely biases in Σ_SFR would render our TIR-based calculation an overestimate, including any IR cirrus term <cit.>. Note, however, that <cit.> argue that there may be up to a factor of ∼2 offset between the SFR estimates used in local clouds and the tracers used at larger scales, with the local measurements yielding higher values <cit.>. This offset has the right sense, but it would have to reach an even larger magnitude to bring our observations into agreement with the local clouds. Also, note that <cit.> and <cit.> use ionizing photon rates, similar to extragalactic studies.

CO-to-H_2 Conversion Factor: Our adopted α_CO also affects ϵ_ff. We adopt a Galactic conversion factor based on <cit.>, <cit.>, the calculations in the appendix, and B. Groves et al. (in preparation). Other work has claimed a lower conversion factor in M51 <cit.>. Although evidence from dust, multi-line analysis, and cloud virial masses supports our assumption, the systematic uncertainties in any given determination remain substantial <cit.>. For a lower α_CO, we would derive a shorter τ_Dep^mol, a longer τ_ff, and a higher ϵ_ff, with ϵ_ff ∝ α_CO^-1.5. α_CO has a stronger effect on ϵ_ff than on either timescale alone because it affects both τ_ff and τ_Dep^mol. Therefore a conversion factor 0.5 times Galactic would yield ϵ_ff ≈ 0.85%.

Line-of-Sight Depth: The adopted line-of-sight depth, h, affects ϵ_ff. As emphasized above, our adopted h agrees with both cloud property estimates and modeling of the M51 gas disk. To increase our measured ϵ_ff from ∼0.3% to ∼1%, we would need to increase h by an order of magnitude, to ∼1 kpc. Such a scale height disagrees with the measured cloud properties in M51. A more substantial uncertainty in this direction is the role of any "diffuse" CO disk. Up to 50% of the CO emission in M51 has been argued to lie in an extended component <cit.>. The physical nature of such a component remains unclear, but in the limit that it has a large scale height and holds half the gas, ϵ_ff for the compact component could increase by a factor of 2, to ∼0.6%. In fact, we do not expect this effect to be so strong, as the bright, compact structures in the combined PdBI+30m map do hold a large fraction of the flux <cit.>, but tens of percent of the CO might lie in such an extended phase. This topic certainly requires more investigation in both the Milky Way and other galaxies.

To summarize, our ϵ_ff ∼ 0.3-0.36% represents our best estimate, though systematic uncertainties could plausibly raise it by a factor of ∼2. Supporting this conclusion, our calculation agrees within a factor of 2 with previous large-scale calculations. Because of the external perspective and the averaging approach, we argue that our value represents the correct comparison point for any model aiming to predict a population-averaged ϵ_ff. Cloud-by-cloud statistics will need to await future, high-resolution SFR maps. The sketch below collects the simple arithmetic behind these numbers.
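A minimal sketch implementing Equation <ref> and the α_CO scaling discussed above; the fiducial inputs are representative values from this paper:

```python
import numpy as np

def eps_ff(sigma_40pc, tdep_gyr, h_pc=100.0, alpha_co_scale=1.0):
    """Efficiency per free-fall time from the relations above.

    sigma_40pc : cloud-scale surface density in Msun/pc^2 (Galactic alpha_CO)
    tdep_gyr   : molecular depletion time in Gyr
    alpha_co_scale rescales alpha_CO; eps_ff then scales as alpha_co_scale**-1.5.
    """
    sigma = sigma_40pc * alpha_co_scale         # Sigma ∝ alpha_CO
    tdep_myr = tdep_gyr * 1e3 * alpha_co_scale  # tau_dep ∝ alpha_CO
    tff_myr = 81.0 * (sigma / (h_pc / 100.0)) ** -0.5
    return tff_myr / tdep_myr

print(eps_ff(170.0, 1.7))                      # ~0.0037, i.e. ~0.4%
print(eps_ff(170.0, 1.7, alpha_co_scale=0.5))  # ~2.8x higher, i.e. ~1%
```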
§.§ Efficiency Per Free-Fall Time, τ_Dep^mol, and Local Gas Properties

At 370 pc resolution we find 0.3 dex scatter in ϵ_ff, and Figure <ref> shows a comparable scatter in τ_Dep^mol. Beyond only estimating ϵ_ff, we aim to understand how the mean gas properties in the beam and the region of the galaxy under consideration influence these two quantities. That is, how much of this scatter is random and how much results from changes in the local gas properties? Both τ_Dep^mol and ϵ_ff are of interest: τ_Dep^mol captures the SFR per unit gas, and represents our most basic observational metric of whether gas in a part of a galaxy is good or bad at forming stars. ϵ_ff captures the efficiency of star formation relative to direct collapse, with τ_ff representing the most common reference point for current theoretical models.

§.§.§ Surface Density

Figure <ref> shows τ_Dep^mol as a function of cloud-scale surface density. Figure <ref> shows the corresponding plots for ϵ_ff. As discussed above, τ_Dep^mol weakly anti-correlates with <Σ_40pc> over the range ≈100-350 M_⊙ pc^-2 and then increases, with large scatter, towards higher densities. The observed ∼ -0.3 slope relating τ_Dep^mol to <Σ_40pc> is shallower than that expected for a fixed ϵ_ff. As a result, Figure <ref> shows ϵ_ff weakly decreasing with increasing <Σ_40pc> for the fixed-h case. Though the slope in the right panel is shallow, Table <ref> shows that ϵ_ff does correlate with <Σ_40pc> over this range with good significance.

This trend in ϵ_ff is weak compared to the large scatter until <Σ_40pc> > 350 M_⊙ pc^-2, at which point ϵ_ff drops precipitously. The high τ_Dep^mol at high <Σ_40pc> in Figure <ref> correspond to even lower ϵ_ff. Thus the very high surface density parts of M51 <cit.> are significantly less efficient than the rest of the galaxy at forming stars relative to the expectation for direct collapse (τ_ff). The most extreme values in Figure <ref> reach ϵ_ff < 0.1%, though ∼0.2% represents a more typical ϵ_ff at these high <Σ_40pc>.

The right panel adopts our alternate treatment of h (Equation <ref>). The main difference from the left panel is a stronger anti-correlation between ϵ_ff and <Σ_40pc> at intermediate surface densities (see Table <ref>). The left panel represents our best estimate, but the consistency between the two suggests that our qualitative results are robust: there is some anti-correlation between ϵ_ff and <Σ_40pc> at intermediate densities and an even lower ϵ_ff at high <Σ_40pc>.

Note that the axes in Figure <ref> are correlated because τ_ff ∝ <Σ_40pc>^-0.5 in both panels. This built-in correlation is stronger in the right panel because for our dynamical scale height (Equation <ref>) h ∝ <b_40pc>^-1. The statistical uncertainty in <Σ_40pc> is small, ∼5%, and therefore we do not expect correlated noise to affect the results much. The larger issue is that if τ_Dep^mol and τ_ff are unrelated, then ϵ_ff ∝ <Σ_40pc>^-0.5 for fixed h by construction. That is, under the null hypothesis that τ_ff is not a governing timescale for star formation, we expect an anti-correlation in Figure <ref>. This does not invalidate the measurement, but it should be kept in mind when interpreting the plot.

§.§.§ Velocity Dispersion

Surface density and volume density are not the only relevant properties of the gas. In a turbulence-regulated view of star formation, a cloud with a high Mach number has a wider density distribution and includes more dense gas <cit.>. The Mach number also affects the critical density for the onset of star formation <cit.>, with a higher threshold density expected for higher Mach numbers. Specific predictions differ from model to model <cit.>, but most models predict an increase in ϵ_ff at high ℳ.
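To give a sense of scale for the link between line width and Mach number used next, the sketch below converts a 1-D line width to a 3-D Mach number under the standard assumptions of purely turbulent motions and a single gas temperature; the inputs (T = 10 K, mean molecular weight μ = 2.33) are textbook values, not measurements from this paper:

```python
import numpy as np

K_B, M_H = 1.381e-23, 1.673e-27   # SI units

def mach_3d(sigma_1d_kms, T_K=10.0, mu=2.33):
    """3-D Mach number implied by a purely turbulent 1-D line width."""
    c_s = np.sqrt(K_B * T_K / (mu * M_H)) / 1e3   # sound speed, km/s (~0.19 at 10 K)
    return np.sqrt(3.0) * sigma_1d_kms / c_s

print(mach_3d(10.0))   # ~90 for a 10 km/s line width
```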
If the temperature does not vary strongly across M51, and if the line widths that we observe are primarily turbulent in nature, then <σ_40pc> should reflect the turbulent Mach number. In this case, if the turbulent models are right, then we would expect ϵ_ff to correlate with <σ_40pc>. We test these expectations in the left panels of Figure <ref>, plotting τ_Dep^mol (top) and ϵ_ff (middle and bottom) as a function of <σ_40pc>. We do not observe a significant correlation between <σ_40pc> and τ_Dep^mol at intermediate values of <σ_40pc> ≈ 6-12 km s^-1. At high values of <σ_40pc>, we tend to find higher τ_Dep^mol. That is, where <σ_40pc> appears high, gas appears inefficient at forming stars.

Normalizing by the free-fall time, the middle and bottom left panels of Figure <ref> show a steady decrease in ϵ_ff with increasing <σ_40pc>, with the decline becoming steeper at high <σ_40pc>. The trend remains qualitatively the same for both treatments of line-of-sight depth. This anti-correlation is unexpected in turbulent theories. It suggests that the primary impact of the measured line width, whatever its origin, is to offer increased support against collapse rather than to increase the abundance of dense gas.

Based on modeling the velocity field, <cit.> and <cit.> suggested that the line widths in M51 include substantial contributions from unresolved bulk motions. In this case, <σ_40pc> may instead indicate the strength of shearing or streaming motions, which can play a key role in suppressing star formation <cit.>. This seems very likely to explain the long depletion times at high <σ_40pc> (≳ 12 km s^-1). At lower <σ_40pc> the picture is less clear. M51 obeys the standard GMC scaling relations <cit.>, including when analyzed beam-by-beam <cit.>, so we do expect that over most of the galaxy <σ_40pc> reflects the turbulent line width to a reasonable degree (though see S. Meidt et al., submitted). In this case, Figure <ref> presents a result not expected in turbulent theory: that a high line width implies a low efficiency per free-fall time. Making similar measurements in other galaxies will help illuminate whether this effect is general or indeed driven by the large-scale dynamics of M51.

§.§.§ Dynamical State

Neither the surface density nor the line width exists in a vacuum. Instead, they correlate <cit.>, so that the high τ_Dep^mol, high <σ_40pc> points in Figure <ref> are also the high surface density points seen above. Their balance, Σ/σ^2, reflects the relative strength of the gravitational potential and the kinetic energy of the gas. In almost any view of star formation, a higher degree of self-gravity will render gas better at forming stars. In turbulent theories, this manifests as a dependence of ϵ_ff on the virial parameter <cit.>, or the closely related ratio of free-fall time to crossing time <cit.>.

We capture the balance of gravitational potential and kinetic energy via <b_40pc> ≡ <Σ_40pc>/<σ_40pc>^2 ∝ UE/KE ∝ α_vir^-1. When <b_40pc> is high, the surface density is high relative to the line width and the gas is more tightly bound; when <b_40pc> is low, the gas has a large kinetic energy compared to its inferred potential. The right panels in Figure <ref> show τ_Dep^mol (top) and ϵ_ff (middle and bottom) as a function of <b_40pc>. We observe a significant anti-correlation between τ_Dep^mol and <b_40pc>. The sense of this anti-correlation is that more bound gas (high b) — equivalently, gas with a low ratio of τ_ff to τ_cross — forms stars at a high rate per unit gas mass (low τ_Dep^mol). The strength of the anti-correlation is striking given the weak and inconsistent relationships between τ_Dep^mol and <Σ_40pc> or <σ_40pc>. A fit to the data treating <b_40pc> as the independent variable and using the form τ_Dep^mol ∝ <b_40pc>^-α gives α = 0.8-1.0, with the range depending moderately on the resolution and the approach used to determine the best-fitting relationship (see the sketch below).
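A minimal example of the kind of log-space fit quoted above, assuming simple ordinary least squares with <b_40pc> as the independent variable; since the result depends moderately on the fitting approach, this is illustrative only:

```python
import numpy as np

def powerlaw_index(b, tdep):
    """OLS slope of log10(tau_dep) against log10(b); expected ~ -0.8 to -1.0."""
    ok = np.isfinite(b) & np.isfinite(tdep) & (b > 0) & (tdep > 0)
    slope, _ = np.polyfit(np.log10(b[ok]), np.log10(tdep[ok]), 1)
    return slope
```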
<b_40pc> Probably Does Reflect the Dynamical State: As discussed above, b can be interpreted in two ways. If the line-of-sight depth remains constant, then b traces the dynamical state of the gas, b ∝ α_vir^-1. Alternatively, if the dynamical state of the gas remains fixed, e.g., if all gas is marginally bound or virialized, then b indicates the line-of-sight depth, with h ∝ b^-1. Figure <ref> offers a strong, if indirect, argument that variations in <b_40pc> do mainly reflect changes in the dynamical state. Compare the clear, steep anti-correlation between τ_Dep^mol and b to the weak relation between τ_Dep^mol and <Σ_40pc> seen in Figure <ref>. If the density of the gas were the only variable relevant to star formation, then we would expect the two figures to show similar relations, because ρ ∝ Σ/h. Instead, only b shows a strong anti-correlation with τ_Dep^mol. Moreover, the slope of the anti-correlation is ∼ -0.8 to -1.0, steeper than the slope of -0.5 expected from only τ_Dep^mol ∝ ρ^-0.5.

<b_40pc> and ϵ_ff: The importance of b ∼ α_vir^-1 has been highlighted by <cit.> and others <cit.>. Gas with a lower virial parameter and a higher UE/KE, or b, is expected to be better at forming stars. Our result broadly supports these expectations.

Turbulent theories often predict an impact of α_vir on the efficiency per free-fall time, however, not on the gas depletion time. The middle and bottom right panels of Figure <ref> show ϵ_ff as a function of <b_40pc>. There, the impact of b is less clear. Formally, we find a weak but significant positive correlation if we hold h fixed, so that ϵ_ff is higher at higher <b_40pc>. But the figure shows that this is a modest effect, and the trend reverses if we allow h to vary. <cit.> predict ϵ_ff ≈ 0.5 exp(-1.6 τ_ff/τ_cross) ∝ exp(-1.6 b^-0.5). We show a modified version of this prediction as a black line in the figures (a sketch of this curve follows below). We take α_vir = 5.5/b, appropriate for clouds with R ∼ 60 pc, and set the normalization to pass through our data. Similar to the results of <cit.> in the Milky Way, the <cit.> prediction does not seem to capture the full set of physics at play in our data, though we show in the next section that it offers a better match to the data for individual dynamical regions.
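A sketch of how such a prediction curve can be evaluated. The α_vir = 5.5/b mapping follows the text above, and the identification τ_ff/τ_cross ≈ √α_vir holds only up to an order-unity factor, which we absorb into the free normalization:

```python
import numpy as np

def eps_ff_turbulent(b, norm=1.0):
    """Prediction eps_ff ~ 0.5 exp(-1.6 tau_ff/tau_cross), recast in b.

    Uses alpha_vir = 5.5 / b (clouds with R ~ 60 pc) and the approximation
    tau_ff/tau_cross ~ sqrt(alpha_vir); `norm` is tuned to pass through
    the data, as described in the text.
    """
    alpha_vir = 5.5 / np.asarray(b, dtype=float)
    return norm * 0.5 * np.exp(-1.6 * np.sqrt(alpha_vir))
```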
§.§ Relation to Galaxy Structure

M51 exhibits strong spiral and radial structure. Large-scale gas flows have been linked to the ability of M51's gas to form stars <cit.> and to the suppression of star formation by streaming motions <cit.>. Figure <ref> shows how τ_Dep^mol and ϵ_ff vary with <Σ_40pc> and <b_40pc> region by region. Here, we color the points according to the dynamical region from which most of the CO emission in the beam originates. We show the two-dimensional distributions of <Σ_40pc>, <σ_40pc>, <b_40pc>, Σ_mol, Σ_SFR, and (τ_Dep^mol)^-1 in Figure <ref>.

Figure <ref> shows τ_Dep^mol as a function of <Σ_40pc> for arm (green), interarm (purple), and central (blue) parts of the galaxy. As previously shown by <cit.>, <cit.>, and <cit.>, the cloud-scale surface density increases dramatically moving from the interarm to the arm regions. The center of the galaxy also exhibits high gas surface densities.

Although the arms concentrate molecular gas, we do not observe a decrease in τ_Dep^mol moving from the interarm to the arm regions. Combining the arm and interarm regions, τ_Dep^mol remains approximately constant as a function of surface density until it rises at the highest values of <Σ_40pc>. This is the apparent suppression of star formation — despite high surface densities — observed in the arms by <cit.>. These observations are also consistent with the observation by <cit.> of a weak contrast in τ_Dep^mol between arm and interarm regions in M51. The inner part of M51 has high <Σ_40pc>, similar to that found in the spiral arms. Here, however, the high surface densities are accompanied by low τ_Dep^mol.

As a result, in the top left panel of Figure <ref>, the points at high <Σ_40pc> separate in τ_Dep^mol according to the region from which they arise. As Figure <ref> shows, many of the lowest τ_Dep^mol values arise from the star-forming ring of the galaxy. These correspond to the high Σ_SFR points in the scaling relations in Figure <ref>. The few points at the galaxy center, in which AGN contamination <cit.> and beam smearing (e.g., S. Meidt et al., in preparation) contribute most, have little effect on the overall trend.

The top right panel of Figure <ref> shows that although the parts of the galaxy separate in τ_Dep^mol vs. <Σ_40pc> space, they overlap much better when τ_Dep^mol is plotted as a function of <b_40pc>. That is, the long depletion times observed at high <Σ_40pc> in the arms appear to be there because that gas has low <b_40pc>, i.e., it appears weakly gravitationally bound. We observe an anti-correlation between τ_Dep^mol and b in both the arm and interarm regions. The central region, which has the lowest τ_Dep^mol, also has the strongest self-gravity, traced by b.

We do observe an offset between the median τ_Dep^mol in the arm and interarm regions at fixed b. At the same <b_40pc>, points in the arms have typically 0.13 dex (∼35%) longer τ_Dep^mol. This could reflect evolutionary effects on scales larger than our averaging beam. For example, <cit.> show the formation of stars along spurs displaced downstream from the arms. Or it could be driven by additional suppression of star formation in the arms by dynamical effects not captured by b <cit.>. Alternatively, it could reflect a lower filling fraction in the interarm region, so that beam dilution affects the interarm points more, lowering b relative to its true value. It could also reflect a low-level bias in our SFR tracers, one that affects the lower-magnitude Σ_SFR in the interarm region more than in the arms.

When recast from τ_Dep^mol to ϵ_ff as a function of <Σ_40pc> (bottom row), the galaxy again separates. Here the arms appear as outliers. They show low ϵ_ff, significantly lower than the interarm region or the center. That is, given the high surface densities in the arms, we would expect collapse to proceed quickly. But the observed τ_Dep^mol does not support this expectation. The contrast between these low ϵ_ff in the arms and the higher values in the interarm regions drives the anti-correlation between ϵ_ff and <Σ_40pc> observed across the whole galaxy.

The bottom right panel of Figure <ref> shows ϵ_ff as a function of <b_40pc> region by region. When we considered the whole galaxy (Figure <ref>), only a weak correlation related ϵ_ff to <b_40pc>. Here the individual regions show a stronger positive correlation between ϵ_ff and <b_40pc>. There is some indication that at least the interarm regions match the sense of the <cit.> prediction (the black line). The picture for the arm and center regions is less clear. Together they may show a weak positive correlation between ϵ_ff and <b_40pc>, but it is not clear that they should be grouped together. The offset between the interarm and arm regions at fixed <b_40pc> appears even stronger in ϵ_ff than in τ_Dep^mol.
At fixed b, interarm regions have typically ∼ 0.24 dex, almost a factor of two, higher ϵ_ ff than arm regions. Together, Figures <ref> and <ref> paint a picture of M51 that qualitatively resembles that seen in many barred galaxies: despite the high surface densities in the inner dynamical features (here the arms), gas in this region appears stabilized against collapse. But flows along the arms feed gas condensations (the star-forming ring) in the inner regions <cit.>, where star formation activity does proceed at a high level in both an absolute and normalized sense. Despite our averaging over moderately large (370 pc) areas, timescale effects may also be at play. The τ_ Dep^ mol map in Figure <ref> shows significant azimuthal structure, and as shown by <cit.>, star formation tends to occur in spur-like structures downstream of the arms. We refer the reader to the extensive discussions in <cit.>, <cit.>, <cit.>, and <cit.>, and references therein. § DISCUSSION AND SUMMARY We have used the PAWS survey <cit.> to compare cloud-scale ISM structure to the locally-averaged ability of gas to form stars across the inner part of M51. We compare CO emission, tracing the molecular gas mass, to infrared emission, tracing the recent SFR, within each 10″≈ 370 pc and 30″≈ 1.1 kpc beam. Then, we use the method described by <cit.> to calculate the mass-weighted 40-pc surface density (Σ), line width (σ), and self-gravity (b ≡Σ/σ^2 ∝α_ vir^-1) in each larger beam. This is similar to recording the mass-weighted mean GMC properties in each beam, but these intensity-based measurements are simpler and require fewer assumptions than estimating cloud properties. Still they capture the key physics in the Larson scaling relations well <cit.>. We adopt simple translations between observed and physical quantities, so that our key results can be easily phrased in either observable or physical terms. Comparing CO and IR at large scales, we find: * At large scales, our CO and IR measurements qualitatively match previous studies of SFR-gas scaling relations in M51. The sublinear behavior noted by <cit.> at large radii, the superlinear behavior noted by <cit.> in the inner galaxy, and the wide range of depletion times at high gas surface density found by <cit.> are all evident in Figure <ref>. At 370 pc resolution, we still observe appreciable (∼ 0.3 dex) scatter in the CO-to-IR ratio, rising to ∼ 0.4 dex at the highest surface densities. We compare the measured CO-to-IR ratio, expressed as a molecular gas depletion time, τ_ Dep^ mol, to the small scale gas structure measured from PAWS to investigate if and how local gas structure drives depletion time variations. The most basic expectation, e.g., following <cit.>, is that variations in τ_ Dep^ mol result from variations in the cloud-scale density, which sets the local gravitational free fall time, τ_ ff. To test this, we compare τ_ Dep^ mol to Σ, the mean cloud scale surface density in the beam and our best observational tracer of the gas density. We find that * The CO-to-IR ratio, tracing τ_ Dep^ mol, shows a weak anti-correlation with Σ over the range Σ≈ 100-350 M_⊙ pc^-2 (Figure <ref>). Over this range, denser gas does appear moderately better at forming stars. The slope of this anti-correlation, ∼ -0.25 to -0.35, is shallower than what is naively expected for a fixed efficiency per free-fall time. With an estimate of the line of sight depth, h, our Σ can be translated to a density, ρ, and then to a gravitational free fall time, τ_ ff.
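As a minimal sketch of this conversion chain (the function names and the slab geometry ρ = Σ/h are our illustrative assumptions; the prefactor is the standard free-fall time definition):

```python
import numpy as np

G = 4.301e-3  # gravitational constant [pc Msun^-1 (km/s)^2]

def tau_ff_myr(Sigma, h=100.0):
    """Free-fall time [Myr] from a cloud-scale surface density Sigma
    [Msun pc^-2], assuming the gas fills a line-of-sight depth h [pc]."""
    rho = Sigma / h                                  # mean volume density [Msun pc^-3]
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho))   # [pc s km^-1]
    return 0.978 * t_ff                              # 1 pc/(km/s) = 0.978 Myr

def eps_ff(tau_dep_myr, Sigma, h=100.0):
    """Efficiency per free-fall time from an observed depletion time [Myr]."""
    return tau_ff_myr(Sigma, h) / tau_dep_myr
```

As an illustration, Σ ∼ 100 M_⊙ pc^-2 and h = 100 pc give τ_ ff≈ 8 Myr, so a depletion time of ∼ 2 Gyr implies ϵ_ ff≈ 0.4%, of the same order as the values quoted below.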
Contrasting τ_ Dep^ mol and τ_ ff yields an estimate of the efficiency of star formation per free fall time, a central quantity for many recent theories of star formation. We consider what line of sight depth to use based on both recent GMC catalogs and studies of the disk thickness in M51 and the Milky Way. * In recent GMC catalogs targeting the Milky Way <cit.> and M51 <cit.> the volume density and surface density of clouds correlate well (Figure <ref>). In these catalogs, most of the CO emission arises from clouds with R ∼ 30-100 pc. The observable cloud scale surface density does appear to be a reasonable proxy for the local mean volume density, though more work is needed on this topic. We adopt both a fiducial depth h=100 pc (our best estimate) and a “dynamical” depth calculated from holding the virial parameter constant. For both cases, we calculate the distribution of the efficiency per free fall time, ϵ_ ff, across the PAWS field. * At both of our working resolutions, ϵ_ ff estimated in this way is ∼ 0.3-0.36%, with ∼ 0.3 dex scatter for a 370 pc averaging beam, and ∼ 0.1 dex scatter for a 1.1 kpc averaging beam (Figure <ref>). This value agrees in broad terms with what one would infer based on comparing average GMC properties in the Milky Way and nearby galaxies <cit.> to large-scale measurements of the molecular gas depletion time <cit.>. It also matches the apparent requirements for turbulent models to match observations of dense gas, IR, and CO in nearby galaxies <cit.>. However, our inferred ϵ_ ff is much lower than values measured for the nearest molecular clouds by <cit.>, <cit.>, or <cit.> <cit.>, as well as for molecular clouds orbiting the Galactic Center by <cit.>. It is also much lower than the values commonly adopted in analytic theories and numerical simulations <cit.>. The main drivers for the mismatch with <cit.> and <cit.> appear to be sampling effects. Our method averages over all evolutionary states to calculate a regional mean τ_ Dep^ mol, while their work focuses on GMCs associated with peaks of recent star formation. The discrepancy with local clouds appears more subtle, but may be an issue of matching scales; the <cit.> measurements focus on the A_V > 2 mag material in local clouds, perhaps leading to the lower τ_ Dep^ mol and shorter τ_ ff in these clouds than are found at larger scales. The best ways to address these discrepancies appear to be high resolution extinction-robust estimates of the SFR, to allow experiments exactly matched to those of <cit.> and <cit.>, and high resolution (∼ few pc resolution) CO imaging of a large area (∼ kpc) in a nearby galaxy, to investigate the superstructure around analogs to the <cit.> clouds. Beyond only the value of ϵ_ ff, we investigate how τ_ Dep^ mol and ϵ_ ff depend on the local cloud population and location in the galaxy. For τ_ Dep^ mol, we find: * At high Σ > 350 M_⊙ pc^-2, τ_ Dep^ mol increases with increasing Σ. This leads to the unexpected result, pointed out by <cit.>, that some of the highest surface density regions of M51 show relatively weak star formation. These regions lie in the spiral arms and also have high σ. Their long τ_ Dep^ mol is explained, in our analysis, by the fact that this gas appears more weakly self-gravitating (lower b) than other material in M51 (Figures <ref> and <ref>). * Instead of either surface density or line width alone, τ_ Dep^ mol appears most closely related to the ratio b ≡Σ / σ^2 (Figure <ref>).
Within a length scale (the line-of-sight depth through the disk), b traces the strength of self-gravity, b ∝α_ vir^-1∝ UE/KE ∝τ_ cross^2/τ_ ff^2. Thus, gas that appears more gravitationally bound also appears better at forming stars. The power law slope relating τ_ Dep^ mol to b is τ_ Dep^ mol∝ b^β with β = -0.8 to -1.0. * All three regions of the galaxy (arm, interarm, and center) line up in τ_ Dep^ mol-b space, with only a modest, 0.13 dex (∼ 35%) offset (Figure <ref>). This offset has the sense that the arm region has a modestly higher τ_ Dep^ mol (CO-to-IR) than the other regions at fixed b. The difference in our results comparing τ_ Dep^ mol to Σ and b suggests that b does indeed trace the dynamical state. If both traced density, and if density represented the only important variable, then we would expect τ_ Dep^ mol to depend on Σ and b in the same way. Instead, τ_ Dep^ mol shows a steeper, more significant relation to b than to Σ. This apparent dependence of star formation on the dynamical state of the gas, or equivalently the virial parameter, echoes findings for the Milky Way. There, the largest reservoir of high-density gas in the Galaxy is also currently the least efficient at forming stars. This phenomenon is thought to be caused by shear and the supervirial nature of the clouds <cit.>. Similarly, though our formalisms differ, our findings qualitatively agree with <cit.>, who argued that the dynamical state of the gas in M51's arms, as observed by PAWS, suppresses star formation. Our results also agree with theoretical expectations in broad brush <cit.>. In detail, however, those models often make predictions about the efficiency of star formation per gravitational free fall time. We compare ϵ_ ff to the local cloud populations and find: * In general ϵ_ ff appears anti-correlated with Σ and σ (Figure <ref>). The anti-correlation with Σ is weak over the range Σ≈ 100-350 M_⊙ pc^-2, but becomes stronger at high Σ. The anti-correlation between ϵ_ ff and σ appears strong across the full range of σ and becomes stronger at high dispersions. In general, a higher surface density and a higher line width both appear to imply lower efficiency per free-fall time in M51 (Figures <ref> and <ref>). * We find a weak positive correlation between ϵ_ ff and b for fixed line-of-sight depth, so that gas with higher apparent self-gravity appears to have a higher efficiency per free-fall time. Considering the whole galaxy, the strength of this correlation is weaker than the dependence predicted by the turbulent star formation law of <cit.> (Figure <ref>). * ϵ_ ff appears to correlate better with b ∝α_ vir^-1 within an individual dynamical region, particularly within the interarm region (Figure <ref>). At fixed b, we find ϵ_ ff to be ∼ 0.24 dex lower in the arm regions than the interarm regions, on average. Thus, relative to the expected collapse time, star formation is suppressed in the arms relative to the interarms by almost a factor of two at fixed virial parameter (Figure <ref>). Turbulent star formation models tend to predict a positive correlation between ϵ_ ff and the Mach number, related to our observed line width. They also tend to predict a strong dependence of ϵ_ ff on b ∝α_ vir^-1. Several theories have invoked an approximately fixed ϵ_ ff. Thus, in detail our observations do not show outstanding agreement with current models. However, those models include a number of additional dependencies, including on factors such as the magnetic field and the character of the turbulence <cit.>. Our measurements also represent population, and so time, averages by design.
So any evolution over a dynamical cloud lifetime <cit.> is averaged over in our measurements. To facilitate comparison with such models, we include all of our measurements in Table <ref>. We emphasize that our intensity-based approach is easy to replicate, with no need for cloud-finding or other complex image processing. Indeed, numerical simulations can directly match our line-of-sight approach and so marginalize over some of the geometrical uncertainties. Our approach to physical parameter estimation is simple and straightforward to treat via forward modeling. A main goal of this paper is to provide these measurements as an extragalactic benchmark for theories of star formation that consider cloud-scale gas structure. Finally, as discussed in the text and appendix, there are systematic uncertainties regarding the CO-to-H_2 conversion factor, star formation rate, and line of sight geometry. We motivate our choices in the text and appendix and test the impact of our assumptions, but these issues are standard in this field and should be borne in mind when considering the results of the paper. We also anticipate refining the technical details of our weighted averaging methods over the next year, to better treat ensembles of line profiles and to de-emphasize the impact of an extended averaging beam <cit.>. §.§ Next Steps Within the next year, it should be possible to conduct an analysis similar to the one we present here for M51 for a diverse sample of local galaxies. These include the other five galaxies treated by <cit.> and targets of new ALMA mapping surveys that achieve cloud-scale resolution across ∼ 10 star-forming galaxies. Such tests will establish: 1) if our observed very low ϵ_ ff is universal, 2) if the apparent role of self-gravity traced by b is unique to M51 or a general feature, and 3) whether the gravitational free-fall time estimated from high-resolution imaging indeed appears to be a controlling parameter. Combining these cloud-scale measurements with density-sensitive spectroscopy <cit.> will also help connect structural analysis at the GMC-scale to the internal density structure of clouds, which plays a key role in their ability to form stars. Our structural analysis follows the “beamwise” approach described in <cit.>, but a large literature exists estimating GMC properties for nearby galaxies <cit.>. Following similar studies in the Milky Way <cit.>, these measurements can be compared to τ_ Dep^ mol in a similar way to what we do here. A. Schruba et al. (in preparation) present such an analysis for a large collection of galaxies with GMC property measurements. Finally, two major observational steps could address the tension between our measurements and those of the Milky Way. First, by observing CO from a large part of a star-forming galaxy at very high spatial resolution, one could attempt to mimic the Milky Way observations with full knowledge of the surrounding medium. Second, pairing extinction-robust star formation rate tracers with high resolution gas mapping would allow the kind of population studies carried out by <cit.>. The need to leverage low resolution IR maps to estimate the star formation rate limits current efforts to consider population averages at few hundred pc scales. We thank the anonymous referee for a thoughtful and constructive report. This work is based on observations carried out with the IRAM NOEMA Interferometer and the IRAM 30-m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). This work was carried out as part of the PHANGS collaboration (formerly SFNG) and the PAWS collaboration.
The work of AKL, MG, and JS is partially supported by the National Science Foundation under Grants No. 1615105, 1615109, and 1653300. ES acknowledges financial support to the DAGAL network from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement number PITN-GA-2011-289313. ES acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 694343). AH acknowledges support from the Centre National d'Etudes Spatiales (CNES). JMDK and MC gratefully acknowledge financial support in the form of an Emmy Noether Research Group from the Deutsche Forschungsgemeinschaft (DFG), grant number KR4801/1-1 (PI: Kruijssen). JMDK acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme via the ERC Starting Grant MUSTANG (grant agreement number 714907, PI Kruijssen). GB is supported by CONICYT/FONDECYT, Programa de Iniciacion, Folio 11150220. AU acknowledges support from Spanish MINECO grants AYA2012-32295 and FIS2012-32096. FB acknowledges support from DFG grant BI1546/1-1. SGB acknowledges economic support from grants ESP2015-68964-P and AYA2016-76682-C3-2-P. Our results depend on estimates of the recent star formation rate and molecular gas mass. We adopt simple approaches to each, utilizing the total infrared (TIR) emission as a tracer of the star formation rate and adopting a Galactic α_ CO = 4.35 M_⊙ pc^-2 (K km s^-1)^-1 to convert CO (1-0) intensity into molecular gas mass surface density. § OTHER APPROACHES TO THE STAR FORMATION RATE We use TIR intensity as our tracer of SFR. At θ = 30″, we calculate Σ_ TIR using four bands and the SED-fitting based prescription of <cit.>. At θ=10″, we use a linear translation of I_70 into Σ_ TIR, with the coefficient derived from comparing 70 μm intensity to TIR intensity at 30″ resolution. We then translate Σ_ TIR to Σ_ SFR following <cit.>. The main impact of Σ_ SFR in this paper is on the estimate of τ_ Dep^ mol. To assess the impact of our choice of estimator, Figure <ref> shows the effect on τ_ Dep^ mol of replacing our adopted TIR-based SFR with estimates using a different approach. We only have access to all of the required data at θ=30″, so this plot shows only results for that resolution over the PAWS field, our area of interest. First, we show results using only 70 μm emission and the formulae quoted in Sections <ref> and <ref>. This is our approach at θ=10″, where Herschel's 70 μm map is our only available IR band. We also show results using only 24 μm, using Hα assuming one magnitude of extinction, hybridizing Hα and 24 μm emission, and combining Hα with 24 μm after subtracting a “cirrus” (non star-forming) component from the 24 μm emission. Except for the 70 μm emission, the prescriptions used for the other tracers are taken from <cit.>, which builds heavily on <cit.> and <cit.>. We use the gas based cirrus prediction, which assumes a typical dust-to-gas ratio and that all of the gas is illuminated by a radiation field 0.6 times that found in the Solar Neighborhood. The final panel shows the result of quadrupling the radiation field used in the cirrus estimate. Including an FUV-based hybrid <cit.> would not add much to the analysis given the heavily extinguished nature of the region in question <cit.>.
Each panel quotes the median and scatter in the logarithm of the ratio of τ_ Dep^ mol estimated using the alternative tracer to that used in the main body of the paper. The figure shows that the IR-based estimates agree well with one another and yield higher Σ_ SFR than estimates using Hα. Indeed, the main result of changing the SFR tracer is usually to lower Σ_ SFR, thereby increasing τ_ Dep^ mol. The magnitude of the shift is a factor of ∼ 2 if only Hα with 1 mag of extinction is used or a very large cirrus component is adopted (which also amounts to only weakly correcting Hα for extinction). A main result of our analysis is a low ϵ_ ff. Lower Σ_ SFR and higher τ_ Dep^ mol would drive ϵ_ ff to even lower values. In detail, given the gas-rich, dusty nature of the inner few kpc of M51, we do not necessarily expect these lower Σ_ SFR estimates to be more correct, but if they are then it would not change our qualitative conclusions. Note that data at higher Σ tend to show a larger discrepancy between IR-based SFR estimates and Hα with little or no correction. The sense of this trend is that most alternatives to the IR-based Σ_ SFR would yield longer τ_ Dep^ mol at higher Σ. The result would be an even lower ϵ_ ff at high Σ than we already observe. That is, none of the alternatives in Figure <ref> push the data towards a more nearly fixed ϵ_ ff. More, recall that Figure <ref> shows that our IR based approach yields measurements that overlap the Paα+24μm-based estimates from <cit.>. They studied selected apertures, while we sample the whole inner disk, so there are methodological differences. But the overall magnitude of both the gas and SFR estimates agree well. Finally, note from the first panel that τ_ Dep^ mol estimated using only 70 μm emission and our adopted scaling agrees very well with that estimated using the four band <cit.> fit. That is, the approach that we use at θ = 30″ agrees well with that which we are forced to use at θ = 10″. The median ratio agrees by construction, but the small scatter gives us confidence in our use of 70 μm emission and our application of Equation <ref> <cit.>. § THE CO-TO-H_2 CONVERSION FACTOR We translate CO (1-0) emission into molecular mass assuming a fixed α_ CO = 4.35 M_⊙ pc^-2 (K km s^-1)^-1. This value is supported by multi-line <cit.> and cloud virial mass <cit.> studies. <cit.> provide a thorough summary of the literature on α_ CO in M51, which has so far yielded results that break down into either an approximately Galactic conversion factor or values ∼ 0.5 times Galactic. If the lower α_ CO holds, there would be less molecular gas mass than we infer in the main paper, and a shorter τ_ Dep^ mol. This would increase ϵ_ ff by (α_ CO / α_ MW)^-1.5, because the conversion factor also affects the density and so τ_ ff∝ 1/ √(ρ). Figure <ref> shows that an approximately Galactic conversion factor is also supported by the dust-based approach of <cit.> and <cit.>. We compare Σ_ dust, the dust mass surface density estimated from Herschel multi-band data, to the measured CO intensity and the Hi column density from VLA imaging. The CO map is the PAWS single dish map, and the Hi map comes from THINGS <cit.>. The dust map is the result of fitting the <cit.> models to the Herschel and Spitzer photometry, following <cit.> and modified by the correction to dust mass suggested in <cit.>. For this application, we assume that the dust-to-gas ratio is constant over the range r_ gal = 1-8 kpc. The approximately constant metallicity of the galaxy supports this assumption <cit.>.
The figure shows that α_ CO≈α_ MW = 4.35 M_⊙ pc^-2 (K km s^-1)^-1 yields an approximately flat dust-to-gas ratio as a function of radius. A lower conversion factor, as suggested by <cit.>, yields a strong gradient in the dust-to-gas ratio as a function of radius. The right panel shows the formal results of minimizing the scatter in the DGR while varying α_ CO, treating either each ring (black) or each θ = 30″ line of sight (gray) as independent measurements. Both approaches yield a best fit α_ CO≈ 4.5-5.0 M_⊙ pc^-2 (K km s^-1)^-1. Uncertainties apply to this dust-based approach, including phase- or density-dependent depletion <cit.>, emissivity variations <cit.>, and the presence of sufficient dynamic range in the H_2/Hi ratio to achieve a good fit <cit.>. The interplay of these uncertainties with α_ CO variations is discussed at length in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, and is beyond the scope of this paper. The key point, for us, is that the best currently available dust and gas maps suggest that our adopted α_ CO≈α_ MW represents a reasonable choice.
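For concreteness, a minimal sketch of this scatter-minimization step (variable and function names are ours; the actual fit treats rings or individual 30″ lines of sight as the samples, following the dust-based formalism cited above):

```python
import numpy as np

def best_alpha_co(Sigma_dust, I_co, Sigma_HI,
                  grid=np.linspace(1.0, 10.0, 181)):
    """Return the alpha_CO [Msun pc^-2 (K km s^-1)^-1] that minimizes the
    scatter of log10(DGR) over the supplied rings or lines of sight."""
    scatter = [np.std(np.log10(Sigma_dust / (Sigma_HI + a * I_co)))
               for a in grid]
    return grid[int(np.argmin(scatter))]
```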
http://arxiv.org/abs/1706.08540v1
{ "authors": [ "Adam K. Leroy", "Eva Schinnerer", "Annie Hughes", "J. M. Diederik Kruijssen", "Sharon Meidt", "Andreas Schruba", "Jiayi Sun", "Frank Bigiel", "Gonzalo Aniano", "Guillermo A. Blanc", "Alberto Bolatto", "Mélanie Chevance", "Dario Colombo", "Molly Gallagher", "Santiago Garcia-Burillo", "Carsten Kramer", "Miguel Querejeta", "Jerome Pety", "Todd A. Thompson", "Antonio Usero" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170626180035", "title": "Cloud Scale ISM Structure and Star Formation in M51" }
Department of Communication Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel. Email: [email protected], [email protected]. Scheduling and managing queues with bounded buffers are among the most fundamental problems in computer networking. Traditionally, it is often assumed that all the properties of each packet are known immediately upon arrival. However, as traffic becomes increasingly heterogeneous and complex, such assumptions are in many cases invalid. In particular, in various scenarios information about packet characteristics becomes available only after the packet has undergone some initial processing. In this work, we study the problem of managing queues with limited knowledge. We start by showing lower bounds on the competitive ratio of any algorithm in such settings. The techniques used in our proofs, which make use of a carefully crafted Markov process, may be of independent interest, and can potentially be used in other similar settings as well. Next, we use the insight obtained from these bounds to identify several algorithmic concepts appropriate for the problem, and use these guidelines to design a concrete algorithmic framework. We analyze the performance of our proposed algorithm, and further show how it can be implemented in various settings, which differ by the type and nature of the unknown information. We further validate our results and algorithmic approach by an extensive simulation study that provides further insights as to our algorithmic design principles in face of limited knowledge. Keywords: buffer management, queueing, scheduling, uncertainty, limited knowledge, competitive analysis, online algorithms. § INTRODUCTION An earlier version of this work was published in <cit.>. This work adds full proofs of all theorems, stronger lower bounds, an improved competitive algorithm, and an extended simulation study. Some of the most basic tasks in computer networks involve scheduling and managing queues equipped with finite buffers, where the primary goal in such settings is maximizing the throughput of the system. The always-increasing heterogeneity and complexity of network traffic makes the challenge of maximizing the throughput ever harder, as the packet processing required in such queues spans a plethora of tasks, including various forms of DPI, MPLS and VLAN tagging, encryption / decryption, compression / decompression, and more. The most prevalent assumption in the research studying these problems is that the various properties of any packet – e.g., its QoS characteristic, its required processing, its deadline – are known upon its arrival. However, this assumption is in many cases unrealistic. For instance, when a packet is recursively encapsulated a few times by MPLS, PBB, 802.1Q, GRE or IPSec, it is hard to determine in advance the total number of processing cycles that such a packet would require <cit.>. Furthermore, the QoS features of a packet are commonly determined by its flow ID, which is in many cases known only after parsing <cit.>. In data center network architectures such as PortLand <cit.>, ingress switches query a cache for an application-to-location address resolution. A cache miss, which is unpredictable by nature, results in forwarding of the packet to the switch software or to a central controller, which performs a few additional processing cycles before the packet can be transmitted.
Similarly, in the realm of Software Defined Networks, ingress switches query a cache for obtaining rules for a packet <cit.>, which may also depend on priorities <cit.>. In such a case, a cache miss results in additional processing until the rules are retrieved and the profit from the packet is known. In spite of this increased heterogeneity, and the fact that the processing requirement of a packet might not be known in advance, these characteristics usually become known once some initial processing is performed. This behavior is common in many of the applications just described. Furthermore, for traffic corresponding to the same flow, it is common for characteristics to be unknown when the first few packets of the flow arrive at a network element; once these properties are unraveled, they become known for all subsequent packets of this flow. It therefore follows that only part of the arriving packets has unknown characteristics upon arrival, which become known after parsing. In this work we address such scenarios where the characteristics of some arriving traffic are unknown upon arrival, and are only revealed when a packet has undergone some initial processing (parsing), “causing the mist to clear”. We model and analyze the performance of algorithms in such settings, and in particular we develop online scheduling and buffer management algorithms for the problem of maximizing the profit obtained from delivered packets, and provide guarantees on their expected performance using competitive analysis. We focus on the general case of heterogeneous processing requirements (work) and heterogeneous profits <cit.>. We assume priority queueing, where the exact priorities depend on the specifics of the model studied. We present both algorithms and lower bounds for the problem of dealing with unknown characteristics in these models. Furthermore, we highlight some design concepts for settings where algorithms have limited knowledge, which we believe might be applicable to additional scenarios as well. As an illustration of the problem, assume we have a 3-slot buffer, equipped with a single processor, and consider the arrival sequence depicted in Fig. <ref>. In the first cycle we have seven unit-size packets arriving, out of which three will provide a profit of 5 upon successful delivery, each requiring 5 processing cycles (work). The characteristics of these three packets are known immediately upon arrival. The characteristics of the remaining four packets (marked gray) are unknown upon arrival. We therefore dub such packets U-packets (i.e., unknown packets). Each of these four U-packets may turn out to be either a "best" packet, requiring minimal work and having maximal profit; a "worst" packet, requiring maximal work and having minimal profit; or anything in between. Thus, already at the very beginning of this simple scenario, any buffering algorithm encounters an admission control dilemma: how many U-packets to accept, if any? This dilemma can be addressed by various approaches, including, e.g., allocating some buffer space for U-packets, or accepting U-packets only when the known packets currently in the buffer have poor characteristics, in terms of profit or of profit-to-work ratio. If the algorithm accepts U-packets, an additional question arises: which of the U-packets to accept into the buffer?
Obviously, for any online deterministic algorithm there exists a simple adversarial scenario which would cause it to accept only the "worst" U-packets (namely, packets with maximal work and minimal profit), while an optimal offline algorithm would accept the best packets. This motivates our decision to focus our attention on randomized algorithms. We now turn to consider another aspect of handling traffic with some unknown characteristics. Assume the scenario continues with 5 cycles without any arrival, and then a cycle with an identical arrival pattern - namely, three known packets with both work and profit of 5 per packet, and four U-packets. This sheds light on a scheduling dilemma: which of the accepted packets should better be processed first? Every scheduling policy impacts the buffer space available in the next burst. For instance, a run-to-completion attitude would enable finishing the processing of one known packet by the next burst, thus allowing space for accepting a new packet without preemption. However, one may consider an opposite attitude - namely, parsing as many U-packets as possible, thus "causing the mist to clear", allowing more educated decisions once there are new arrivals. In terms of priority queuing, this means over-prioritizing some U-packets, and allowing them to be parsed immediately upon arrival. We further develop appropriate algorithmic concepts based on the insights from this illustrative example in Section <ref>. §.§ System Model Our system model consists of four main modules, namely, (a) an input queue equipped with a finite buffer, (b) a buffer management module which performs admission control, (c) a scheduler module which decides which of the pending packets should be processed, and (d) a processing element (PE), which performs the processing of a packet. We divide time into discrete cycles, where each cycle represents a fixed time slot, and consists of three steps: (i) the transmission step, in which fully-processed packets leave the queue, (ii) the arrival step, in which new packets may arrive, and the buffer management module decides which of them should be retained in the queue, and which of the currently buffered packets should be pushed-out and dropped, and finally (iii) the processing step, in which the scheduler assigns a single packet for processing by the PE, which in turn processes the packet. We consider a sequence of unit-size packets arriving at the queue. Upon its arrival, the characteristics of each packet may be known - in which case we refer to the packet as a K-packet (i.e., known packet); or unknown - in which case we refer to the packet as a U-packet (i.e., unknown packet). We let M denote the maximum number of U-packets that may arrive in any single cycle. We focus our attention on the case where M>0, unless specifically stated otherwise. Each arriving packet p has some (1) intrinsic benefit (profit) v(p) ∈{1,…,V}, and (2) required number of processing cycles (work), w(p) ∈{w_ min, w_ min+1, … ,W}. Unless explicitly stated otherwise, we consider the most general case, namely, w_ min = 1. To simplify the expressions throughout the paper, we assume that both V and W are powers of 2.[Our results degrade by a mere constant factor otherwise.] We use the notation (w,v)-packet to denote a packet with work w and profit v. We note that the uniform case where all packets require the same amount of work, and all packets have the same profit, is trivial, since the simple run-to-completion policy is optimal.
We therefore focus our attention on non-uniform traffic. In our model, similarly to <cit.>, upon processing a U-packet for the first time, its properties become known. We therefore refer to such a first processing cycle of a U-packet as a parsing cycle. Non-parsing cycles where the processor is not idle are referred to as work cycles. The queue buffer can contain at most B packets. We assume B ≥ 2, since the case where B = 1 is degenerate. The head-of-line (HoL) packet at time t (for a given algorithm ALG) is the highest priority packet stored in the buffer just prior to the processing step of cycle t, namely, the packet to be scheduled for processing in the processing step of t. We say the buffer is empty at cycle t if there are no packets in the buffer after the transmission step of cycle t. We study queue management algorithms, which are responsible for both the buffer management and the scheduling of packets for processing. In particular, we focus our attention on algorithms targeted at maximizing the throughput of the queue, i.e., the overall profit from all packets successfully transmitted out of the queue. The throughput of an algorithm ALG is denoted by TP(ALG). We use the terms throughput and performance interchangeably. We evaluate the performance of online algorithms using competitive analysis <cit.>. An algorithm ALG is said to be c-competitive if for every finite input sequence σ, the throughput of any algorithm for this sequence is at most c times the throughput of ALG (c ≥ 1). We let OPT denote any (possibly clairvoyant) algorithm attaining optimal throughput. An algorithm is said to be greedy if it accepts packets as long as there is available buffer space. We further focus our attention on work-conserving algorithms, i.e., algorithms which never leave the PE idle unnecessarily. §.§ Related Work Competitive algorithms for scheduling and management of bounded buffers have been extensively studied for the past two decades. The problem was first introduced in the context of differentiated services, where packets have uniform size and processing requirements, but some of the packets have higher priorities, represented by a higher profit associated with them <cit.>. The numerous variants of this problem include models where packets have deadlines or maximum lifetime in the switch <cit.>, environments involving multi-queues <cit.>, and cases with packet dependencies <cit.>, to name but a few. An extensive survey of these models and their analysis can be found in <cit.>. While traditionally it was assumed that packets have heterogeneous profits but uniform work (processing requirements), some recent work introduced the complementary problem of uniform profits with heterogeneous work <cit.>. This work presented an optimal algorithm for the fundamental problem, as well as online algorithms and bounds on the competitive ratio for numerous variants. Subsequent research investigated related problems with heterogeneous work combined with heterogeneous packet sizes <cit.>, or with heterogeneous profits <cit.>. In particular, <cit.> showed that the competitive ratio of some straightforward deterministic algorithms for the problem of heterogeneous work combined with heterogeneous profits is linear in either the maximal work W, or in the maximal profit V, even when the characteristics of all packets are known upon arrival.
These results motivate our focus on randomized algorithms. While most of the literature above assumed that all the characteristics of packets are known upon arrival, this assumption was called into question recently <cit.> by noting that it is often invalid. However, the main problem addressed in <cit.> revolved around developing schemes for transmitting packets of the same flow in-order, while our work focuses on maximizing throughput with limited buffering resources, and designing both buffer management and scheduling policies targeted at this objective. Maybe closest to our work are the recent studies considering serving in the dark <cit.>, which investigate an extreme case where the online algorithm learns the profit from a packet only after transmitting it. These studies consider highly oblivious algorithms, whereas our model and our proposed algorithms dwell in a middle ground between the well studied models with complete information and these recent oblivious settings. Our work further considers traffic with variable processing requirements, whereas <cit.> focus on settings where all packets require only a single processing cycle, and differ only in their profit. The problem of optimal buffering of packets with variable work is closely related to the problem of job scheduling in a multi-threaded processor, which has been extensively studied. A comprehensive survey of online algorithms for this problem can be found in <cit.>. This body of work, however, differs significantly from our currently studied model. The major differences are that packet buffering has to deal with limited buffering capabilities, and is targeted at maximizing throughput. Processor job scheduling, however, usually has no strict buffering limitations, and is mostly concerned with minimizing the response time. §.§ Our Contribution We introduce the problem of buffering and scheduling which aims to maximize throughput where the characteristics of some of the packets are unknown upon arrival. We focus our attention on traffic where every packet has some required processing cycles, and some profit associated with successfully transmitting it. We make no assumption on the underlying process generating traffic, thus rendering our results globally applicable. In Section <ref> we present lower bounds on the performance of any randomized algorithm for the problem. Specifically, we show that no algorithm can have a competitive ratio better than Ω(min{WV, M}), even against an adversary which can accommodate merely 2 packets in its buffer, where W and V denote the maximum work and profit of a packet, respectively, and M represents the maximum number of unknown packets which may arrive in any single cycle. We also prove stronger lower bounds for the general settings using a novel technique, in which we bound the expected number of packets in the buffer of an optimal offline algorithm by means of a Markov process. In Section <ref> we describe several algorithmic concepts tailored for dealing with unknown characteristics in such systems. We follow by presenting an algorithm that applies our suggested algorithmic concepts in Section <ref>. For the most general case we prove our algorithm has a competitive ratio of O(M log V log W). We further show how to improve this bound in several important special cases. In Sections <ref>-<ref> we present some modifications and heuristics applicable to our algorithm that, while leaving the worst-case guarantees intact, are designed to improve performance compared to the baseline algorithmic design.
The modified algorithm can cope with cases where neither the maximal amounts of work and profit, nor the maximum number of unknown packets per cycle, are known in advance. We further validate and evaluate the performance of our proposed algorithms in Section <ref> via an extensive simulation study. Our results highlight the effect the various parameters have on the problem, well beyond the insights arising from our rigorous mathematical analysis. We conclude in Section <ref> with a discussion of our results, and also highlight several interesting open questions. § LOWER BOUNDS In this section we present lower bounds on the competitive ratio of any randomized algorithm for our problem. These lower bounds serve two main objectives: (i) they represent the best competitive ratio which one can hope to achieve; and (ii) the hard scenarios used in the proofs of these lower bounds highlight the challenges which any competitive online algorithm would have to tackle. §.§ Highly-restricted adversaries In this section we prove lower bounds on the competitive ratio of any online algorithm for our problem, compared to a highly-restricted adversary which uses a buffer that can only store a single packet. This restriction on the amount of buffer space available for the adversary enables us to better highlight the scaling laws of the problem, depending on the various parameters. If V ≥ 1, M ≥ 1 and the work of each packet is w(p) ∈{w_ min, w_ min+1, …, W} where W ≥ 2, then the competitive ratio of any randomized algorithm for non-uniform traffic is at least (V(W-1)/(2 w_ min)) · [1 - (1 - 1/(V(W-1)+1-w_ min))^(M w_ min)], even against an optimal offline algorithm which has a buffer which can only store a single packet. Since traffic is non-uniform, we are guaranteed to have V(W-1) + 1 - w_ min≠ 0. We prove the theorem using Yao's method <cit.>, where we define a carefully crafted distribution over arrival sequences, and show a lower bound on the ratio between the expected performance of an optimal clairvoyant algorithm for the problem, and the expected performance of any deterministic algorithm for the problem. We will show that the claim is true even if the optimal offline algorithm uses a buffer that can hold only a single packet. We define the following collection of arrival sequences, where each arrival sequence has two phases: a Fill phase, and a Flush phase. The Fill phase consists of iterations as follows. Each iteration begins with w_ min cycles without arrivals, and continues with w_ min cycles with M U-packets arriving per cycle, where each packet is a best-packet, i.e., a (w_ min,V)-packet, with probability p, and a worst-packet, i.e., a (W,1)-packet, with probability (1-p), for some constant p to be determined later. The total number of cycles during the fill phase is N, where N is a large integer, so we have N/(2 w_ min) iterations. Once the fill phase ends, it is followed by the Flush phase, which consists of BW cycles without arrivals. We note that due to the random choices of packets being either best-packets or worst-packets, the above structure induces a distribution over a collection of possible arrival sequences. To simplify our analysis, we define the OPT^* policy, which works as follows: within the fill phase, during each iteration, OPT^* accepts at most one best-packet which has arrived during the iteration, if such a packet exists. This packet is the one considered picked by OPT^* in that iteration. Starting from the second iteration, during the first w_ min cycles of each iteration, OPT^* processes the packet it picked during the previous iteration (if such a packet exists), and transmits it.
During the flush phase, OPT^* processes and finally transmits the packet it picked during the last iteration. It should be noted that OPT^* is neither greedy nor work-conserving. Moreover, the expected throughput of OPT^* clearly serves as a lower bound on the expected optimal throughput possible. We have N/(2 w_ min) iterations, and the probability that OPT^* successfully picks a best-packet during an iteration is exactly the probability of there being a best-packet arriving during that iteration, which is 1 - (1-p)^(M w_ min). The throughput of OPT^*, which we recall is denoted by TP(OPT^*), therefore satisfies TP(OPT^*) ≥ (NV/(2 w_ min)) [1 - (1-p)^(M w_ min)]. We now turn to consider the expected performance of any deterministic algorithm ALG for the problem. We first assume that ALG begins the flush phase with a buffer full of best-packets, all of them unparsed. This provides ALG with a profit of BV during the flush phase, while still having N processing cycles during the fill phase for processing additional packets. This profit is clearly an upper bound on the maximum possible throughput attainable by ALG from packets transmitted during the flush phase, regardless of when they were processed. For evaluating the gain of ALG during the fill phase, it therefore suffices to consider only packets which ALG fully processes during this phase. Consider now the profit of ALG from packets transmitted during the fill phase. Recall that we assume that ALG is work-conserving. We assume that ALG is also greedy, that is, ALG never discards a packet when its buffer is not full; being greedy cannot decrease ALG's performance. ALG has packets to process during the entire fill phase, except for the first w_ min cycles (where there are no arrivals yet), namely, for N' = N - w_ min cycles. Furthermore, since ALG is assumed to always accept packets when the buffer is not full, and is work-conserving, there exists some 0 < r ≤ 1 such that the numbers of parsing and work cycles performed by ALG are N'r and N'(1-r), respectively. Consider a case where ALG reveals a best-packet q. Then, processing q and finally transmitting it would surely not decrease the throughput of ALG when contrasted with the alternative of dropping q. Thus, the best deterministic algorithm ALG would work at least w_ min-1 work cycles per each parsing cycle in which a best-packet is parsed (recall that we are merely interested in packets which ALG fully processes and transmits during the fill phase). Therefore, the total number of work cycles contributing to the transmission of such packets is at least w_ min-1 times larger than the expected number of parsing cycles in which a best-packet is revealed: N'(1-r) ≥ N'rp(w_ min-1). If the total number of work cycles during the fill phase exceeds the number of cycles which are necessary for transmitting all the parsed best-packets, ALG may work also on worst-packets. Namely, if N'(1-r) > N'rp(w_ min-1), then ALG may work on worst-packets for N'(1-r) - N'rp(w_ min-1) cycles, transmitting at most one worst-packet once in W-1 such cycles. Combining the above reasoning we conclude that the overall throughput of ALG satisfies TP(ALG) ≤ N'rpV + (N'(1-r) - N'rp(w_ min-1))/(W-1) + BV = (N - w_ min) [Vrp + ((1-r) - rp(w_ min-1))/(W-1)] + BV. Considering the ratio between the lower bound on the expected performance of OPT^* (as captured by Eq. <ref>) and the upper bound on the expected performance of ALG (as captured by Eq. <ref>), and letting N →∞, we conclude that no algorithm can have a competitive ratio better than (V(W-1)/(2 w_ min)) · [1 - (1-p)^(M w_ min)] / [Vrp(W-1) + 1 - r - rp(w_ min-1)]. By choosing p^* = [ V(W-1)+1-w_ min]^-1, the result follows.
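As an illustrative numeric check of the bound (not part of the proof), we can evaluate it directly; here we take w_ min = 1, for which p^* = 1/(V(W-1)):

```python
V, W, w_min = 10, 10, 1
p_star = 1.0 / (V * (W - 1) + 1 - w_min)  # the maximizing choice of p
for M in (1, 5, 10, 50, 100, 500):
    bound = V * (W - 1) / (2 * w_min) * (1 - (1 - p_star) ** (M * w_min))
    print(M, round(bound, 2))
# The bound grows roughly like M/4 while M*w_min <= V(W-1), and saturates
# near V(W-1)/(2*w_min) = 45 for larger M, anticipating the two
# propositions that follow.
```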
We now aim to relate the lower bound established in Theorem <ref> to a simpler and more intuitive function of M, V and W. We do so by means of two propositions, which relate the bound to either Ω(M) or Ω(VW/w_ min) for different ranges of M. In the propositions we use our notation p^* = [ V(W-1)+1-w_ min]^-1 from the proof of Theorem <ref>. Using this notation, note that Theorem <ref> shows that the competitive ratio is at least (V(W-1)/(2 w_ min)) [1 - (1- p^*)^(M w_ min)]. In the proofs of both propositions we will repeatedly use the following simple inequality, which holds for any w_ min≥ 1: 1/(V(W-1)) = 1/(1/p^* + w_ min - 1) ≤ p^*. The following proposition shows that if M is relatively small, then the lower bound established in Theorem <ref> is Ω(M). If V ≥ 1, w_ min≥ 1, W ≥ 2 and 1 ≤ M ≤ V(W-1)/w_ min, then (V(W-1)/(2 w_ min)) [1 - (1- p^*)^(M w_ min)] ≥ M/4. We show by induction on n that for any 1 ≤ n ≤ V(W-1), (1-p^*)^n ≤ 1 - n/(2V(W-1)). By setting n = M w_ min, which is at most V(W-1) by our assumption on M, and applying some algebraic manipulation, the result follows. For n=1, Eq. <ref> reduces to requiring that 1/(2V(W-1)) ≤ p^*, which holds true due to Eq. <ref>. For the induction step, by the induction hypothesis on n we have (1-p^*)^(n+1) ≤ (1 - p^*) [1 - n/(2 V(W-1))]. It therefore suffices to prove that (1 - p^*) [1 - n/(2 V(W-1))] ≤ 1 - (n+1)/(2 V(W-1)), which is equivalent to requiring that 1/(2 V(W-1)) ≤ p^* [ 1 - n/(2 V(W-1))]. By Eq. <ref> we have 1/(2 V(W-1)) ≤ p^*/2, which implies that it suffices to show that p^*/2 ≤ p^* [ 1 - n/(2 V(W-1))], which is satisfied for every n ≤ V(W-1). The following proposition shows that if M is relatively large, then the lower bound established in Theorem <ref> is Ω(VW/w_ min). If V ≥ 1, w_ min≥ 1, W ≥ 2 and M > V(W-1)/w_ min, then (V(W-1)/(2 w_ min)) [1 - (1- p^*)^(M w_ min)] > ((e-1)/(4e)) · (VW/w_ min). By our assumption on M, and using Eq. <ref>, we have M w_ min > V(W-1) ≥ 1/p^*. It follows that M w_ min = a/p^* for some a > 1, which in turn implies that (1 - p^*)^(M w_ min) = [ (1 - p^*)^(1/p^*) ]^a ≤ e^-a < e^-1. It follows that (V(W-1)/(2 w_ min)) [1 - (1- p^*)^(M w_ min)] ≥ (V(W-1)/(2 w_ min)) (1 - 1/e) = (VW/(2 w_ min)) · ((W-1)/W)(1 - 1/e) ≥ ((e-1)/(4e)) · (VW/w_ min). Assigning w_ min=1 in Theorem <ref> and Propositions <ref> and <ref> implies the following corollary: The competitive ratio of any randomized algorithm is Ω(min{VW,M}). In the special case of uniform-profits, we are essentially interested in maximizing the overall number of packets successfully transmitted. Therefore we may assign V=1 in Corollary <ref>, implying the following corollary: In the case of uniform-profits, the competitive ratio of any randomized algorithm is Ω(min{W,M}). In the special case of uniform-work, we can assign w_ min = W in Propositions <ref> and <ref>, implying the following corollary: In the case of uniform-work, the competitive ratio of any randomized algorithm is Ω(min{V,M}). §.§ Non-restricted adversaries In Section <ref> we assumed that the optimal algorithm has a buffer capacity of storing only one packet. This assumption significantly simplified the proofs there. In this section we relax this assumption, and show a stronger bound for the general, and more natural, case where the size of the buffer available to the optimal algorithm is identical to that available to the online algorithm. We again use Yao's method <cit.>, which we used in the proof of Theorem <ref>. Furthermore, we use the same scenario and the same policy OPT^*, defined in Section <ref>. However, as we now allow OPT^* to store multiple packets in its buffer, it can increase its expected throughput by buffering best-packets whenever the number of arriving best-packets in a single iteration is larger than one, and processing them in iterations where no best-packets arrive.
We now evaluate the performance of OPT^* in such settings. Denote by q_j the state where there are j best-packets in the buffer of OPT^* at the beginning of an iteration. Note that when j > 0, the count represented by q_j also includes the packet which is to be transmitted during the iteration. Namely, OPT^* successfully transmits a packet in every iteration, unless its buffer's state is q_0. We now turn to describe the transition matrix. Denote the probability of having exactly k best-packets arriving during one iteration by P_k. In each iteration we have M w_ min arriving packets (M packets per cycle, times w_ min cycles per iteration), which are i.i.d., where each packet is a best-packet with probability p. Therefore P_k = (M w_ min choose k) p^k (1-p)^(M w_ min - k) when 0 ≤ k ≤ M w_ min, and P_k = 0 otherwise. Then, the transition matrix is
Π =
[ P_0   P_1   P_2   …   P_B-1   1 - ∑_j=0^B-1 P_j;
  P_0   P_1   P_2   …   P_B-1   1 - ∑_j=0^B-1 P_j;
  0     P_0   P_1   …   P_B-2   1 - ∑_j=0^B-2 P_j;
  …     …     …     …   …       …;
  0     …     0     P_0   P_1   1 - ∑_j=0^1 P_j;
  0     0     …     0     P_0   1 - P_0 ]
where Π_ij is the probability of transition from state i to state j for each 0 ≤ i, j ≤ B. Π is irreducible, because it is possible to get from any buffer state to any other buffer state by some arrival sequence. Π is also aperiodic, because its diagonal is non-zero, which represents the fact that if the buffer contains i packets at the beginning of a certain iteration, there exists a positive probability that it would contain i packets also at the beginning of the next iteration. Furthermore, as Π is finite, irreducible and aperiodic, it is also ergodic, namely, there exists a steady state. For a long enough input sequence, we can neglect the transient "warm-up" period, and assume that the expected number of iterations where OPT^* gains nothing during the fill phase is (N/(2 w_ min)) · p_0, where p_0 is the probability that OPT^* is in state q_0. In the rest of the iterations in the fill phase OPT^* gains V per iteration. Therefore, the expected throughput of OPT^* satisfies TP(OPT^*) ≥ (N/(2 w_ min)) · V(1 - p_0). The expected throughput of ALG remains the same as in Eq. <ref>. In order to obtain the competitive ratio for the fully heterogeneous case, we divide Eq. <ref> by Eq. <ref> and assign again w_ min=1 and p^* = 1/(V(W-1)). Then, when N →∞ the competitive ratio is c ≥ (V(W-1)/2)(1 - p_0). We find p_0 by solving the balance equations defining the steady state of the system, i.e., finding the eigenvector of the transition matrix Π corresponding to eigenvalue 1. Fig. <ref> depicts the lower bounds as a function of M when V = W = 10 for various buffer sizes. Recall that the probability of a certain packet to be a best-packet is p^* = 1/(V(W-1)) = 1/90. Therefore, only when M is large enough is the expected number of best-packets per iteration sufficient to allow OPT^* to really take advantage of its buffer for increasing its performance, resulting in a stronger lower bound on the competitive ratio.
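The following sketch makes this computation concrete for the case w_ min = 1 used in Fig. <ref> (a direct implementation of the chain above; names are ours):

```python
import numpy as np
from math import comb

def empty_prob(B, M, p):
    """Steady-state probability p_0 that the buffer of OPT^* is empty."""
    # P_k: probability that exactly k best-packets arrive in one iteration
    P = np.array([comb(M, k) * p**k * (1 - p)**(M - k) for k in range(B)])
    Pi = np.zeros((B + 1, B + 1))
    for i in range(B + 1):
        lead = max(i - 1, 0)              # one packet is transmitted unless the buffer is empty
        for k in range(B - lead):
            Pi[i, lead + k] = P[k]
        Pi[i, B] = 1.0 - Pi[i, :B].sum()  # surplus arrivals beyond the buffer are dropped
    evals, evecs = np.linalg.eig(Pi.T)    # stationary distribution: left eigenvector for eigenvalue 1
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi /= pi.sum()
    return pi[0]

V, W, B = 10, 10, 5
p_star = 1.0 / (V * (W - 1))              # probability that a U-packet is a best-packet
for M in (10, 50, 100):
    print(M, V * (W - 1) / 2 * (1 - empty_prob(B, M, p_star)))
```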
In the next section we use the insight obtained from the analysis in the current section to identify several algorithmic concepts appropriate for the problem of buffering with limited knowledge. § ALGORITHMIC CONCEPTS In this section we describe the algorithmic concepts underlying our proposed algorithms for dealing with scenarios of limited knowledge. *Random selection For obtaining a good competitive ratio we would like to avoid a scenario where OPT successfully transmits a bulk of “good” packets, which are originally unknown, while having the online algorithm discard all these packets. This translates to assuring that each arriving U-packet has some minimal probability of being accepted and parsed. *Speculatively Admit Competitive algorithms must ensure they retain throughput from both K-packets and U-packets. Furthermore, once a U-packet is accepted, there is a high motivation to reveal its characteristics as soon as possible, thus making educated decisions in the next cycles. We therefore propose to speculatively over-prioritize unknown packets over known packets in certain cycles. We refer to the act of over-prioritizing an unknown packet p in some cycle t as admitting p. Respectively, we refer to such a cycle t as an admittance cycle, and to such a packet p as an admitted packet. *Classify and randomly select Intuitively, as unknown packet characteristics are drawn from a wider range of values, the task of maximizing throughput becomes harder, especially when compared to the optimal throughput possible. To deal with this diversity, we apply a Classify and Randomly Select scheme <cit.>. This approach is based on the following notion: assume we have an algorithm ALG which is guaranteed to be c-competitive if traffic is sufficiently uniform, i.e., for cases where traffic characteristics are within some well-defined range of values. Given some arbitrary input sequence, which might be highly heterogeneous, we virtually partition the sequence of arriving packets into N>1 disjoint sub-sequences, which we refer to as classes, such that each class is sufficiently uniform, i.e., for any specific class 1 ≤ i ≤ N the characteristics of packets corresponding to class i are within some well-defined range of values (as prescribed by ALG). The scheme then dictates selecting one of the classes uniformly at random, and applying ALG to this class, while ignoring all packets corresponding to other classes. One then shows that this randomized approach is O(N· c)-competitive for the overall input sequence. *Alternate between fill & flush This paradigm is especially crucial in cases of limited information. The main motivation for this approach is that whenever a “good” buffer state is identified, the algorithm should focus all its efforts on monetizing the current state, maybe even at the cost of dropping packets indistinctly. In terms of buffer management and scheduling, this translates to defining some periods in which the algorithm processes and transmits all the packets in its buffer, even at the cost of discarding all the arrivals. If these flush periods are short enough, the algorithm gains the high throughput from flushing its buffer, yet without compromising too much throughput due to having packets discarded during the flush. § COMPETITIVE ALGORITHMS In this section we present a basic competitive online algorithm for the problem of buffering and scheduling with limited knowledge. We first provide a high-level description of our algorithm, and then turn to specify its details and analyze its performance. For simplicity of analysis and algorithm presentation, we assume that the sets of possible values of W and V – the work and profit per packet, respectively – are known to the algorithm in advance. In Sections <ref> and <ref> we show how to remove this assumption without harming the performance of our algorithm, and present several improved variants of this algorithm. We further note that none of our proposed solutions requires knowing the value of M – the maximum number of unknown packets arriving in a single cycle – in advance. §.§ High-level Description of Proposed Algorithm Our algorithm is designed according to the algorithmic concepts presented in Section <ref> as follows.
§ COMPETITIVE ALGORITHMS

In this section we present a basic competitive online algorithm for the problem of buffering and scheduling with limited knowledge. We first provide a high-level description of our algorithm, and then turn to specify its details and analyze its performance. For simplicity of analysis and algorithm presentation, we assume that the sets of possible values of W and V – the work and profit per packet, respectively – are known to the algorithm in advance. In Sections <ref> and <ref> we show how to remove this assumption without harming the performance of our algorithm, and present several improved variants of this algorithm. We further note that none of our proposed solutions requires knowing the value of M – the maximum number of unknown packets arriving in a single cycle – in advance.

§.§ High-level Description of Proposed Algorithm

Our algorithm is designed according to the algorithmic concepts presented in Section <ref>, as follows.

*Randomly select and speculatively admit. In every cycle t during which a U-packet arrives, the algorithm picks t as an admittance cycle with some probability r (to be determined in the sequel). In every cycle chosen as an admittance cycle, the algorithm picks exactly one of the U-packets arriving at t to serve as the admitted packet. This U-packet is chosen uniformly at random out of all U-packets arriving at t. At the end of the arrival step, the algorithm schedules the admitted U-packet (if one exists) for processing, hence parsing the packet. We note that if no such U-packet exists, or if t is not an admittance cycle, then the algorithm may only accept known arriving packets, and would eventually schedule the top-priority packet residing at the Head-of-Line (HOL) for processing. The exact notion of priority will be detailed later.

*Classify and randomly select. We implicitly partition the possible types of arriving packets into classes C_1, C_2, …, C_m; the criteria for partitioning and the exact value of m will be specified later. Our algorithm picks a single selected class, uniformly at random from the m classes. Our goal is to provide guarantees on the performance of our proposed algorithm for packets belonging to the selected class, which is henceforth denoted Cs. Packets which belong to the selected class are referred to as Cs-packets. Following our previously introduced notation, known (unknown) packets that belong to the selected class, i.e., Cs-packets whose attributes are known (unknown), are denoted as CsK-packets (CsU-packets).

Focusing solely on packets belonging to Cs may seem like a questionable choice, especially if few of the arriving packets belong to this class, or if the characteristics of the packets belonging to it are poor (e.g., they have low profit and require much work). However, this naive description is meant only to simplify the analysis. In Section <ref> we show how to remedy this naive approach in order to deal with these apparent shortcomings, while keeping the analytic guarantees intact.

*Alternate between fill & flush. Our algorithm alternates between two states: the fill state and the flush state. We say the algorithm is saturated if its buffer is filled with known Cs-packets. Once becoming saturated, our algorithm switches to the flush state, during which it discards all arriving packets and continuously processes queued packets. Once the buffer empties, the algorithm returns to the fill phase. Again, in Section <ref> we show how to improve upon this naive simplified approach.

§.§ A General Classify and Randomly Select Mechanism

We now turn to explain the fundamentals of the classifying mechanism of our algorithm. For each packet p we assign a work-class C_i, and denote the set of potential work values within class C_i by X_i. Let R_w denote the maximal ratio between the work values of two packets which belong to the same work-class. Similarly, for each packet p we assign a profit-class C_j, and denote the set of potential profit values within class C_j by X_j. Let R_v denote the maximal ratio between the profits of two packets which belong to the same profit-class. Throughout our analysis we use R_w and R_v, which are both constants. Denote by k_w and k_v the number of work-classes and profit-classes, respectively. We say a packet p is of combined-class C_(i,j) if it is of work-class C_i and of profit-class C_j.
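As a concrete illustration, the following sketch (Python; our illustration, using the logarithmic partition defined later in Section <ref>, and assuming hypothetical packet attributes pkt.work and pkt.profit) shows the class bookkeeping and the uniform selection of a combined class. With the simulation parameters used later (W=256, V=16) it yields k_w = 8 and k_v = 4.

```python
import math
import random

def work_class(w):
    """Logarithmic partition: class 1 covers [1,2], class i > 1
    covers [2^(i-1)+1, 2^i]."""
    return max(1, math.ceil(math.log2(w)))

def profit_class(v):
    return max(1, math.ceil(math.log2(v)))

def select_combined_class(W, V, rng=random):
    """Pick the selected combined class Cs = C_(i*, j*) uniformly."""
    k_w = max(1, math.ceil(math.log2(W)))   # number of work-classes
    k_v = max(1, math.ceil(math.log2(V)))   # number of profit-classes
    return rng.randint(1, k_w), rng.randint(1, k_v)

def in_selected_class(pkt, selected):
    i_star, j_star = selected
    return (work_class(pkt.work) == i_star and
            profit_class(pkt.profit) == j_star)
```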
Note that in terms of work, the class to which a packet p belongs is defined statically by the total work of p, and does not depend on its remaining processing cycles, which may change over time. Upon initialization, the algorithm selects a class by picking i^* ∈ {1,…,k_w} and j^* ∈ {1,…,k_v}, each chosen uniformly at random. The selected combined-class is then Cs = C_(i^*,j^*). We will later define several ways to partition the packets into classes, each tailored and optimized for specific scenarios of possible work and profit values.

§.§ The SA Algorithm

We now describe the details of our algorithm, Speculatively Admit (SA), depicted in Algorithm <ref>. The pseudo-code in Algorithm <ref> uses the procedures UpdatePhase(), SortBuf(), and a push-out procedure, whose pseudo-code appears in Algorithms <ref>, <ref> and <ref>, respectively. The push-out procedure is designed to ensure a free slot for a high-priority arriving packet, even at the cost of pushing out and dropping a lower-priority packet from the tail of the buffer if the buffer is full.

Once in the arrival step, SA updates its phase (line <ref>). In each cycle, the algorithm tosses a coin with some probability r, to be determined later, to decide whether this is an admittance cycle, namely, a cycle in which the algorithm may admit an unknown packet (line <ref>). If the phase is flush, the algorithm skips the while loop (lines <ref>-<ref>), thus discarding all arriving packets. If the phase is fill, which in particular implies that the buffer is not saturated, the algorithm accepts every arriving CsK-packet (lines <ref>-<ref>). To ensure a free slot for the arriving CsK-packet, the algorithm calls the push-out procedure (line <ref>) before accepting the packet (line <ref>). The if-clause in lines <ref>-<ref> handles the special case where there are already B-1 CsK-packets in the buffer; in this special case, after accepting the arriving CsK-packet the buffer becomes saturated, and therefore the algorithm should stop admitting packets.

If the phase is fill and this is an admittance cycle (line <ref>), the algorithm admits a single U-packet arriving in this cycle, if such a packet exists. The counter used in lines <ref>, <ref> denotes the number of U-packets which have arrived in cycle t up to and including the arrival of packet p. Lines <ref>-<ref> essentially perform reservoir sampling <cit.>, which implies that the admitted U-packet is chosen uniformly at random out of all U-packets arriving in this cycle. Finally, if the buffer is not full, the algorithm greedily accepts every arriving packet (lines <ref>-<ref>).

In the processing step, the algorithm simply processes the top-priority packet in the buffer (line <ref>). Finally, the algorithm updates its phase and sorts the queued packets each time it either accepts or processes a packet (lines <ref>-<ref> and <ref>-<ref>). Note that the marking of a packet as an "admitted packet" is cycle-based; namely, once an admitted packet is processed, it is no longer considered "admitted". To better understand SA, we refer the reader to <ref>, which shows a running example of the algorithm.

§.§ Performance Analysis

We now turn to show an upper bound on the performance of our algorithm (for W, V > 1), captured by the following theorem (see <ref> for the proof):

SA is O([M/r + R_w R_v]·k_w k_v)-competitive.

Theorem <ref> shows an inverse linear dependency of the competitive ratio on the probability r of choosing a cycle as an admittance cycle. Thus, the best competitive ratio is attained for r=1, i.e., when every cycle in which U-packets arrive is an admittance cycle.
In practical scenarios, however, one might want to be more conservative in choosing admittance cycles. E.g., one might choose r<1 so as to allow non-parsing cycles even when U-packets arrive, thus speeding up the processing of CsK-packets. If one indeed chooses r=1, randomization is needed only for choosing the specific U-packet to be admitted and for the choice of the selected class. We further explore the effect of the choice of the parameter r in Section <ref>.

In the special cases of homogeneous work values (homogeneous profit values), we substitute R_w = k_w = 1 (R_v = k_v = 1, resp.) in the upper bound implied by Theorem <ref>, and obtain the following corollary:

(a) In the special case of homogeneous work values, SA is O((M/r + R_v)·k_v)-competitive.
(b) In the special case of homogeneous profit values, SA is O((M/r + R_w)·k_w)-competitive.

Lastly, we note that when all packets are known upon arrival, i.e., M=0, SA is O(R_w R_v k_w k_v)-competitive (see <ref>).

§.§ Concrete Classification Mechanisms

We now present various classify-and-randomly-select mechanisms, tailored and optimized for different scenarios, depending on the profit and work values.

*A linear classification. When a characteristic takes only a small set of potential values, we let each class consist of a single value of this characteristic. As a result, the competitive ratio of the algorithm depends linearly on the number of distinct potential values of the respective characteristic. For instance, when the set of potential work values is small, we let each potential work value define a class. As a result, the competitive ratio of SA implied by Theorem <ref> depends linearly on the number of distinct work values, captured by the parameter k_w. Note that in this case we have X_i = {w_i}, implying that R_w, the max-to-min ratio of values within X_i, equals 1.

*A logarithmic classification. When the set of potential values of a characteristic is large, letting each value define a unique class results in a poor competitive ratio. Therefore, in such cases we use a logarithmically-scaled class partitioning as follows. We say that a packet p is of a certain (work- or profit-) class i if its corresponding value is in the interval

X_i = [1,2] for i=1, and X_i = [2^{i-1}+1, 2^i] for i > 1.

In particular, using the above partition of packets into classes, we obtain R_w = R_v = 2, k_v = log_2 V and k_w = log_2 W. Using Theorem <ref>, we obtain the following corollary:

SA is O((M/r)·log_2 W·log_2 V)-competitive.

We note that if we know the number of distinct values of each characteristic along with the values of W and V, we can choose the appropriate classification scheme, letting k_w be the minimum of log_2 W and the number of distinct work values, and k_v the minimum of log_2 V and the number of distinct profit values. Moreover, in any of our classification schemes, R_w, R_v ≤ 2.

§ IMPROVED ALGORITHMS

Algorithm SA selects a single class uniformly at random, so that the characteristics of the packets on which it focuses, namely Cs-packets, differ by at most a constant factor. This provides the sense of "uniformity" of the traffic within the targeted class, which in turn reduces the variability of the characteristics of the packets on which the algorithm focuses. However, in practice there are various cases where the strict decisions made by SA can be relaxed without harming its competitive performance guarantees. In practice, such relaxations actually allow obtaining throughput far superior to that of SA.
In what follows we describe such modifications, which we incorporate into our improved algorithm, SA*, and prove that all our performance guarantees for SA still hold for SA*.

*Class closure. Recall the partitioning of packets into classes described in Section <ref>, namely {C_(i,j) | i=1,…,k_w, j=1,…,k_v}. We let the (i,j)-closure class be defined as

C^*_(i,j) = ⋃_{i'≤ i, j'≥ j} C_(i',j').

This definition means that the work of any packet in C^*_(i,j) is within a ratio of at most R_w of the work of any packet in C_(i,j), and similarly for the profit of any packet in C^*_(i,j). Formally, for any packets p ∈ C_(i,j) and p^* ∈ C^*_(i,j), w(p^*) ≤ R_w·w(p) and v(p^*) ≥ v(p)/R_v. We let SA* denote the algorithm in which the selected class is chosen to be C^*_(i,j), for values of i,j chosen uniformly at random from the appropriate sets. A simple substitution argument shows that picking C^*_(i,j) in SA*, instead of selecting C_(i,j) as done in SA, leaves the analysis detailed in Section <ref> intact.

*Fill during flush (pipelining). Algorithm SA was defined such that no arriving packets are ever accepted during the flush phase. This enables partitioning time into disjoint intervals (determined by SA's buffer being empty at the end of each interval), and comparing the performance of SA, on the one hand, and OPT, on the other hand, independently on each interval. In practice, however, allowing the acceptance of packets during a flush phase harms neither the analysis nor the actual performance, provided it is done prudently: packets arriving during the flush phase are accepted according to the same priority the algorithm uses in the fill phase. Furthermore, the algorithm stores in its buffer packets which arrive during the flush phase, but never schedules them for processing before it has successfully transmitted all B packets that were stored in the buffer when it became saturated.

*Improved scheduling. SA sorts the queued packets in Cs-first order. For simplicity of presentation, we assumed in Section <ref> that within the set of Cs-packets, as well as within the set of non-Cs-packets, packets are internally ordered by FIFO. However, one may consider other approaches to scheduling within each of these sets (while maintaining the Cs-first order between the sets). We consider specifically the following methods: (i) FIFO; (ii) W-then-V, which orders packets by non-decreasing remaining work and breaks ties by non-increasing profit; and (iii) non-increasing packet effectiveness, where the effectiveness of a packet is defined as its profit-to-work ratio. We emphasize that the packet scheduled for processing during an admittance cycle remains a U-packet, selected uniformly at random from the U-packets arriving in this cycle. All the non-admitted U-packets, however, are placed at the tail of the queue, reflecting the fact that their priority is lower than that of every known packet. By applying different scheduling regimes, we obtain different flavors of SA*. The following theorem shows that the performance of all flavors of SA* is at least as good as the performance of SA.

SA* is O([M/r + R_w R_v]·k_w k_v)-competitive.

For the proof, see <ref>. We study the performance of the various flavors of SA* in Section <ref>.

§ PRACTICAL IMPLEMENTATION

While presenting our basic algorithm in Section <ref>, we assumed for simplicity that the values of W and V – the maximal work and profit per packet, respectively – are known to the algorithm in advance.
We now show how to relax these assumptions without harming the performance of our algorithms. We refer to an algorithm implementation that does not know these values in advance as values-oblivious, and to one that knows the values of W and V in advance as values-aware. We will show that a values-oblivious algorithm can attain performance no worse than that of a values-aware algorithm, even if the values-aware algorithm knows not only W and V, but also the concrete classes in which packets will arrive.

Our implementation of a values-oblivious algorithm is based on an application of reservoir sampling <cit.> to the classes revealed during packet arrivals, as we detail shortly. A new class is revealed either by the arrival of a K-packet p, or by a U-packet q being parsed, corresponding to a class previously unknown to the algorithm. We call such an event the uncovering of a new class. The values-oblivious implementation performs the following alongside all decisions made by the values-aware algorithm. Before the arrival sequence begins, we initialize a counter N of known classes to N=0. Upon the uncovering of a new class at cycle t, the algorithm increments N by one (to reflect the updated number of known classes) and replaces the previously selected class with the new class with probability 1/N. As the above procedure essentially performs reservoir sampling on the collection of classes known to the algorithm, it implements the selection of a class uniformly at random among all a posteriori known classes <cit.>. It therefore follows that the distribution of the packets corresponding to the eventual selected class (after the sequence ends) handled by the values-oblivious algorithm is identical to the distribution of the packets handled by the values-aware algorithm. Therefore, the expected performance of the values-oblivious algorithm is bounded from below by the expected performance of the values-aware algorithm. We note that the values-oblivious implementation can be applied to any of the variants described in the previous sections.

§ SIMULATION STUDY

In this section we present the results of our simulation study, intended to validate our theoretical results and to provide further insight into our algorithmic design. Our choice of distributions for the parameters of the traffic characteristics enables us to evaluate the performance of our algorithms in a wide range of settings. These choices, as we show in the sequel, are also motivated by the properties of real-world traffic.

§.§ Simulation Settings

We simulate a single queue in a gateway router which handles a bursty arrival sequence of packets with high work requirements (corresponding, e.g., to IPSec packets requiring AES encryption/decryption) as well as packets with low work requirements (such as plain IP packets requiring merely an IPv4-trie lookup). Arriving packets also have arbitrary profits, modeling various QoS levels. Our traffic is generated by a Markov modulated Poisson process (MMPP) with two states, LOW and HIGH, such that a burst during the HIGH state generates an average of 10 packets per cycle, while the LOW state generates an average of only 0.5 packets per cycle. The average duration of LOW-state periods is a factor W longer than the average duration of HIGH-state periods; this allows some of the traffic arriving during the HIGH state to be drained during the LOW state.
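For concreteness, the following sketch (Python; our illustration) generates such a two-state MMPP arrival trace. The rates 10 and 0.5 and the factor W come from the description above; the mean HIGH-burst length mean_high_len is an assumption, as the text does not fix it.

```python
import math
import random

def mmpp_arrivals(n_cycles, W, rate_high=10.0, rate_low=0.5,
                  mean_high_len=50.0, seed=None):
    """Two-state MMPP arrival generator: Poisson(rate_high) packets per
    cycle in the HIGH state, Poisson(rate_low) in the LOW state, with
    LOW periods lasting a factor W longer on average (geometric
    sojourn times give the desired mean durations)."""
    rng = random.Random(seed)
    state, arrivals = "LOW", []
    for _ in range(n_cycles):
        p_leave = (1.0 / mean_high_len if state == "HIGH"
                   else 1.0 / (W * mean_high_len))
        if rng.random() < p_leave:
            state = "LOW" if state == "HIGH" else "HIGH"
        lam = rate_high if state == "HIGH" else rate_low
        k, p, L = 0, 1.0, math.exp(-lam)   # Knuth's Poisson sampler
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        arrivals.append(k)
    return arrivals
```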
In our simulations we do not deterministically bound the maximum number M of U-packets arriving in a cycle; instead, we control the expected intensity of U-packets by letting each arriving packet be a U-packet with some probability α ∈ [0,1]. We thus obtain that the expected number of U-packets per cycle during the HIGH state is 10α.

In real-life scenarios, the maximum work W required by a packet is highly implementation-dependent: it depends on the specific hardware, processing elements, and software modules. However, several works which investigated the work required by typical tasks <cit.> indicate that W is two orders of magnitude larger than the work required for an IPv4-trie search or for classifying a packet. We take the IPv4-trie search or the classification of a packet as the baseline unit of work, captured by our notion of "parsing". We therefore set the maximum work required by a packet to W=256 throughout this section. As the potential set of characteristics is large, we use a logarithmic classification scheme (recall Section <ref>).

Determining the maximum profit V associated with a packet is a challenging task: this value depends on implementation details as well as on proprietary commercial and business considerations. In order to have a diverse set of values modeling distinct QoS requirements, we set the maximum profit associated with a packet to V=16 throughout this section. The values W=256 and V=16 imply a total of 8·4 = 32 potential classes for the algorithm to select from. The value of each characteristic of each packet is drawn from an approximation of a Pareto distribution, as follows. First, we randomly generate numbers following a Pareto distribution. Next, the numbers are rounded to integer values. Finally, to disallow values above the maximum (256 for work values and 16 for profit values), all randomly generated values above the maximum are truncated, i.e., treated as if the generated value were exactly the maximal value. The averages and standard deviations of the values obtained by this generation process are 17.97 and 22.22 for packet work, and 3.66 and 3.20 for packet profit. The schematic probability distribution function of the characteristic values is depicted in Fig. <ref>; note the spike at the maximum, due to the truncation described above. Unless stated otherwise, we assume that B=10, r=1, and each arriving packet is a U-packet with probability α = 0.3. We thus obtain that the expected number of U-packets arriving per cycle during the HIGH state is 0.3·10 = 3.

As a benchmark serving as an upper bound on the optimal performance possible, we consider a relaxation of the offline problem as a knapsack problem. Arriving packets are viewed as items, each with its size (corresponding to the packet's work) and value (corresponding to the packet's profit). The allocated knapsack size equals the number of time slots during which packets arrive. The goal is to choose a highest-value subset of items which fits within the given knapsack size. This is indeed a relaxation of the problem of maximizing throughput during the arrival sequence in the offline setting, since the knapsack problem is restricted neither by a finite buffer size during the arrival sequence, nor by the arrival times of packets (e.g., it may "pack" packets even before they arrive).
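A minimal sketch of the classic greedy 2-approximation we use to compute this benchmark (value-density greedy versus the best single item), assuming items is a list of (work, profit) pairs, follows; the concrete items below are arbitrary examples.

```python
def greedy_knapsack(items, capacity):
    """Classic 2-approximation for 0/1 knapsack: pack greedily by
    profit/work density, then return the better of the greedy packing
    and the single most profitable item that fits on its own."""
    by_density = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    used, greedy_value = 0, 0
    for w, v in by_density:
        if used + w <= capacity:
            used += w
            greedy_value += v
    best_single = max((v for w, v in items if w <= capacity), default=0)
    return max(greedy_value, best_single)

# items as (work, profit) pairs; capacity = number of arrival slots
print(greedy_knapsack([(4, 4), (2, 8), (3, 3), (2, 7)], capacity=6))
```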
We employ this classic 2-approximation greedy algorithm for the knapsack problem <cit.>, and use its performance as an approximate upper bound on the performance of OPT. To account for the additional profit which OPT may gain from packets residing in its buffer at the end of the arrival sequence, we simply grant the offline approximation an additional throughput of BV for free, which is an upper bound on the benefit it may achieve after the arrival sequence ends.

We compare the performance of the studied algorithms by evaluating their performance ratio, i.e., the ratio between an algorithm's performance and that of our approximate upper bound on the performance of OPT. We compare the performance of the following algorithms:

* FIFO: A simple greedy non-preemptive FIFO discipline that accepts packets and processes each packet to completion, regardless of its required work or value.
* SA: Algorithm SA, described in Section <ref>.
* SA*-FIFO: Algorithm SA* where priority ties are broken by FIFO order.
* SA*-W-Then-V: Algorithm SA* where priority ties are broken in non-decreasing order of remaining work, and further ties are broken in non-increasing order of profit. This variant is denoted W-V in Figures <ref>-<ref>.
* SA*-EFFECT: Algorithm SA* where priority ties are broken in non-increasing order of profit-to-work ratio.

We recall that all flavors of SA* listed above maintain a Cs-first order and differ only in the internal ordering within each set (namely, within the set of Cs-packets, as well as within the set of non-Cs-packets). All flavors of SA* described above employ the class-closure and fill-during-flush modifications defined in Section <ref>. For each choice of parameters we show the average over 100 independently generated traces of 10K packets each. In all our simulations the standard deviation was below 0.035.

§.§ Simulation Results

Figures <ref>-<ref> show the results of our simulation study. First, we note that SA exhibits a very low performance ratio, similar to that of a simple FIFO (which disregards packet parameters altogether). This is due to the fact that SA focuses only on a specific class, which constitutes a relatively small part of the input; it thus spends processing cycles on packets that are never eventually transmitted. Among the variants of SA* we consider, in all simulations the best scheduling policy is by non-increasing effectiveness, followed by the W-then-V approach. FIFO scheduling, in spite of being simple and attractive, comes in last in all scenarios. This behavior is explained by the fact that the two former scheduling policies in SA* clear the buffer more effectively once it is saturated, whereas the FIFO scheduling approach clears the buffer in an oblivious manner and therefore does not free up space for new arrivals fast enough. We now turn to discuss each of the scenarios considered in our study.

§.§.§ The Effect of the Selected Class

Our first set of results sheds light on the effect of the class selected by an algorithm on its performance. Fig. <ref> shows the results where the selected profit-class j^* is 1, which makes SA* allow all profits, and the choice of work-class i^* varies. The most interesting phenomenon is exhibited by SA*-FIFO.
Its performance is very poor when the work-class may contain packets requiring very little work. This is due to the fact that only a small fraction of the traffic requires so little work, and the algorithm therefore rarely becomes saturated. As a consequence, the algorithm handles many low-priority packets in FIFO order, giving rise to far-from-optimal decisions. The algorithm steadily improves up to some point, and then its performance deteriorates fast as it assigns high priority to packets with increasingly higher processing requirements. In this case the algorithm becomes saturated too frequently, and allocates many processing cycles to low-effectiveness packets. The maximum performance is achieved for i^*=3, which implies that the algorithm flushes whenever its buffer is filled with packets whose work is at most 2^{i^*}=8. This value suffices to allow the algorithm to prioritize a rather large portion of the arrivals (recalling the Pareto distribution governing packet work values), while ensuring that the processing toll of high-priority packets is not too large. This strikes a (somewhat static) balance between the amount of work required by a packet and its expected potential profit. The other variants of SA* exhibit gradually decreasing performance, due to their greater readiness to compromise on the required work of the packets they deem high-priority traffic.

SA shows a similar performance deterioration, for a similar reason, when the selected work-class i^* is increased from 1 up to 6. However, when i^* is increased above 6, SA's performance increases again. This improvement is explained by the fact that, due to the Pareto distribution of the work values, the number of packets belonging to each work-class diminishes rapidly for work-class indices closest to the maximum of 8; recall that SA over-prioritizes only packets which belong to a single randomly selected class, i.e., SA does not employ the class-closure optimization (described in Section <ref>). In such a case, SA is coerced into also processing packets which do not belong to the selected class – namely, packets with lower work – which somewhat compensates for the poor choice of work-class. We verified this explanation by additional simulations (not shown here) in which the work-class of packets was chosen from the uniform distribution. In such a case, where there is an abundance of packets from every possible work-class, the performance of SA consistently degrades with the increase of i^*, which implies a poorer choice of work-class.

Similar phenomena are exhibited in Fig. <ref>, where we consider the effect of the profit-class j^* selected by an algorithm on its performance. In this set of simulations all work values were allowed (i.e., the selected work-class is 8). In this scenario the performance of all algorithms improves as the selected profit-class index increases, and the algorithms are able to better restrict their focus to high-profit packets as the packets receiving high priority. We note that SA*-FIFO and plain FIFO have matching performance when the selected profit-class is 1, since in this case SA*-FIFO is identical to plain FIFO (it simply accepts and processes all incoming packets indiscriminately in FIFO order). In the subsequent results described hereafter, we fix both the work-class and the profit-class to 3, which represents a mid-range class for both work and profit.

§.§.§ The Effect of Missing Information
Fig. <ref> illustrates the performance ratio of our algorithms as a function of the expected number of U-packets arriving during the HIGH state, where we vary the value of α from 0 to 1. This provides further insight into the performance of each algorithm as a function of the intensity of unknown packets. We recall that for our choice of parameters, the values of α translate to an expected number of unknown packets per cycle during the HIGH state varying from 0 to 10. As one could expect, the performance ratio of SA and of all versions of SA* degrades as the amount of uncertainty increases.

Finally, we study the intensity of exploring unknown packets, as governed by the choice of the parameter r which determines whether a cycle is an admittance cycle. The results depicted in Fig. <ref> consider the case of high uncertainty, where α=1, that is, all arriving packets are unknown. Observe first the special case r=0, which represents an extreme case in which, although all arriving packets are unknown, our algorithms do not explore any new packets; they degenerate to a simple FIFO and therefore exhibit identical performance. Increasing the admittance probability r, however, yields a steady increase in performance, albeit with diminishing returns. Similar results were obtained when some of the packets are known, but with smaller marginal benefits. These results coincide with our analytic results, further validating our algorithmic approach.

§ CONCLUSIONS AND FUTURE WORK

We consider the problem of managing buffers where traffic has unknown characteristics, namely required processing and profits. We show lower bounds on the competitive ratio of any online algorithm for the problem. We define several algorithmic concepts targeted at such settings, and develop several algorithms that follow our suggested prescription. Our theoretical analysis shows that the competitive ratio of our algorithms is not far from the best competitive ratio any online algorithm can achieve. We validate the performance of our algorithms via simulations, which further serve to elucidate our design criteria. Our work can be viewed as a first step in developing fine-grained algorithms for handling scenarios of limited knowledge in networking environments with highly heterogeneous traffic. Our work gives rise to a multitude of open questions, including: (i) closing the gap between our lower and upper bounds for the problem; (ii) applying our proposed approaches to other limited-knowledge networking environments; and (iii) devising additional algorithmic paradigms for handling limited knowledge in heterogeneous settings.

§ PRELIMINARIES

We now define some of the notation used throughout the appendix. For every cycle t and packet type α, we denote by A^α(t) the number of α-packets that arrive in cycle t. For instance, A^K(t) (A^U(t)) denotes the number of K-packets (U-packets) which arrive in cycle t. This notation can be combined with the work and profit values of packets: for instance, A^U_(w,v)(t) denotes the number of U-packets with work w and profit v which arrive in cycle t.

Our proofs involve a careful analysis of the expected profit of our algorithms from packets which arrive while the algorithm is in either the fill or the flush phase. We therefore define the exact notion of cycles belonging to either phase. We say that an algorithm is in the flush phase in a specific cycle t if it is in the flush state at the end of the arrival step of cycle t.
If it is not in the flush phase in cycle t, we say it is in the fill phase in cycle t. Denote by P^fill and P^flush the sets of cycles in which our algorithm is in the fill and flush phases, respectively. For every packet type α, we denote by S_α(t) the expected profit of the algorithm from α-packets which arrive in cycle t, and by S_α = ∑_t S_α(t) the overall expected profit of SA from α-packets. We denote by O_α the expected profit of some optimal solution, OPT, from α-packets. Again, these notations can be combined with the previous ones: for instance, O_CsK(t) denotes the expected profit of OPT from CsK-packets which arrive in cycle t, and O^fill_CsK denotes the expected profit of OPT from CsK-packets which arrive during P^fill.

§ PROOF OF THEOREM <REF>

Our proof follows from a series of propositions. Initially, we aim to prove that SA successfully transmits every CsK-packet which arrives during the fill phase, by showing that it never drops such a packet once it has been accepted into the buffer.

SA successfully transmits every CsK-packet which arrives during the fill phase.

We first note that any CsK-packet arriving during the fill phase (depicted by the while loop in lines <ref>-<ref>) is accepted (line <ref>). Next, we show that SA never drops a CsK-packet which resides in its buffer. We consider all cases where SA drops a packet from its buffer, and prove that the dropped packet cannot be a CsK-packet. In line <ref>, SA drops an admitted packet, namely a picked U-packet, and not a CsK-packet. In line <ref>, SA performs the push-out procedure, which may result in dropping the last packet in the buffer. However, as this line dwells within the while loop of lines <ref>-<ref>, we know that the phase is fill, and therefore there are at most B-1 CsK-packets in the buffer. Furthermore, if there are exactly B-1 CsK-packets in the buffer, the if-clause in lines <ref>-<ref> ensures that there is no admitted packet in the buffer. Hence, if the buffer is full, it contains at least one low-priority packet – namely, a packet which is neither admitted nor a CsK-packet. After sorting the buffer, this low-priority non-Cs packet is located at the tail of the queue and is the one dropped. SA may also perform the push-out procedure in line <ref>, when the U-packet counter equals 1. In this case, the arriving packet p is the first U-packet arriving in this cycle – and it is not yet admitted. As a result, there is no admitted packet in the buffer. Furthermore, as this line is executed during the fill phase (the while loop of lines <ref>-<ref>), there are at most B-1 CsK-packets in the buffer. Hence, if the buffer is full, it contains at least one low-priority, non-CsK-packet, which is the packet dropped.

The following lemma shows that the overall expected profit of SA from Cs-packets is at least a significant fraction of the profit obtained by an optimal policy from Cs-packets accepted during the fill phase.

S_Cs ≥ (r/M)·O^fill_Cs.

Let t denote a cycle in the fill phase in which U-packets arrive. Then, with probability r, SA admits one U-packet, denoted p. As the algorithm implements reservoir sampling <cit.>, p is picked uniformly at random out of at most M unknown arrivals, and therefore the probability that p ∈ Cs is at least A^CsU(t)/M. As p is parsed in its cycle of arrival, it is known in all subsequent cycles. By Proposition <ref>, if p is a Cs-packet, then SA will eventually transmit p. Recalling that X_{i^*} and X_{j^*} denote the ranges of the work and profit values within the selected work- and profit-classes of C_(i^*,j^*) (see Sec. <ref>), we conclude that

S_CsU(t) ≥ (r/M) ∑_{w ∈ X_{i^*}, v ∈ X_{j^*}} v·A^U_(w,v)(t).

Summing Eq. <ref> over all the cycles of the fill phase,

S_CsU ≥ (r/M) ∑_{t ∈ P^fill} ∑_{w ∈ X_{i^*}, v ∈ X_{j^*}} v·A^U_(w,v)(t) ≥ (r/M)·O^fill_CsU.
In addition, by Proposition <ref>, S_CsK ≥ O^fill_CsK. Therefore

S_Cs = S_CsK + S_CsU ≥ (r/M)·(O^fill_CsK + O^fill_CsU) = (r/M)·O^fill_Cs.

We are now in a position to prove Theorem <ref>. Every class C_(i,j) is the selected class with probability 1/(k_w k_v). Using Lemma <ref> we therefore have, for all i ∈ {1,…,k_w} and j ∈ {1,…,k_v},

S_(i,j) ≥ (r/(M k_w k_v))·O^fill_(i,j).

Summing over all the classes, we obtain that the expected performance of our algorithm satisfies

∑_{i=1}^{k_w} ∑_{j=1}^{k_v} S_(i,j) ≥ (r/(M k_w k_v)) ∑_{i=1}^{k_w} ∑_{j=1}^{k_v} O^fill_(i,j).

If SA is never saturated during an arrival sequence, then O_(i,j) = O^fill_(i,j), and therefore, by Eq. <ref>, the ratio between the performance of OPT and the expected throughput of SA is at most (M/r)·k_w k_v, as required.

Assume next that SA becomes saturated during the input sequence. In this case we compare the overall throughput due to packets transmitted by SA until the first cycle in which its buffer is empty again, with the profit obtained by OPT due to packets accepted by OPT during the same interval. We note that our analysis also applies to subsequent such intervals, namely, up to the next cycle in which the buffer of SA is empty again. We note that in case SA becomes saturated, it holds in its buffer exactly B Cs-packets, and all these packets are transmitted by the time its buffer is empty again. By the definition of R_w in Section <ref>, the maximal work which SA dedicates to any of these packets is at most R_w times the minimal work which OPT dedicates to any Cs-packet. As a result, during the flush phase, in which SA handles B Cs-packets, OPT can handle at most B + R_w·B Cs-packets. Furthermore, by the definition of R_v in Section <ref>, the maximal profit of OPT from any Cs-packet is at most R_v times the minimal profit of SA from any Cs-packet. Combining the above reasoning implies that

O^flush_Cs / S^flush_Cs ≤ ((B + R_w·B)/B)·R_v = R_v(R_w + 1).

As every class C_(i,j) is the selected class w.p. 1/(k_w k_v), we have, for all i ∈ {1,…,k_w} and j ∈ {1,…,k_v},

S_(i,j) ≥ (1/(R_v(R_w+1)·k_w k_v))·O^flush_(i,j).

Summing over all the classes, we obtain

∑_{i=1}^{k_w} ∑_{j=1}^{k_v} S_(i,j) ≥ (1/(R_v(R_w+1)·k_w k_v)) ∑_{i=1}^{k_w} ∑_{j=1}^{k_v} O^flush_(i,j).

Combining Equations <ref> and <ref> implies that the competitive ratio of SA is at most

∑_{i,j} [O^fill_(i,j) + O^flush_(i,j)] / ∑_{i,j} S_(i,j) ≤ [M/r + R_v(R_w + 1)]·k_w k_v,

which completes the proof.

In the special case where all packets are known upon arrival, we obtain the following upper bound on the competitive ratio of SA:

When M=0, SA is O(R_w R_v k_w k_v)-competitive.

We follow the proof of Theorem <ref> and carefully check the required changes. When all packets are known, Proposition <ref> remains essentially intact. Furthermore, we have S_Cs = S_CsK ≥ O^fill_CsK = O^fill_Cs, which replaces Lemma <ref>. Accordingly, Eq. <ref> is modified to

∑_{i=1}^{k_w} ∑_{j=1}^{k_v} S_(i,j) ≥ (1/(k_w k_v)) ∑_{i=1}^{k_w} ∑_{j=1}^{k_v} O^fill_(i,j).

Equation <ref> remains intact, as its derivation uses only the classify-and-randomly-select scheme, independently of M. Combining Equations <ref> and <ref> implies that when all packets are known, the competitive ratio of SA is at most

∑_{i,j} [O^fill_(i,j) + O^flush_(i,j)] / ∑_{i,j} S_(i,j) ≤ [1 + R_v(R_w + 1)]·k_w k_v,

which completes the proof.

§ PROOF OF THEOREM <REF>

We first consider the effect of selecting a class closure uniformly at random, instead of selecting a specific class. First, note that the proof of Lemma <ref> applies directly to SA*, implying that S^*_Cs ≥ (r/M)·O^fill_Cs. Furthermore, the arguments used in the proof of Theorem <ref> also apply to SA*; in particular, SA* satisfies Equation <ref>, where S_(i,j) in the denominator is replaced by S^*_(i,j). Consider next the effect of performing fill during flush.
In SA* we accept packets also during the flush phase, but we never process any of these packets before all the packets that contributed to the algorithm becoming saturated have been transmitted, i.e., they are never processed before the flush phase is complete. We enumerate the fill phases and the subsequent flush phases as P^fill_1, P^flush_1, P^fill_2, P^flush_2, …, P^fill_n, P^flush_n, where n ≥ 1. It should be noted that each such phase corresponds to a series of disjoint time intervals determined by the first cycle of the sequence of phases. We further let P^flush_0 denote an empty set of cycles, and in case the sequence ends with a fill phase, we also let P^flush_n denote an empty set of cycles. Similarly, we define P^{*fill}_i and P^{*flush}_i, for the appropriate values of i, to denote the fill and flush phases corresponding to SA*.

Denote the profit accrued by SA and OPT from packets which arrive during the i-th fill phase by S(P^fill_i) and O(P^fill_i), respectively. Similarly, denote the profit of SA and OPT obtained from packets which arrive during the i-th flush phase by S(P^flush_i) and O(P^flush_i), respectively, and let S^*(P^{*fill}_i) and S^*(P^{*flush}_i) denote the profit of SA* obtained from packets which arrive during its i-th fill and flush phases, respectively. Using this notation, we recall that, by the analysis of SA presented in Theorem <ref>,

O(P^fill_i) + O(P^flush_i) ≤ [M/r + R_v(R_w+1)]·k_w k_v·S(P^fill_i)

for every i=1,…,n. This induces an implicit mapping ϕ of the units of profit obtained from Cs-packets accepted by OPT during P^fill_i ∪ P^flush_i to the units of profit obtained from Cs-packets accepted by SA during P^fill_i (either known, or unknown packets that were parsed), such that every unit of profit obtained by SA has at most [M/r + R_v(R_w+1)]·k_w k_v units of profit mapped to it. A key observation is that the image of the mapping ϕ is essentially the profit attained from the set of Cs-packets contributing to the algorithm being saturated at the end of the corresponding fill phase.

As SA* may accept packets during a flush, at the beginning of the subsequent fill phase the buffer of SA* may not be empty. In particular, there may be Cs-packets accepted during the recent flush phase that are stored in the buffer. However, no profit is yet mapped to these packets. It follows that these packets can contribute to SA* becoming saturated in the new fill phase, and any profit implicitly mapped by ϕ to the profit of these packets corresponds to packets arriving during the new fill phase or its subsequent flush phase. The implicit mapping is depicted in Fig. <ref>, along with the difference between the mapping arising from the behavior of SA (visualized above the time axis) and the mapping arising from the behavior of SA* (visualized below the time axis). Note that the fill and flush phases of the two algorithms need not be synchronized, since SA* can potentially become saturated "faster" than SA. It follows that Eq. <ref> now translates to

O(P^{*fill}_i) + O(P^{*flush}_i) ≤ [M/r + R_v(R_w+1)]·k_w k_v·[S^*(P^{*flush}_{i-1}) + S^*(P^{*fill}_i)]

for every i=1,…,n. Summing over all i=1,…,n, we obtain for SA* the same competitive ratio guarantee as for SA. Lastly, the analysis of SA does not assume any specific scheduling rule, so long as the Cs-first order is maintained. Thus, our competitive ratio guarantee is independent of the specific ordering within the set of Cs-packets, as well as within the set of non-Cs-packets.

§ RUNNING EXAMPLE OF SA

Figure <ref> exemplifies a run of SA equipped with a 3-slot buffer. Each packet is represented by a square. If it is a known (w,v)-packet, then (w,v) (namely, its work and profit values, respectively) appears within the square representing the packet.
If the packet is unknown, however, its (unknown) work and profit values do not appear, and the packet's color is dark gray. Known packets which belong to the selected class (CsK-packets) are marked in light gray. The figure assumes that the (randomly chosen) selected class is the class of packets with work and profit values within the range [3,4]. Recall that this range refers to the characteristics of a packet upon arrival. For instance, a (3,3)-packet always belongs to the selected class, although after being processed its residual work decreases, and it becomes a (2,3)-packet, later a (1,3)-packet, and so on.

Each cycle begins with the transmission step, in which a fully processed packet, if one exists, leaves the queue. In our example no packet is transmitted, since we focus our attention on the handling of arrivals and the determination of priorities, which are the core components of our algorithm. This step is followed by the arrival step, where arriving packets are handled by the algorithm. Finally, the cycle ends with a processing step, where the head-of-line (HoL) packet is processed. This packet is emphasized by an extra internal square. The state of the queue at the end of each cycle is depicted by a light-gray background. In each cycle the algorithm tosses a coin and designates the cycle as an admittance cycle w.p. r. In this example, we assume that cycles 1, 3, 5, 6 are admittance cycles. We now explain the scenario depicted in Figure <ref> cycle by cycle.

t=0: Begin with an empty buffer.

t=1: A known (4,4)-packet arrives. As both its work and profit values belong to the range [3,4], it is a CsK-packet, and therefore it is retained by the algorithm (recall that CsK-packets are never dropped during the fill phase, as shown in Proposition <ref>). Next, a U-packet arrives. As this is an admittance cycle, this U-packet is admitted, that is, accepted into the buffer and assigned to the HoL. Since this is the last packet to arrive in this cycle, and being the HoL packet, it is processed in the processing step. We refer to this packet as being parsed in this cycle, as this is its first processing cycle. After parsing, the characteristics of the HoL packet become known: it is now a known (1,8)-packet. Namely, when it arrived, it was a (2,8)-packet which has since received one cycle of processing. By these values, this packet does not belong to the selected class. Therefore, it is pushed down to the buffer's tail, and instead the CsK-packet with values (4,4) is assigned to be the HoL packet. It should be noted that although the parsed (1,8)-packet is superior to any Cs-packet currently in the buffer (it carries a profit of 8 while requiring just one more cycle of work), SA still prefers CsK-packets over this packet. We note that the improved algorithm SA* would re-classify such a packet as a Cs-packet by considering the class closure.

t=2: No packets arrive. The HoL packet, (4,4), is processed and becomes a (3,4)-packet.

t=3: This is an admittance cycle; therefore, the first arriving U-packet is admitted. In particular, this cycle exemplifies the buffer's ordering well: at top priority is the admitted packet; at second priority is the CsK-packet, (3,4); the remaining packet in the buffer, (1,8), has the lowest priority. When a second U-packet arrives, SA tosses a coin and replaces the previously admitted packet with the newly arriving U-packet w.p. 1/2. When a third U-packet arrives, SA tosses a coin again and replaces the previously admitted packet with the newly arriving U-packet w.p. 1/3.
In the processing step, SA parses the admitted packet, revealing it to be a (3,3)-packet. Namely, upon arrival its characteristics were (4,3), ascribing it to the selected class. As there already exists another CsK-packet in the buffer (the (3,4)-packet), SA breaks the tie between the two CsK-packets in its buffer by FIFO order. We note that the improved algorithm SA* would transition to the flush phase at this point, since its buffer would have been full of Cs-packets.

t=4: First, a known (2,5)-packet arrives. By its characteristics, it is not a Cs-packet; therefore, it is assigned the lowest priority. In particular, as the buffer is full, this packet is discarded. Next, a U-packet arrives. However, as this is a non-admittance cycle, the U-packet is discarded as well. Finally, during the processing step, the HoL packet is processed, decreasing its remaining work to 2.

t=5: A single U-packet arrives. As this is an admittance cycle, this U-packet is admitted, hence accepted and parsed. In order to make room for this admitted packet, the (1,8)-packet at the tail is pushed out and dropped. After parsing, the U-packet is uncovered as a (1,2)-packet; namely, upon arrival it was a (2,2)-packet. By these characteristics, this packet does not belong to the selected class, and therefore has the lowest priority and is downgraded to the tail.

t=6: This is an admittance cycle; therefore, the first arriving U-packet is admitted, pushing out of the buffer the (1,2)-packet which was at the tail. When a second U-packet arrives, it replaces the previously admitted packet w.p. 1/2. Then, a (2,7)-packet arrives. By its characteristics, it is neither an admitted packet (as it is a K-packet), nor does it belong to the selected class. As a result, the (2,7)-packet is assigned the lowest priority and is therefore discarded. The last arrival in this cycle is a known (4,4)-packet. By its characteristics, it is a CsK-packet. Since the buffer already contains B-1=2 CsK-packets, the U-packet at the HoL is dropped, and the newly arriving CsK-packet is accepted into the queue (see lines <ref>-<ref> in Algorithm <ref>). The queue therefore becomes saturated, i.e., the buffer is full of CsK-packets. SA then switches to the flush state, in which it merely processes all the packets in its buffer in a run-to-completion manner and transmits all fully processed packets, until the buffer is empty again.
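The uniform choice of the admitted U-packet in cycles 3 and 6 above (replacement w.p. 1/2 after the second arrival, 1/3 after the third) is an instance of reservoir sampling with a reservoir of size one; a minimal sketch of that choice rule (Python; our illustration):

```python
import random

def admit_uniformly(u_packets, rng=random):
    """Reservoir of size one: the m-th U-packet of the cycle replaces
    the current admitted packet with probability 1/m, so the survivor
    is uniform over all U-packets that arrived in the cycle."""
    admitted = None
    for m, pkt in enumerate(u_packets, start=1):
        if rng.random() < 1.0 / m:
            admitted = pkt
    return admitted
```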
Local contractivity of the Φ_4^4 mapping

Marietta Manolessou
EISTI-Department of Mathematics
January 16, 2017

We show the existence and uniqueness of a solution to a Φ_4^4 nonlinear renormalized system of equations of motion in Euclidean space. This system represents a nontrivial model which describes the dynamics of the Φ_4^4 Green's functions in the Axiomatic Quantum Field Theory (A.Q.F.T.) framework. The main argument is the local contractivity of the so-called "new mapping" in the neighborhood of a particular "tree type" sequence of Green's functions. This neighborhood (and the Φ_4^4 nontrivial solution) belongs to a particular subset of the appropriate Banach space, characterized by signs, splitting (analogous to that of the Φ_0^4 solution), axiomatic analyticity properties, and "good" asymptotic behavior with respect to the four-dimensional Euclidean external momenta.

§ INTRODUCTION

§.§ A new non perturbative method

Several years ago we initiated a program for the construction of a nontrivial Φ^4_4 model consistent with the general principles of a Wightman Quantum Field Theory (Q.F.T.) <cit.>. In references <cit.> we introduced a non perturbative method for the construction of a nontrivial solution of the system of the Φ^4 equations of motion for the Green's functions, in the Euclidean space of zero, one and two dimensions. In references <cit.> we tried to apply an extension of this method to the case of a four- (and a fortiori of a three-) dimensional Euclidean momentum space, through the proof of a global contractivity principle inside an appropriate Banach space. However, that method was rather complicated and could also profit from more precise norm definitions. Using a Banach space "analogous" to that of zero dimensions, and the local contractivity of the mapping (the renormalized equations of motion of the four-dimensional problem) in a neighborhood of the zero-dimensional solution, we are able to present an easily convincing proof of the Φ^4_4 nontriviality. This method differs in approach from the work done in the Constructive Q.F.T. framework of Glimm-Jaffe and others <cit.> <cit.>, and from the methods of Symanzik, who created the basis for a pure Euclidean approach to Q.F.T. <cit.>.

In the Q.F.T. language, the interaction of four scalar fields Φ(x) is represented by a Lagrangian of the type L_I ∼ ΛΦ^4, for example <cit.>:

L ∼ ΛΦ^4 - ∂_μΦ ∂^μΦ - m^2 Φ^2.

It is mathematically described in the four-dimensional Minkowski space with coordinates

x = { x⃗ ∈ ℝ^3, x_0 ∈ ℝ; ‖x‖ = √(x_0^2 - x⃗^2) },

by the following two-fold set of dynamical equations:

i. a nonlinear differential equation (the equation of motion), resulting from the corresponding Lagrangian by application of the variational principle:

-(□ + m^2) Φ(x) = (Λ/(ρ+γ)) :Φ^3(x): - a Φ(x);

ii. the "conditions of quantization" of the field Φ(x), expressed by the equal-time commutation relations:

[Φ(x), Φ(y)] = [Φ̇(x), Φ̇(y)] = 0, x_0 = y_0,
[Φ(x), Φ̇(y)] = (iγ/(ρ+γ)) δ^3(x⃗ - y⃗), x_0 = y_0.

Here m>0 and Λ>0 are the physical mass and coupling constant of the interaction model, and a, ρ, γ are physically well-defined quantities associated with this model, the so-called renormalization constants. For the precise definition of the latter and of the normal product :Φ^3(x): we refer the reader to references <cit.>.
From these equations one can formally derive an equivalent infinite system of nonlinear integral equations of motion for the Green's functions (the "vacuum expectation values") of the theory, analogous, but not identical, to the Dyson-Schwinger equations <cit.> <cit.>. This dynamical system has been established in 4 dimensions by using the Renormalized Normal Product of <cit.>.

§.§ The dynamic system of the Green's functions

The infinite system of equations for the Schwinger functions (i.e., the connected Green's functions, completely amputated with respect to the free propagator), H = {H^{n+1}}_{n=2k+1, k∈ℕ}, in the Euclidean 4-dimensional momentum space E^{4n}_{(q)}, derived from the system <ref>, <ref>, <ref>, <ref>, has the following form:

H^2(q,Λ) = -(Λ/(γ+ρ)) { N_3 H^4 - a H^2(q,Λ) Δ_F(q) } + (q^2+m^2)·γ/(γ+ρ);

for all n ≥ 3, (q,Λ) ∈ E^{4n}_{(q)} × ℝ^+:

H^{n+1}(q,Λ) = (1/(γ+ρ)) { A^{n+1} + B^{n+1} + C^{n+1}(q,Λ) + Λ a H^{n+1}(q,Λ) },

with

A^{n+1}(q,Λ) = -Λ N^{(n+2)}_3 H^{n+3}(q,Λ),
B^{n+1}(q,Λ) = -3Λ ∑_{ϖ_n(J)} N^{(j_2)}_2 H^{j_2+2} N^{(j_1)}_1 H^{j_1+1}(q,Λ),
C^{n+1}(q,Λ) = -6Λ ∑_{ϖ_n(I)} ∏_{l=1,2,3} N^{(i_l)}_1 H^{i_l+1}(q_{i_l},Λ),

and the renormalization constants defined by

a(Λ) = [N_3 H^4]_{q^2+m^2=0},
ρ(Λ) = -Λ [ ∂^2/∂q^2 [N^{(3)}_3 H^4] ]_{q^2+m^2=0},
γ(Λ) = [ -6Λ ∏_{l=1,2,3} H^2(q_l) Δ_F(q_l) / H^4(q) ]_{q=0}.

* Here the notations N^{(n+2)}_3 H^{n+3}, N^{(j_2)}_2 H^{j_2+2} N^{(j_1)}_1 H^{j_1+1} and ∏_{l=1,2,3} N^{(i_l)}_1 H^{i_l+1} represent the so-called "Φ^4_4 operations" that we introduced in the Renormalized G-Convolution Product (R.G.C.P.) context of references <cit.>, <cit.>, <cit.>. Briefly, the two-loop Φ^4_4 operation is defined by

N^{(n+2)}_3 H^{n+3} = ∫ R^{(3)}_G H^{n+3} ∏_{i=1,2,3} Δ_F(l_i) d^4k_1 d^4k_2,

with R^{(3)}_G the corresponding renormalization operator for the two-loop graph with bubble vertex the H^{n+3} Green's function. The analogous expression for the one-loop Φ^4_4 operation is

[N^{(j_2)}_2 H^{j_2+2}][N^{(j_1)}_1 H^{j_1+1}] = H^{j_1+1} Δ_F ∫ R^{(2)}_G [ H^{j_2+2} ∏_{i=1,2} Δ_F(l_i) ] d^4k,

with R^{(2)}_G the corresponding renormalization operator. The notation Δ_F indicates the free propagator, and the Φ^4_4 operation N^{(j_1)}_1 is simply multiplication ("trivial convolution") by the corresponding free propagator Δ_F = 1/(q^2+m^2). Here q^2 means the squared Euclidean norm of the vector q ∈ E^{4n}_{(q)}.

* In equations <ref> the notation q in the arguments of the two-point and four-point Green's functions is used indifferently for q ∈ E^{12}_{(q)} or q ∈ E^4_{(q)}.

* In the previous notations, and in all that follows, ℕ means the set of non-negative integers, and n will always be an odd positive integer.

§.§ The "primary Φ-Iteration"

§.§.§ The fixed point method

The method is based on the proof of the existence and uniqueness of the solution of the above infinite system of dynamical equations of motion verified by the Schwinger functions, following a fixed-point-theorem argument.
The information concerning the special features of the dynamics of four interacting fields was obtained through an iteration, at fixed coupling constant and at zero external momenta, of these integral equations of motion in the two-dimensional case, taking the free solution as starting point. This is what was called the "Φ-Iteration" in <cit.>. The exploration of the detailed organization of the different structural global terms of the H^{n+1}_ν functions at every order ν of what we now call the "primary Φ-Iteration" brought forth particular properties, such as:

(a) alternating signs and splitting (or factorization) properties at zero external momenta:

H^{n+1}(q=0,Λ) = -n(n-1) δ_n(Λ) H^{n-1}(q=0,Λ) [H^2(q=0,Λ)]^2,

with {δ_n} a bounded increasing sequence of continuous functions of Λ, uniformly convergent to some finite positive constant δ_∞;

(b) bounds at zero external momenta, which in turn yield global bounds of the general form

|H^{n+1}(q,Λ)| ≤ n! K^n.

These features formed a self-consistent system of conditions conserved by the "primary Φ-Iteration". In particular they implied precise "norms" of the sequences of the Green's functions H^{n+1}:

‖H‖ = sup_{q,n,Λ} { M_n^{-1}(q,Λ) |H^{n+1}(q,Λ)| }.

These norms in turn were conserved and automatically ensured the convergence of this "primary Φ-Iteration" to the solution. So, in references <cit.> and <cit.> we thought of obtaining an answer to the problem by first defining a Banach space B_initial using the norms provided by the primary Φ-Iteration, and we sought a fixed point of the equations of motion inside a characteristic subset Φ ⊂ B_initial which exactly imitated the fine structure of the Φ-Iteration.

Now, taking into account the divergence of the perturbative series of a Φ^4_2 model, a result that A. Jaffe <cit.> established several years ago, a (two-part) question immediately arises. Does the two-dimensional Φ-Iteration generate the perturbative series exactly? If yes, is there any contradiction between the divergence of the perturbative expansion and the convergence of the Φ-Iteration? The answer is that the Φ-Iteration has nothing to do with the perturbation series: the method is not a reconstruction of perturbation theory. More precisely, all terms of the N-th order of the perturbation series are included, together with subsequent ones, at every order N of the Φ-Iteration (cf. <cit.>). In other words, a series like that of perturbation theory may be divergent while a series of polynomials of its terms is still convergent. The difference between the two approaches comes from the different ways in which the polynomials in Λ are arranged and summed up in each of the two approximations. So there is automatically no contradiction if the Φ-Iteration converges to a nontrivial solution despite the divergence of perturbation theory.

The reasons that motivated us to study smaller dimensions first, rather than four directly, were the absence of the difficulties due to renormalization in two dimensions, and the purely combinatorial character of the problem in zero dimensions <cit.>. Another useful aspect of the zero-dimensional case is the fact that it provides a direct way to numerically test the validity of the method <cit.>, <cit.>.

§.§.§ The new mapping M^* and the local contractivity

These "conserved norms" <ref> lead to the convergence of the "primary Φ-Iteration".
§.§.§ The new mapping M^* and the local contractivity

These "conserved norms" <ref> lead to the convergence of the "primary Φ-Iteration". So, by introducing an appropriate Banach space B_initial defined by these norms and a characteristic subset Φ⊂B_initial which exactly imitates the fine structure of the "primary Φ-Iteration", one expects to establish, by a fixed point theorem, the existence of a unique nontrivial solution inside this subset. Unfortunately this is not the case <cit.>. The global terms A^n+1, B^n+1, and C^n+1 (tree terms), with alternating signs, have identical asymptotic behavior with respect to n, but not the behavior we should expect of the corresponding H^n+1. More precisely, at every fixed value of the external momenta (we proved this at zero external momenta), we obtain:

[ A^n+1 ∼_n→∞ (-δ_∞)^(n-1)/2 n! n^2; B^n+1 ∼_n→∞ -(-δ_∞)^(n-1)/2 n! n^2; C^n+1 ∼_n→∞ (-δ_∞^Λ)^(n-1)/2 n! n^2 ]

As far as the behavior with respect to the external four-momenta is concerned, they follow the behavior of the norm functions <ref> (i.e. the corresponding structure of the Banach space). But the above n^2 dependence of the global terms prevents the mapping M: B_initial → B_initial from being contractive in B_initial, despite the convergence of the "Φ-Iteration" (thanks to the alternating signs of the global terms). This is the reason that motivated us to define a new mapping M^* (equivalent to the initial mapping), given by the following equations, which is contractive:

H^n+1'(q,Λ) = δ_n'(q,Λ)C^n+1'(q,Λ)/(3Λ n(n-1))

with: δ_n'(q,Λ) = 3Λ n(n-1)/((γ+ρ)+D_n(H)-Λa) and D_n(H) = (|B^n+1| - |A^n+1|)/|H^n+1|.

One can intuitively understand the contractivity of the new mapping M^* by looking at the behavior with respect to n of the function D_n (at fixed external momenta). Precisely, D_n(H) ∼_n→∞ n^2 n!(δ_∞)^(n-3)/2 / (n!(δ_∞)^(n-1)/2) ∼_n→∞ n^2/δ_∞. Consequently: δ_n'(q,Λ) ∼_n→∞ δ_∞ n(n-1)/n^2 ∼_n→∞ δ_∞. By this last argument one can show not only the conservation of the norms but also the contractivity of the "new mapping" M^* to a fixed point inside a characteristic subset Φ⊂B_initial, under a sufficient condition of the following type imposed on the renormalized coupling constant: 0≤Λ≤Λ_0.

In an equivalent way, this result implies the existence and uniqueness of a nontrivial solution (even in four dimensions) of the system. Under the condition <ref>, this solution lies in a neighborhood of a precise point-sequence of the appropriate subset Φ⊂B, the so-called fundamental sequence {H_T0}. Consequently, the construction of this nonperturbative solution can be obtained by iteration of the mapping M^* inside Φ⊂B, starting from the fundamental sequence {H_T0} corresponding to each dimension. So this solution automatically verifies the "alternating signs" and "splitting" properties at every value of the external momenta, together with analyticity properties and the physical conditions imposed on the H^2- and H^4-Green's functions for the definition of the renormalization parameters.
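A quick numerical reading of the asymptotic argument just given supports the stabilization of the new splitting functions. All numbers below are illustrative stand-ins of our own choosing (in particular the model D_n ≈ 3Λn²/δ_∞ and the values of Λ, δ_∞ and γ+ρ):

```python
lam, delta_inf = 0.04, 6.0
gamma_plus_rho, a = 1.0, 0.0     # stand-ins for (gamma~ + rho~) and a~

for n in (5, 11, 51, 201, 1001):
    D_n = 3 * lam * n**2 / delta_inf            # modeled growth of D_n(H)
    delta_prime = 3 * lam * n * (n - 1) / (gamma_plus_rho + D_n - lam * a)
    print(n, round(delta_prime, 4))             # approaches delta_inf = 6.0
```

The n² growth of D_n in the denominator exactly compensates the n(n-1) factor in the numerator, so δ'_n stays bounded, which is the mechanism behind the contractivity of M^*.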
The essential results of the method in 4 dimensions are the following:
* The proof of the "alternating signs" and "splitting" properties at every value of the external momenta (and not only at zero external momenta, as we had originally established for the "primary Φ-Iteration").
* The proof of specific asymptotic increase properties, with respect to the four-dimensional external momenta, of the renormalized Green's functions; in particular the powers of q^2 log(q^2)^k which dominate the H^2-point function's behaviour.
* The proof of A.Q.F.T. properties in complex Minkowski space.
* The proof of the existence and uniqueness of the solution of the system <ref>, obtained as the limit of a (new) renormalized iteration procedure (the so-called "Φ^4_4-iteration"), or in other words as the fixed point of a locally contractive mapping in the neighborhood of a precise "tree type" sequence.

§.§ Plan of the paper

The paper is organized as follows:
* In section 2 we introduce the general vector space B, the tree type sequences, and the Renormalized Φ-Convolutions (R.Φ.C.) - Green's functions, using as building block one of the tree type sequences, the "fundamental tree type sequence" H_T0 (cf. def. <ref>). Then we define the "renormalized subspace" B_R⊂B which (being provided with the appropriate norm) is a Banach space.
* In section 3:
* We introduce a particular subset Φ_R⊂B_R characterized by detailed bounds, signs, splitting properties, consistent definitions of the renormalization parameters a, ρ, γ, together with the analyticity properties of general n-point functions (in the A.Q.F.T. framework). The nontriviality of Φ_R⊂B_R is established by proving that H_T0∈Φ_R.
* We define the Generalized Renormalized Φ-Convolutions (G.R.Φ.C.) and the new mapping M^* on B_R.
* By successive application of M^* to H_T0, a Φ_4^4-iteration is defined which preserves the good properties of H_T0 and automatically establishes the stability of a neighbourhood of Φ_R under M^*.
* Section 4 contains the construction of the solution: the local contractivity of the mapping M^* inside a precise closed ball in B_R, S(H_T0,r_0)⊂Φ_R.
* In the Appendices we present the necessary proofs of the theorems stated in sections 3 and 4.

§ THE VECTOR SPACE B - THE "SPLITTING SEQUENCES", THE "TREE TYPE SEQUENCES" AND THE BANACH SPACE B_R

The fundamental difference between two and four (or three) dimensions is the divergence, in the latter case, of a finite number of Φ-Convolutions for every fixed integer n = 2k+1, k∈. It has therefore been necessary to introduce the precise definition of the renormalization operations and to choose a more specific space of Green's function sequences in the A.Q.F.T. framework. To this purpose, we introduce the basic elements of this space, the so-called tree type sequences H_T, and from a particular choice of one of them we recursively define the Renormalized Φ-Convolutions (R.Φ.C.). For the detailed definitions and statements concerning the recursive procedure of the renormalization, we refer the reader to the references <cit.> <cit.> <cit.>. Apart from a brief reminder of certain crucial properties, we apply the results of these papers without any detailed description.

§.§ The space B - the "splitting sequences"

The space B. We define the vector space B of the sequences H = {H^n+1}_n=2k+1; k∈ as follows: for every n, the function H^n+1 belongs to the space C^∞(E^4n_(q)×^+) of infinitely differentiable real numerical functions of the set (q,Λ) of 4n+1 real independent variables and verifies the following properties. There exists a finite positive constant C_n such that the following bounds hold:

[ ∀(q,Λ)∈(E^4_(q)×^+); |H^2(q,Λ)| ≤ C_1[(‖q‖^2+m^2)^(1+π^2/18)] ]

and ∀ n=2k+1, k∈^*, (q,Λ)∈(E^4n_(q)×^+): |H^n+1(q,Λ)| ≤ n! C_n[(‖q‖^2+m^2)^(1+π^2/18)]^n

Here the notation ‖q‖ = √(∑_i=1^n q_i^2) means the Euclidean norm of the vector q∈E^4n_(q).
In the following we often use the equivalent notation ‖q‖^2 = q^2.

The "splitting sequences" δ∈𝒟. We introduce the particular class 𝒟⊂ℬ of the so-called "splitting sequences" δ = {δ_n(q,Λ)}_n=2k+1, k∈ ∈ℬ. There exists a finite positive constant C_0 such that the corresponding bounds (<ref>) take the following form:

[ ∀ n=2k+1, k∈^*: 0 < δ_n(q,Λ) ≤ C_0 ∀(q,Λ)∈E^4n_(q)×^+. ]

§.§ The "tree type sequences" H_T

We define the following class of sequences H_T associated with a given splitting sequence δ∈𝒟:

[ ∀(q,Λ)∈(E^4n×^+*); H_T^2(q,Λ) = (q^2+m^2)(b_0(Λ) + b_1(Λ)(q^2+m^2)^(1+π^2/18)); H_T^4(q,Λ) = -δ_3(q,Λ)∏_j=1,2,3 H_T^2(q_j,Λ)Δ_F(q_j); ∀ n≥5: H_T^n+1(q,Λ) = -δ_n(q,Λ)/n(n-1)∑_ϖ_n(I)∏_j=1,2,3 H_T^i_j+1(q_i_j,Λ)Δ_F(q_i_j) ]

We call these sequences H_T the "tree type sequences" (resp. every H_T^n+1 a tree function). For every n≥5, the graphical representation of every H_T^n+1 is a finite sum of "tree graphs", with a four-point bubble vertex associated with the corresponding δ_n(q,Λ), connected by three simple free propagators Δ_F(q_i_j) to three "bubble vertices". These bubbles represent each one of the three tree functions H_T^i_j+1 of the corresponding partition in the sum (cf. fig. <ref>).

§.§.§ Particular splitting sequences - Reminders

In <cit.> we introduced the following particular splitting sequences for the zero dimensional problem.
* The upper and lower bounds - splitting sequences:

[ δ_3,max(Λ) = 6Λ/(γ_0+ρ_0+Λ|a_0|+6d_0); δ_3,min(Λ) = 6Λ/(1+9Λ(1+6Λ^2)); ∀ n≥5: δ_n,max(Λ) = 3Λ n(n-1)/(γ_0+ρ_0+Λ|a_0|+n(n-1)d_0); γ_0=1, a_0 = -δ_3,min[N_3]̃_q^2+m^2=0; ρ_0 = Λδ_3,min ∂/∂q^2[N_3]̃_q^2+m^2=0; δ_n,min(Λ) = 3Λ n(n-1)/(γ_max+ρ_max+Λ|a_max|+3Λ n(n-1)); γ_max = 1+9Λ(1+6Λ^2), ρ_max = 6Λ^2 ∂/∂q^2[N_3]̃_q^2+m^2=0; |a_max| = 6Λ[N_3]̃_q^2+m^2=0 ]

* Notice that for the constant d_0 appearing in the definition of δ_n,max we put d_0 = 3Λ·10^-2, which is precisely the value we determined and used in <cit.> for the zero dimensional case.
* For the δ_n,min's in 4 dimensions we need to use the maximal values of the renormalization constants, obtained directly from the definitions <ref>.
* Concerning the δ_n,max's in 4 dimensions, we need to use the minimal values of the renormalization constants γ_0, ρ_0, and a_0 (cf. def. <ref>).
* The solution of the zero dimensional mapping M^*_0. In <cit.> we proved the existence and uniqueness of the splitting sequence {δ_n0}, solution of the zero dimensional mapping ℳ^*_0: Φ_0 → B_0 defined as follows:

[ H^2'(Λ) = 1-ΛH^4(Λ); H^4'(Λ) = -δ_3'(Λ)[H^2']^3 with δ_3'(Λ) = 6Λ/(1+D_3); D_3 = 6ΛH^2(3/2 - |H^6|/(6|H^4||H^2|)); ∀ n≥5: H^n+1'(Λ) = δ_n'(Λ)C^n+1'/(3Λ n(n-1)); δ_n'(Λ) = 3Λ n(n-1)/(1+D_n(H)); D_n(H) = (|B^n+1| - |A^n+1|)/|H^n+1| ]

§.§.§ The "fundamental tree type sequence" H_T0

For further purposes, we introduce the particular tree type sequence H_T0 that we shall call "fundamental", defined as follows:

[ γ_0=1; a_0 = -δ_3,min[N_3]̃_(q^2+m^2)=0; ρ_0 = Λδ_3,min ∂/∂q^2[N_3]̃_(q^2+m^2)=0; ∀(q,Λ)∈(ℰ_(q)^4×^+*): H_T0^2 = (q^2+m^2)(1+δ_10(q,Λ)Δ_F); δ_10(q,Λ)Δ_F = -(ρ_0+Λδ_3,min([N_3]̃ - [N_3]̃_(q^2+m^2)=0)Δ_F)/(1+ρ_0); ∀(q,Λ)∈(ℰ_(q)^12×^+*): H_T0^4 = -δ_3,min(Λ)∏_l=1,2,3 H^2_T0(q_l)Δ_F(q_l); H^n+1_T0(q,Λ) = δ_n,min(Λ)C^n+1_T0(q,Λ)/(3Λ n(n-1)); with: C^n+1_T0(Λ) = -6Λ∑_ϖ_n(I) n!/(i_1! i_2! i_3! σ_sym(I)) ∏_l=1,2,3 H^i_l+1_T0Δ_F(q_i_l) ]
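The zero dimensional mapping M^*_0 quoted above is simple enough to run directly. Below is a minimal sketch of its lowest truncation. We stress that the closure |H^6| ≈ 20δ_5|H^4|(H^2)^2 with δ_5 ≈ δ_3 is a crude assumption of ours (suggested by the splitting property at zero momenta), not part of the definition, and the stopping tolerance is arbitrary.

```python
lam = 0.04
H2, H4, d3 = 1.0, 0.0, 6 * lam       # free starting point; delta_3 ~ 6*Lambda
for it in range(200):
    H2_new = 1.0 - lam * H4
    # closure: |H^6|/(6|H^4||H^2|) ~ (20/6)*delta_5*H2 with delta_5 ~ delta_3
    ratio = (20.0 / 6.0) * d3 * H2 if H4 != 0.0 else 0.0
    D3 = 6 * lam * H2 * (1.5 - ratio)
    d3_new = 6 * lam / (1.0 + D3)
    H4_new = -d3_new * H2_new**3
    if max(abs(H2_new - H2), abs(H4_new - H4)) < 1e-15:
        break
    H2, H4, d3 = H2_new, H4_new, d3_new

print(it, H2, H4, d3)                # H2 > 0, H4 < 0: the expected signs
```

For small Λ the iteration converges in a handful of steps, in line with the numerical studies cited above.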
§.§ The Renormalized Φ-Convolutions

Using the previously defined fundamental tree type sequence H_T0, we recurrently construct the infinite family of the so-called Renormalized Φ-Convolutions as follows: we successively apply, in an arbitrary way, the Φ^4_4-operations (defining the mapping ℳ by definition <ref>); (cf. examples fig. <ref> and fig. <ref>). At a certain order ν of this iteration we consider the corresponding result coming from an arbitrary H_T0^n+1 tree function. Graphically it is the sum of tree type functions with "bubbles", the corresponding images coming from successive applications of the Φ^4_4 operations on the vertices of every tree contribution of the original H_T0^n+1 tree function. We denote by Φ_n^n̅(ν)(q,Λ) such a bubble (with n̅(ν)+1 external lines), and call it the Renormalized Φ-Convolution (R.Φ.C.) associated with the H_T0^n+1 tree-type function. Every Φ_n^n̅(ν)(q,Λ) depends on the set q = (q_1,⋯,q_n̅(ν)) (q∈ℰ_(q)^4n̅(ν)) of the external independent momenta remaining after the integrations. It constitutes a candidate bubble vertex for a new tree type sequence in B (cf. figures <ref> and <ref>).

Using the general prescription of renormalization of <cit.>, we introduce the renormalization operator at every step of the above recursive construction. More precisely, suppose that Φ_n^n̅(ν)(q,Λ) has already been well defined and we want to construct the newly composed convolution [N_3^(n̅(ν)+2)Φ_n^n̅(ν)(q,Λ)]. We define:

[N_3^(n̅(ν)+2)Φ_n^n̅(ν)(q,Λ)] ≡ ∫ R_G^(3)[Φ_n^n̅(ν)(q,Λ)∏_k=1,2,3Δ_F(ℓ_k)] d^4L k

Here G is the total graph representing the R.Φ.C. [N_3^(n'=n̅(ν)+2)Φ_n'^n̅] and R_G^(3) the renormalization operator. For the precise momentum assignment (following <cit.>) we consider the product vector space defined by: ℰ_(q,k)^4N ≡ ℰ_(q)^4n⊗E_(k)^4L with N = n+L. We associate these notations with the set of external independent (q) (resp. internal (k), or integration) variables of the given R.Φ.C. [N_3^(n')Φ_n'^n̅]. The integer L indicates the number of independent loops of G (i.e. the integration variables k∈E_(k)^4L of the R.Φ.C.). We also use the notation ℒ for the set of all internal lines of G. The non-renormalized integrand I_G(q,k) is simply the product of the vertex functions (bubble vertices) and free propagators (simple internal lines) involved in the initial R.Φ.C. Φ_n'^n̅, and of the product of free propagators associated with N_3^(n'); so I_G(q,k) = ∏_v_j H_T^n_j+1∏_i∈ℒΔ_F(ℓ_i)∏_k=1,2,3Δ_F(ℓ_k). The argument ℓ_i of every free propagator means the total momentum carried by the corresponding internal line i∈ℒ, associated with the linear application λ_i: (q,k)↔ℓ_i(q,k). The precise form of the function ℓ_i(q,k) is given by the conditions of energy-momentum conservation imposed on the momentum assignment at every vertex of G. A definition analogous to that of <ref> holds for the non-renormalized integrand associated with every subgraph γ of G. Following <cit.>, the abbreviated notation for the renormalized integrand means precisely:

R_G^(3)[Φ_n'^n̅∏_kΔ_F(ℓ_k)] = ∑_U_ℱ(G)(1-t^d(G))Y_G^(3)(U_ℱ)

The sum ∑_U_ℱ(G) extends over all complete forests U_ℱ of G (with respect to a nested set 𝒮 of subspaces S⊂E_(k)^4L) containing nontrivial renormalization parts γ, subgraphs of G. The functions Y_G^(3)(U_ℱ) (and the corresponding Y_γ^(3)(U_ℱ) for every γ∈U_ℱ) are also recursively defined in ref. <cit.>. Notice that the degree of the Taylor operators d(G) (resp. d(γ)) coincides with the superficial degree of divergence of the graph G (resp. of γ), and below (cf. proposition 2.1) we shall give precise upper bounds for these numbers in terms of the asymptotic indices of the tree functions.
In an analogous way we define the renormalized Φ^4 operation [N_2^(n'-1)Φ_n'^n̅(q,Λ)].

For the convergence proof of the R.Φ.C.'s we shall use Weinberg's criterion of convergence, applied to a certain class of Weinberg functions. A class of Weinberg functions (cf. <cit.>, <cit.>) f∈C^∞(ℰ_(q)^4n) is denoted by 𝒜_4n^(α,β) and is characterized by two bounded real-valued functions α(S), β(S) on the set of all linear subspaces S⊂ℰ_(q)^4n, which are called the "asymptotic indicatrices". In <cit.> particular classes of Weinberg functions have been introduced (the classes of admissible Weinberg functions), and they have played a fundamental role in the convergence proof and good asymptotic behavior of the renormalized convolutions, by an extension of the B.P.H.Z. <cit.> renormalization procedure. The study in <cit.> concerned the most general convolutions (the so-called G-convolutions) in a space of arbitrary dimension. The corresponding graphs were defined by bubble vertices v (resp. complete internal lines) associated with general H^n_v point functions (resp. with general H^2-point functions) satisfying all appropriate A.Q.F.T. properties. Under the assumption that the general H^n_v, H^2-point functions belong to the classes of symbols of pseudodifferential operators ∑_4(n_v-1)^μ_v, ∑_4^μ_i, it has been proven (cf. theorem 4.1 of <cit.>) that for every such G-convolution the corresponding renormalized integrand belongs to the class of Weinberg functions with the appropriate asymptotic indices, so that Weinberg's criterion of convergence is verified. Moreover, in <cit.> (resp. in <cit.>) the asymptotic behavior with respect to the powers of external momenta (resp. also with respect to powers of logarithms) has been established for the G-convolutions, by using the same prescription of renormalization.

We shall use here the definition of the projection of a subspace S of ℰ_(q,k)^4N onto E_(k)^4L established in <cit.> and <cit.>, concerning the classes of symbols and the classes of admissible Weinberg functions, which have been denoted by 𝒜_4N^(α,β,σ,ω). We do not give here the corresponding precise definitions; we simply notice that (σ,ω) is a couple of sets of such subspaces in E_(k)^4L and E_(q,k)^4N respectively, on which, under differentiation, the corresponding asymptotic indicatrices α, β decrease. We also recall the notation π(S) for the canonical projection of a subspace S of E_(q,k)^4N onto E_(k)^4L.

Before giving the main theorem for the convergence of every R.Φ.C., we show an auxiliary statement. Let us start with some useful notations and definitions. We denote by q_T^j the set of independent momenta of the tree function H_T^n_j+1 associated with the vertex v_j of a given graph G. We define the following linear mapping λ_ν_j: (q,k)↔q_T^j(q,k). We notice that, as previously, the function q_T^j(q,k) is precisely defined by the momentum assignment, following the prescription of <cit.>, when all constraints of energy-momentum conservation at each vertex ν_j of G are taken into account. We then state the following:

Given a R.Φ.C. and the associated G-graph, the following properties hold:
* a) Every vertex function H^n_j+1 associated with the bubble vertex v_j of G belongs to the class 𝒜_4N^(α_v_j,β_v_j,σ_v_j,ω_v_j) of admissible Weinberg functions.
The sets σ_v_j, ω_v_j are defined by: σ_v_j = { S⊂E_(k)^4L: S⊄𝒦er λ_v_j }; ω_v_j = { S⊂E_(k)^4N: π(S)∈σ_v_j }. For every S⊂ℰ_(q,k)^4N the corresponding asymptotic indicatrices are given by:

α_v_j(S) = {[ -(n_j-3), S⊄𝒦er λ_v_j; 0, S⊂𝒦er λ_v_j ]}  β_v_j(S) = {[ 2ν_(n_j), S⊄𝒦er λ_v_j; 0, S⊂𝒦er λ_v_j ]}

* b) The non-renormalized integrand I_G(q,k) associated with G (cf. equation <ref> of definition <ref>) belongs to the class of admissible Weinberg functions 𝒜_4N^(α_G,β_G,σ_G,ω_G) characterized by the following sets and indices: σ_G = ⋂_i∈ℒσ_i⋂_v_jσ_v_j; ω_G = { S⊂𝒮_(q,k)^4N: S⊄𝒦er λ_i ∀i∈ℒ, π(S)∈σ_G }; and for every S⊂ℰ_(q,k)^4N: α_G(S) = ∑_v_jα_v_j(S) + ∑_i∈ℒμ_i(S). Here, ∀i∈ℒ: μ_i(S) = {[ -2, S⊄𝒦er λ_i; 0, S⊂𝒦er λ_i ]} and β_G(S) = ∑_v_jβ_v_j(S).
* c) An analogous result holds for the non-renormalized integrand associated with every subgraph γ of G.
* d) The degree d(G) (resp. d(γ)) of the Taylor operators associated with G (resp. with γ) in formulas (<ref>) is bounded as follows: d(G)≤2; d(γ)≤2.

Proof of proposition <ref>. The proof is obtained by application of the previous definitions (in particular the definitions of the admissible classes of Weinberg functions) and is a direct consequence of <cit.> and <cit.>. Notice that for every n≥3 the function δ_n(q,Λ) belongs to the class of Weinberg functions 𝒜_4N^(0,0,σ_ν_j,ω_ν_j). Using the notation |ℒ| for the total number of internal lines of G, we have: d(G) = -2|ℒ| + max_S∑_ν_jα_v_j(S) + 4L ≤ 2 ▪

Taking into account the above results, we notice that the conditions for the non-renormalized integrand of G established in <cit.> are all verified. So we are allowed to apply directly the corresponding theorems of refs. <cit.>, <cit.> and <cit.>, in order to obtain the following result, which we present without proof:

Every R.Φ.C. Φ_n^n̅(q,Λ), (q∈ℰ_(q)^4n), with n external independent variables and L integration variables k∈E_(k)^4L (Λ being a fixed real positive number), verifies the following properties:
(i) Defined as an integral over k∈E_(k)^4L, it is absolutely convergent and it belongs to the class 𝒜_4n^(α_n,β_n) of Weinberg functions with the following precise asymptotic indicatrices: [ ∀S⊂ℰ_(q)^4n: α_n(S) = d(G); β_n(S) = ν̃_(n) ]
(ii) It satisfies Euclidean invariance and all linear axiomatic field theory properties of a general n+1-point function in complex Minkowski space.
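The bound d(G)≤2 is ordinary Φ^4_4 power counting. The helper below is our own illustration (the α_v values are read off from the proposition; the example graphs are standard): it evaluates d(G) = 4L - 2|ℒ| + Σ_v α_v for a few familiar configurations.

```python
def degree(loops, internal_lines, vertex_alphas):
    """Superficial degree d(G) = 4L - 2|L| + sum_v alpha_v, where
    alpha_v = -(n_v - 3) for a bubble vertex with n_v + 1 legs
    (so alpha_v = 0 for the elementary phi^4 vertex, n_v = 3)."""
    return 4 * loops - 2 * internal_lines + sum(vertex_alphas)

print(degree(1, 1, [0]))       # tadpole self-energy            -> 2
print(degree(2, 3, [0, 0]))    # sunset self-energy             -> 2
print(degree(1, 2, [0, 0]))    # one-loop 4-point function      -> 0
print(degree(2, 3, [-2]))      # N_3 acting on a 6-point bubble -> 0
```

All the examples stay at or below 2, in agreement with item d).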
§.§ The Generalized Renormalized Φ-Convolution (G.R.Φ.C.)

A Generalized R.Φ.C. (G.R.Φ.C.) is defined as the image of different Φ_4^4 operations acting on an arbitrary finite sum of R.Φ.C.'s. The graph G_Φ corresponding to a G.R.Φ.C. contains sums of disconnected graphs associated with each one of the connected component R.Φ.C.'s. The renormalization operator R_G corresponding to the operations N_3^(n) and N_2^(n-1), respectively, is defined as a consistent extension of the scheme presented previously for the R.Φ.C.'s. More precisely, the renormalized integrand (the analog of formula <ref>) corresponding to the convolution [N_3^(r)Φ_(r)^(n)] reads:

R_G_Φ^(3)[Φ_(r)^(n)(H)∏_kΔ_F(ℓ_k)] = ∑_U_ℱ(G_Φ(H))(1-t^d(G_Φ))Y_G_Φ^(3)(U_ℱ(H))

Here the argument H means that the summation is multiple, because the fine structure of the corresponding H∈ℬ must be taken into account, and every H^n_j+1-function associated with the bubble vertex v_j of G_Φ must be expanded in its terms R.Φ.C.'s (cf. def. <ref>). In other words, the total graph G_Φ(H) is in fact a sum of disconnected graphs G^* coming from the expansion of all bubble vertices into their disconnected component graphs G associated with the R.Φ.C.'s involved in the definition of the corresponding H^n_j+1's. Therefore the sum in <ref> contains all possible nontrivial individual forests of every such component G^*. Notice that now the associated vertex functions are not in general tree functions. For simplicity we keep the same mode of notation for the number n of the initial function H^n+1 (superscript) and, respectively, the number r of external independent variables (subscript) as in the definition <ref> of the R.Φ.C.'s.

§.§ The Banach space ℬ_R⊂ℬ

We say that a sequence H∈ℬ belongs to the linear subspace ℬ_R⊂ℬ if, ∀ n=2k+1, k∈, the corresponding H^n+1 function is a G.R.Φ.C. in the sense of definition <ref>. We introduce the following positive mapping on ℬ_R: 𝒩: ℬ_R→^+, H↦‖H‖. Here

[ ‖H‖ = sup_Λ{ sup_n,q{ |H^n+1|/M_n; |∂^(0,1)/∂q^2[N_3H^4]|/M̂_3^(0,1); |N_2H^n+1|/M̂_(n,2); |γ|/N_γ } }; ∀(q,Λ)∈ℰ_(q)^4n×^+*; M_1(q) = γ_max(q^2+m^2)(1+6(q^2+m^2)^π^2/54); N_γ = γ_max∏_i=1^3 M_1(q_i)Δ_F(q_i)|_q^2=0; γ_max = 1+9Λ(1+6Λ^2); M_3(q) = 6Λ∏_i=1^3 M_1(q_i)Δ_F(q_i); M̂_3^(0,1)(Λ) = 6Λ sup_q[ |∂/∂q^2[N_3Ĩ_1,3]|M_1; |[N_3Ĩ_1,3]|M_1 ]; Ĩ_1,3 = ∏_i=1^2 M_1(k_i)[Δ_F(k_i)]^2Δ_F(k_1+k_2+q); M̂_(3,2) = 6Λ[N_2^(3)Ĩ_1,2]∏_i=2^3 M_1(q_i)Δ_F(q_i); Ĩ_1,2 = M_1(k)[Δ_F(k)]^2Δ_F(k+q); M̂_(n,2) = [N_2^(n)]̃_q=0 n(n-1)δ_n,max(Λ)M_n-2 M_1(q_1)Δ_F(q_1); M_n = n(n-1)δ_n,max(Λ)M_n-2(q_(n-2),Λ)Δ_F(q_(n-2))∏_i=2^3 M_1(q_i)Δ_F(q_i) ]

Now, one easily verifies that 𝒩 defines a finite norm on ℬ_R, and that ℬ_R is a complete metric space with respect to the induced distance (of uniform convergence); so the following is established: ℬ_R is a Banach space with respect to the distance associated with the norm 𝒩 of definition <ref>.
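The norm 𝒩 is a weighted sup-norm: each n-point component is divided by its own growth function M_n before taking suprema. A finite-dimensional caricature (our own; γ_max is set to 1 and the toy two-point function H2 is chosen by hand) makes the mechanism transparent:

```python
import numpy as np

def weighted_sup_norm(components, weights, q2_grid):
    """sup over n and over a momentum grid of |H^{n+1}(q^2)| / M_n(q^2)."""
    return max(
        float(np.max(np.abs(H(q2_grid)) / M(q2_grid)))
        for H, M in zip(components, weights)
    )

m2 = 1.0
q2 = np.linspace(0.0, 1.0e4, 20001)                     # q^2 values
M1 = lambda s: (s + m2) * (1.0 + 6.0 * (s + m2) ** (np.pi**2 / 54))
H2 = lambda s: (s + m2) * (1.0 + 0.1 * np.log1p(s))     # toy two-point function
print(weighted_sup_norm([H2], [M1], q2))  # finite: log growth beaten by power
```

Any component growing faster than its weight would make the supremum infinite and expel the sequence from ℬ_R; this is exactly how the asymptotic indicatrices are encoded in the norm.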
We conclude this section with a crucial result for the subsequent sections. It ensures the good convergence, asymptotic behavior, Euclidean and linear A.Q.F.T. (in complex Minkowski space) properties of the G.R.Φ.C.'s - H∈ℬ_R. The system of equations presented in def. <ref> of the introduction constitutes a well defined nonlinear mapping ℳ: ℬ_R→ℬ_R in the following sense:
a) For every n, the good convergence of the integrals, the asymptotic behaviour, the symmetry and the Euclidean invariance of H^n+1 (G.R.Φ.C.'s), ∀q∈ℰ_(q)^4n, are preserved by ℳ(H).
b) For every n, the corresponding Green's function H^n+1'(q) (the image under ℳ of H^n+1(q)) verifies the analyticity (primitive domain) and algebraic A.Q.F.T. (<cit.>) properties in complex Minkowski space.
c) The G.R.Φ.C.-H^2 functions Φ_(r)^(n)(H), which depend on only one external variable q^2, satisfy and conserve, under the action of ℳ, the real analyticity character for every q∈ℰ_(q)^4 and at q^2+m^2=0. The same property holds for all order derivatives of Φ_(r)^(n)(H).

Proof of theorem <ref>.
a) The proof follows directly by application of the previous definitions, of proposition <ref>, and of theorem <ref> (the latter being applied to every graph G^* component of G_Φ(H)).
b) The verification of the axiomatic field theory properties is obtained as a trivial application of a most general result of <cit.> concerning Φ^4 type renormalized convolutions. The Euclidean invariance and symmetry of every R.Φ.C. Φ_(r)^(n)(H) can be verified in another, more direct way. Following the recursive construction presented in definition <ref> for the R.Φ.C.'s, and by choosing an appropriate coordinate transformation (spherical coordinates in four dimensions), we can eliminate by integration all angular dependence. Then the limit of the total multiple integration at q^2+m^2=0 is (at fixed Λ) a real finite positive number. Analogous results hold for all order derivatives with respect to q^2 (at the point q^2+m^2=0). The real analyticity property comes from the fact that the primitive domain of analyticity of every Φ_(r)^(n)(H) contains the corresponding Euclidean region.
c) The proof is a direct consequence of properties a) and b).

§ THE SUBSET Φ_R - THE MAPPING M^* - THE Φ_4^4 ITERATION

§.§ The subset Φ_R⊂B_R

In this subsection we describe the subset Φ_R⊂ℬ_R which is characterized by the splitting and sign properties (tree structure), together with the physical conditions implemented by the renormalization (which is associated with the four- or three-dimensional problem). The "splitting" or factorization properties are the analogs of the properties displayed by the Φ-subset defined previously in the case of the zero-dimensional problem of <cit.>. As should become evident, apart from the renormalization constraints, the structure of Φ_R given here can be applied in its entirety to smaller dimensions 1≤r≤3, with nonzero external momenta.

The subset Φ_R⊂B_R. We say that a sequence H∈ℬ_R belongs to the subset Φ_R if the following properties are verified:

* ∀(q,Λ)∈(ℰ_(q)^4×^+*)
[ H^2(q,Λ) = (q^2+m^2)(1+δ_1(q,Λ)Δ_F); lim_(q^2+m^2)→0 δ_1(q,Λ)Δ_F(q) = 0, lim_(q^2+m^2)→0 H^2Δ_F(q) = 1; H^2_min(q) ≤ H^2(q,Λ) ≤ H^2_(max)(q,Λ); H^2_(max)(q,Λ) = γ_max((q^2+m^2)+6Λ^2(q^2+m^2)^π^2/54); H^2_min(q) = q^2+m^2 ]
* For every n=2k+1, k∈^*, the function H^n+1 belongs to the class 𝒜_4n^(α_n,β_n) of Weinberg functions such that ∀S⊂ℰ_(q)^4n the corresponding asymptotic indicatrices are given by: α_n(S) = {[ -(n-3), S⊄𝒦er λ_n; 0, S⊂𝒦er λ_n ]}; β_n(S) = nβ_1 ∀S⊂ℰ_(q)^4n.
* There is an increasing and bounded (with respect to n) associated positive sequence (cf. definition <ref>) {δ_n(q,Λ)}_n=2k+1, k∈^* of splitting functions ∈𝒟, which belong to the class 𝒜_(n)^(0,0) of Weinberg functions for every n≥3, such that H is a tree type sequence in the sense of definition <ref>. More precisely:
* i) ∀(q,Λ)∈(ℰ_(q)^12×^+*)
[ H^4(q,Λ) = -δ_3(q,Λ)∏_ℓ=1,2,3 H^2(q_ℓ,Λ)Δ_F(q_ℓ); δ_3(q,Λ)∼_q→∞ Λ; q̃∈ℰ_(q)^12: lim_Λ→0 δ_3(q̃,Λ)/Λ = 6; ∀Λ∈^+*: δ_3,min(Λ) ≤ δ_3(q̃,Λ) ≤ δ_3,max(Λ) ]
* ii) For every n≥5 and ∀(q,Λ)∈(ℰ_(q)^4n×^+*):
[ H^n+1(q,Λ) = δ_n(q,Λ)C^n+1(q,Λ)/(3Λ n(n-1)); δ_n(q,Λ)∼_q→∞ Λ; q̃∈ℰ_(q)^4n: lim_Λ→0 δ_n(q̃,Λ)/Λ ∼ 3n(n-1); ∀Λ∈^+*: δ_n,min(Λ) ≤ δ_n(q̃,Λ) ≤ δ_n,max(Λ) ]
Here {δ_n,min} (but not {δ_n,max}) (cf. equations <ref>) are the lower-bound splitting sequences of the solution of the zero dimensional problem previously presented in definition <ref>.
* iii) Moreover, there is a finite number δ_∞∈^+, a uniform bound independent of H, such that: lim_n→∞δ_n(q̃,Λ) ≤ δ_∞ ∀Λ∈^+*.
* The renormalization functions a, ρ and γ appearing in the definition of ℳ are well defined real analytic functions of q^2 and Λ, and yield, in the limits (q^2+m^2)=0 and q=0, the physical conditions of renormalization required for the two-point and four-point functions:
[ a(q,Λ) = [N_3^(3)H^4(q,Λ)], ã(Λ) = lim_(q^2+m^2)→0[N_3^(3)H^4(q,Λ)]; a_min(Λ) ≤ ã(Λ) ≤ a_max(Λ) ]
[ ρ(q,Λ) = [∂/∂q^2[N_3^(3)H^4(q,Λ)]], ρ̃(Λ) = lim_(q^2+m^2)→0 ρ(q,Λ); ρ_min(Λ) ≤ ρ̃(Λ) ≤ ρ_max(Λ) ]
[ γ(q,Λ) = [-6Λ∏_l=1,2,3H^2(q_l)Δ_F(q_l)/H^4(q)]; γ̃(Λ) = [γ(q,Λ)]_q=0, γ_min(Λ) ≤ γ̃(Λ) ≤ γ_max(Λ) ]
* We first remark that the "splitting" or factorization properties ii) and iii) are general formulae which simply define the functions δ_n(q,Λ), and they can formally be written for every sequence H of ℬ_R. The particular character of the subset Φ_R comes from the fact that the splitting sequence {δ_n}∈𝒟 is such that, ∀ n=2k+1, k≥1, the corresponding splitting function δ_n(q,Λ) belongs to the class 𝒜_(n)^(0,0) of Weinberg functions and verifies the limit and asymptotic properties of definition <ref>.
* We point out that the symbol ∼_q→∞ is used as an abbreviated notation for the fact that both sides of the appropriate relations belong to the same class of Weinberg functions or, to put it differently, have an asymptotically equivalent behavior.

§.§ The non triviality of the subset Φ_R

The subset Φ_R is a nontrivial subset of ℬ_R.

Proof of theorem <ref>. We consider the fundamental sequence H_T0 (cf. definition <ref>) and verify successively all properties of Φ_R. For details we refer the reader to Appendix <ref>.

§.§ The new mapping M^* on Φ_R and equivalence with ℳ

Let H∈Φ_R. The following mapping, [ ℳ^*: Φ_R→ℬ_R; H↦ℳ^*(H) ], defined by equations <ref>…<ref>, is equivalent to the mapping ℳ (cf. equations <ref> of the introduction).

i) [ a'(q,Λ) = [N_3^(3)H^4(q,Λ)]; ρ'(q,Λ) = -Λ[∂/∂q^2[N_3^(3)H^4(q,Λ)]]; γ'(q,Λ) = [-6Λ∏_l=1,2,3H^2(q_l)Δ_F(q_l)/H^4(q)]; ã'(Λ) = a'(q,Λ)|_(q^2+m^2=0); ρ̃'(Λ) = ρ'(q,Λ)|_(q^2+m^2=0); γ̃'(Λ) = γ'(q,Λ)|_q=0 ]

ii) [ H^2'(q,Λ) = (q^2+m^2)(1+δ'_1(q,Λ)Δ_F(q^2)); δ'_1(q,Λ)Δ_F(q^2) = (-ρ̃-Λ{N^(3)_3H^4 - ãH^2(q,Λ)Δ_F(q^2)}Δ_F(q^2))/(γ̃+ρ̃) ]

iii) [ H^4'(q,Λ) = -δ'_3(q,Λ)∏_ℓ=1,2,3H^2'(q_ℓ,Λ)Δ_F(q_ℓ); δ_3'(q,Λ) = 6Λ/((γ̃+ρ̃)+D_3(H)-Λã); D_3(H) = (|B^4|-|A^4|)/|H^4| ]

iv) for every n≥5:
[ H^n+1'(q,Λ) = δ'_n(q,Λ)C^n+1'(q,Λ)/(3Λ n(n-1)); with: δ'_n(q,Λ) = 3Λ n(n-1)/((γ̃+ρ̃)+D_n(H)-Λã) ]
and C^n+1' is obtained recursively, in the usual way, from the sum over all partitions of the products ∏_l=1,2,3 N^(i_l)_1H^i_l+1(q_i_l,Λ), ∀i_l≤n-2.

Notice that in the denominators of eq. <ref> we defined the function D_n(H) by: D_n(H) = (|B^n+1|-|A^n+1|)/|H^n+1|, where, in view of the hypothesis H∈Φ_R (sign properties), we used the absolute values.

Proof of proposition <ref>. Taking into account the infinite system of equations <ref> of the introduction and the splitting or factorization properties ii) and iii) in Φ_R (cf. also remarks <ref>), we write:
i) [ H^2' = (q^2+m^2)(1+δ'_1(q,Λ)Δ_F(q^2)) = -Λ/(γ̃+ρ̃){N^(3)_3H^4 - ãH^2Δ_F(q)} + (q^2+m^2)γ̃/(γ̃+ρ̃); (q^2+m^2)(γ̃+ρ̃) + (γ̃+ρ̃)δ'_1 = -Λ{N^(3)_3H^4 - ãH^2Δ_F(q)} + (q^2+m^2)γ̃; δ_1'Δ_F = (-ρ̃-Λ{N^(3)_3H^4 - ãH^2Δ_F}Δ_F)/(γ̃+ρ̃) ▪ ]

ii) and iii) In an analogous way:
[ ∀n≥3, (q,Λ)∈E^4n_(q)×^+; H^n+1(q,Λ) = 1/(γ̃+ρ̃){A^n+1+B^n+1+C^n+1(q,Λ)+ΛãH^n+1(q,Λ)}; H^n+1 = δ_n(q,Λ)C^n+1/(3Λ n(n-1)); H^n+1 = 1/(γ̃+ρ̃)(A^n+1+B^n+1+ΛãH^n+1) + H^n+1·3Λ n(n-1)/(δ_n(γ̃+ρ̃)); δ_n{(γ̃+ρ̃)H^n+1 - (A^n+1+B^n+1+ΛãH^n+1)} = H^n+1·3Λ n(n-1); δ'_n(q,Λ) = 3Λ n(n-1)/((γ̃+ρ̃)+D_n(H)-Λã) ▪ ]

Notice that, as far as the renormalization parameters a, ρ and γ are concerned, the corresponding equations of the mapping M^* are the same as in <ref>.

§.§ The Φ_4^4-iteration

By successive application of the mapping ℳ^* to the fundamental sequence H_T0 we construct a sequence of G.R.Φ.C.'s: Φ_ν(H_T0) = ℳ^*(Φ_ν-1(H_T0)), the so-called Φ_4^4-iteration. The following theorem shows recurrently that this sequence is a subset of Φ_R and automatically constitutes a neighbourhood of the fundamental sequence H_T0. Then, in the next section, we show, by a contractivity argument, the convergence of the Φ_4^4-iteration to the unique nontrivial solution inside a precise closed ball S_r(H_T0)⊂Φ_R⊂B_R.

The "stability": Every order Φ_ν(H_T0) of the Φ_4^4-iteration belongs to Φ_R.

a) The zero order of the Φ_4^4-iteration being the sequence H_T0, we establish the recurrence starting from the order ν=1. The arguments of the proof for the transition 0→1 being similar, we only present them for the transition from the (ν-1)-th to the ν-th order of the Φ_4^4-iteration. In order to simplify the notations we often omit the arguments (q,Λ) and (q^2).
b) For the proof of the stability we use the following auxiliary statements, verified when Φ_(ν)(H_T0) belongs to Φ_R. For their proofs we refer the reader to Appendix <ref>.

§.§.§ The signs and bounds

Let Φ_(ν)(H_T0)∈Φ_R. Then, ∀Λ≤0.05:

i) ∀q∈ℰ_(q)^4
[ H^2_(ν)(q,Λ)>0; H^2_(ν)(q,Λ)Δ_F ≤ 1+6Λ^2(log(q^2+m^2))^β_1,(ν); β_1,(ν) = β_1,(ν-1)/3+1; H^2_min = q^2+m^2; H^2_min < H^2_(ν) ≤ H^2_(ν,max); H^2_(ν,max) = γ_max((q^2+m^2)+6Λ^2(q^2+m^2)^(1/3)∑_k=1^ν 1/k^2); lim_ν→∞ H^2_(ν,max) ≡ H^2_(max) = γ_max[(q^2+m^2)+6Λ^2(q^2+m^2)^π^2/54] ]

ii) The global term ("Φ_4^4 operation") C_(ν)^n+1(q,Λ) = -6Λ∑_ϖ_n(I)∏_l=1,2,3 N^(i_l)_1H_(ν)^i_l+1(q_i_l,Λ) given by definition <ref> verifies the following properties:
a. The "good sign" property: ∀n=2k+1 (k≥1): C_(ν)^n+1 = (-1)^(n-1)/2|C_(ν)^n+1|.
b. It is a R.Φ.C. in the sense of definition <ref>; consequently it verifies Euclidean invariance and the linear axiomatic quantum field theory properties.
c. For every n=2k+1, k≥1, the function C_(ν)^n+1(q,Λ) belongs to the class 𝒜_4n^(α_n,β_(n,ν)) of Weinberg functions such that ∀S⊂ℰ_(q)^4n the corresponding asymptotic indicatrices are given by: α_n(S) = {[ -(n-3), S⊄𝒦er λ_n; 0, S⊂𝒦er λ_n ]}; β_(n,ν) = β_(1,ν)n ∀S⊂ℰ_(q)^4n.
d. For every n=2k+1, k≥1:
[ |C^n+1_min(q,Λ)| ≤ |C_(ν)^n+1(q,Λ)| ≤ |C^n+1_(ν,max)(q,Λ)|; C^n+1_(max) = 3Λ n(n-1)𝒯_n|H^n-1_max|∏_l=2,3H^2_(max)(q_l,Λ)Δ_F(q_l) ]
Notice that in the last formula we take into account the result of ref. <cit.> about the number 𝒯_n of different partitions inside the tree terms.

iii) [ ∀n=2k+1 (k≥1): H_(ν)^n+1 = (-1)^(n-1)/2|H_(ν)^n+1| ]

iv) ∀n=2k+1 (k≥1): |H^n+1_min| ≤ |H_(ν)^n+1| ≤ |H^n+1_(ν,max)|. Here H^n+1_(ν,max) is recurrently defined as follows:
[ H^4_(ν,max) = -δ_3,max∏_l=1,2,3H^2_(ν,max)Δ_F(q_l,Λ); lim_ν→∞H^4_(ν,max) ≤ H^4_(max) ≡ 6Λ∏_l=1,2,3H^2_(max)(q_l,Λ)Δ_F(q_l); ∀n=2k+1, k≥2: |H^n+1_(ν,max)| = δ_n,max𝒯_n|H^n-1_(ν,max)|∏_l=1,2H^2_(ν,max)(q_l,Λ)Δ_F(q_l); lim_ν→∞H^n+1_(ν,max) = |H^n+1_(max)| ≡ δ_n,max𝒯_n|H^n-1_max|∏_l=2,3H^2_(max)(q_l,Λ)Δ_F(q_l); |H^n+1_(min)| = δ_n,min𝒯_n|H^n-1_min|∏_l=2,3H^2_(min)(q_l,Λ)Δ_F(q_l) ]
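The alternating-sign property of the H^n+1 is easy to monitor numerically along any truncated iteration. The small helper below is our own; the input lists are hypothetical snapshots of (H^2, H^4, H^6, …) at fixed momenta.

```python
def sign_pattern_ok(H_values):
    """Check the alternating-sign property H^{n+1} = (-1)^{(n-1)/2} |H^{n+1}|
    for a list [H^2, H^4, H^6, ...] evaluated at fixed momenta."""
    for k, h in enumerate(H_values):     # h ~ H^{2k+2}, i.e. n = 2k+1
        expected = (-1) ** k             # since (n-1)/2 = k
        if h != 0 and (h > 0) != (expected > 0):
            return False
    return True

print(sign_pattern_ok([1.0, -0.21, 0.09, -0.05]))  # True:  +, -, +, -
print(sign_pattern_ok([1.0, 0.21, 0.09]))          # False: wrong sign of H^4
```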
§.§.§ The properties of the global terms B_(ν)^n+1, A_(ν)^n+1

Let Φ_(ν)(H_T0)∈Φ_R. Under the condition 0≤Λ≤0.05, the global term B_(ν)^n+1(H) given by definition <ref>, precisely:

[ B_(ν)^n+1(q,Λ) = -3Λ∑_ϖ_n(J)N^(j_2)_2H_(ν)^j_2+2 N^(j_1)_1H_(ν)^j_1+1(q,Λ) = ∑_ϖ_n(J)H_(ν)^j_1+1Δ_F∫R^(2)_G[H_(ν)^j_2+2∏_i=1,2Δ_F(l_i)]d^4k ]

verifies the following properties:
i) the "opposite sign" property: ∀n=2k+1 (k≥1): B_(ν)^n+1 = (-1)^(n+1)/2|B_(ν)^n+1|.
ii) It is a R.Φ.C. in the sense of definition <ref>; consequently it verifies Euclidean invariance and the linear axiomatic field theory properties, as follows from theorem <ref>.
iii) For every n=2k+1, k≥1, the function B_(ν)^n+1(q,Λ) belongs to the class 𝒜_4n^(α_n,β_(n,ν)) of Weinberg functions such that ∀S⊂ℰ_(q)^4n the corresponding asymptotic indicatrices are given by: α_n(S) = {[ -(n-3), S⊄𝒦er λ_n; 0, S⊂𝒦er λ_n ]}; β_(n,ν) = β_(1,ν)n ∀S⊂ℰ_(q)^4n.
iv) There exists a splitting sequence δ_ν^B = {δ^B_n,ν(q,Λ)}_n∈𝒟 such that for every n≥3 the following properties are verified:
* a) [ δ^B_n,ν(q,Λ)∼_q→∞ Λ; B_ν^n+1(q̃,Λ) = -δ^B_n,ν(q̃,Λ)n(n-1)H_ν^n+1(q̃,Λ) ]
b) For all n=2k+1, k≥1, the function -δ^B_n,ν belongs to the same class of Weinberg functions as the corresponding splitting function δ_n, precisely: δ^B_n,ν∈𝒜_4n^(α̃_n=0,β̃_n=0) ∀S⊂ℰ_(q)^4n.
v) ∀(q̃,Λ̃)∈(ℰ_(q)^4n×]0,0.05]), the sequence {δ̃^B_n} = |B^n+1_min|/(n(n-1)|H^n+1_max|) increases with increasing n.
vi) ∀(q̃,Λ̃)∈(ℰ_(q)^4n×]0,0.05]):
[ |B^n+1| ≤ |B^n+1_max|; |B^n+1_max| = (3Λ n(n-1)/2)δ_n,max𝒯_n N_2|H^n-1_(max)|∏_l=1,2H^2_(max)(q_l,Λ)Δ_F(q_l) ]

Let Φ_(ν)(H_T0)∈Φ_R. Under the condition 0≤Λ≤0.05, the global term ("Φ_4^4 operation") A^n+1_(ν)(H) = -ΛN^(n+2)_3H_(ν)^n+3 = -Λ∫R^(3)_GH_(ν)^n+3∏_i=1,2,3Δ_F(l_i)d^4k_1d^4k_2, given by definition <ref>, verifies the following properties:
i) the "good sign" property: ∀n=2k+1 (k≥1): A^n+1 = (-1)^(n-1)/2|A^n+1|.
ii) It is a R.Φ.C. in the sense of definition <ref>; consequently it verifies Euclidean invariance and the linear axiomatic field theory properties, as follows from theorem <ref>.
iii) For every n=2k+1, k≥1, the function A^n+1(q,Λ) belongs to the class 𝒜_4n^(α_n,β_n) of Weinberg functions such that ∀S⊂ℰ_(q)^4n the corresponding asymptotic indicatrices are given by: α_n(S) = {[ -(n-3), S⊄𝒦er λ_n; 0, S⊂𝒦er λ_n ]}; β_n(S) = {[ 2(n+1), S⊄𝒦er λ_n; 0, S⊂𝒦er λ_n ]}.
iv) There exists a splitting sequence δ^A = {δ^A_n(q,Λ)}_n∈𝒟 such that for every n≥3 the following properties are verified:
a) [ δ^A_n(q,Λ)∼_q→∞ Λ^2; |A^n+1_ν-1|(q̃,Λ) ≤ δ^A_n,ν-1(q̃,Λ)n(n-1)|H^n+1_max|(q̃,Λ) ]
b) For all n=2k+1, k≥1, the function δ^A_n belongs to the same class of Weinberg functions as the corresponding splitting function δ_n, precisely: δ^A_n∈𝒜_4n^(α̃_n=0,β̃_n=0) ∀S⊂ℰ_(q)^4n.
v) ∀(q̃,Λ̃)∈(ℰ_(q)^4n×]0,0.05]), the sequence {δ̃^A_n}_n=2k+1,k≥1 = |A^n+1_max|/(n(n-1)|H^n+1_max|) decreases with increasing n.
Let H_ν∈Φ_R. Then, for every n≥3 and ∀(q̃,Λ̃)∈(ℰ_(q)^4n×]0,0.05]), there exist positive continuous functions of (q̃,Λ̃), D_n,min(q̃,Λ̃) and D_n,max(q̃,Λ̃), independent of H, such that the function D_n,ν(H) defined as follows:

[ D_n,ν(H) = 3Λ|∑_ϖ_n(J)[N_2H^j_2+2_ν][N_1H^j_1+1_ν]|/|H^n+1_ν| - |Λ[N_3^(n+2)H^n+3_ν]|/|H^n+1_ν|; D_n,ν(H) = (|B^n+1_ν|-|A^n+1_ν|)/|H^n+1_ν| ]

verifies the following properties: lim_Λ→0 D_n(Λ) = 0 and D_n,min(q̃,Λ̃) ≤ D_n(H(q̃,Λ̃)) ≤ D_n,max(q̃,Λ̃). Moreover, there is a positive finite constant δ_∞^Λ such that lim_n→∞ D_n,min(q̃,Λ̃)/(3Λ n(n-1)) = 1/δ_∞^Λ.

§.§.§ Proof of proposition <ref>

By application of propositions <ref> and <ref> the properties <ref>, <ref>, and <ref> are directly verified.

§.§.§ Proof of theorem <ref>

* i) We have successively:
[ a_ν(q,Λ) = [N_3H^4_(ν-1)] and ã_(ν)(Λ) = [a_(ν)]_q^2+m^2=0; ρ_ν(q,Λ) = -Λ[∂/∂q^2[N_3^(3)H^4_(ν-1)]] and ρ̃_(ν)(Λ) = [ρ_(ν)]_q^2+m^2=0; γ_ν(Λ) = 6Λ∏_ℓ=1,2,3H^2_ν-1(q_ℓ,Λ)/H^4_(ν-1)(q,Λ), γ̃_(ν) = [γ_(ν)(q,Λ)]_q=0; ∀(q,Λ)∈(ℰ_(q)^4×^+*); H^2_(ν) = (q^2+m^2)(1+δ_1,νΔ_F(q^2)); δ_1,ν(q,Λ)Δ_F(q^2) = (ρ̃_(ν-1)-Λ{N_3H^4_(ν-1) - ã_(ν-1)H^2_(ν-1)Δ_F(q^2)})/(γ̃_(ν-1)+ρ̃_(ν-1)) ]

* ii) ∀(q,Λ)∈(ℰ_(q)^12×^+*): H^4_ν(q,Λ) = -δ_3,(ν)(q,Λ)∏_ℓ=1,2,3H^2_(ν)(q_ℓ,Λ)Δ_F(q_ℓ);
[ δ_3,(ν)(q,Λ) = 6Λ/((γ̃_(ν-1)+ρ̃_(ν-1))+D_3,(ν-1)(H)-Λã_(ν-1)); D_3,(ν-1)(H) = (|B^4_(ν-1)|-|A^4_(ν-1)|)/|H^4_(ν-1)| ]
[ δ_3,(ν)(q,Λ)∼_q→∞ Λ; lim_Λ→0 δ_3,(ν)(q,Λ)/Λ = 6, q∈ℰ_(q)^12 ]
Moreover, for every finite fixed q̃∈ℰ_(q)^12 and ∀Λ∈^+*: δ_3,min ≤ δ_3,(ν)(q̃,Λ) ≤ δ_3,max.

* iii) For every n≥5 and ∀(q,Λ)∈(ℰ_(q)^4n×^+*): H^n+1_ν(q,Λ) = δ_n,(ν)(q,Λ)C^n+1_ν(q,Λ)/(3Λ n(n-1)) with δ_n,(ν)(q,Λ) = 3Λ n(n-1)/((γ̃_(ν-1)+ρ̃_(ν-1))+D_n,(ν-1)(H)-Λã_(ν-1)). Here, in the denominators of eq. <ref>, we defined the function D_n,(ν-1)(H) by: D_n,(ν-1)(H) = (|B^n+1_(ν-1)|-|A^n+1_(ν-1)|)/|H^n+1_ν-1|. Notice that one is allowed to use the absolute values in view of the hypothesis H_ν-1∈Φ_R.

Then the proof of theorem <ref> is obtained by application of the particular properties of the global terms C^n+1, B^n+1, A^n+1, D_n presented in propositions <ref>, <ref>, <ref>, and <ref>, which we prove in Appendix <ref>.

§ THE Φ_4^4 NONTRIVIAL SOLUTION

In this section we present the construction of the unique nontrivial solution of the renormalized Φ_4^4 equations of motion represented by the mapping ℳ^*: we define a closed ball S_r(H_T0)⊂Φ_R whose center is the "fundamental" tree type sequence H_T0 (introduced in section 3). We show the local contractivity of ℳ^* inside this neighbourhood of H_T0 and, consequently, the existence and uniqueness of a fixed point of the initial mapping ℳ inside Φ_R. For the construction of the solution we propose an iteration of the mapping ℳ^* starting from H_T0.

§.§ The closed ball S_r(0)(H_T0)⊂Φ_R

r(0) = sup_Λ,n,q[ {δ_n,max-δ_n,min}/δ_n,max; |H^2_max-H^2_min|/H^2_max; |∂^(0,1)/∂q^2[N_3H^4_max-N_3H^4_min]|/M̂_3^(0,1) ]

Here the notation ∂^(0,1) means either the zero or the first order partial derivative.

S_r(0)(H_T0) = { H∈Φ_R: ‖H-H_T0‖ ≤ r(0) }

§.§ The local contractivity in S_r(0)(H_T0)⊂Φ_R

* i) The subset (closed ball) S_r(0)(H_T0)⊂Φ_R is a complete metric subspace of B_R.
* ii) There exists a finite positive constant Λ^* (≈0.04) such that, when Λ∈]0,Λ^*], the mapping M^* is contractive inside S_r(0)(H_T0)⊂Φ_R via the Φ_4^4-iteration; so,
* iii) The unique nontrivial solution of the Φ_4^4 equations of motion lies in the neighbourhood S_r(0)(H_T0) of the fundamental sequence H_T0 and is constructed as the limit of the Φ_4^4-iteration.
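Before turning to the proof, the contractivity statement can be made tangible on the zero-dimensional caricature used earlier. The sketch below is entirely illustrative (toy_M is the two-component toy mapping of our own making; the sampling ball, the perturbation size and the trial count are arbitrary): it estimates a local Lipschitz constant by finite differences and shows it dropping below 1 only for small Λ.

```python
import numpy as np

def toy_M(H, lam):
    """Two-component caricature of M*: H = (H2, H4)."""
    H2 = 1.0 - lam * H[1]
    return np.array([H2, -6.0 * lam * H2**3])

def lipschitz_estimate(lam, trials=2000, eps=1e-6, seed=0):
    """max over samples of ||M(H+dH) - M(H)||_sup / ||dH||_sup."""
    rng = np.random.default_rng(seed)
    K = 0.0
    for _ in range(trials):
        H = np.array([1.0, -6.0 * lam]) + 0.1 * rng.standard_normal(2)
        dH = eps * rng.standard_normal(2)
        num = np.max(np.abs(toy_M(H + dH, lam) - toy_M(H, lam)))
        K = max(K, num / np.max(np.abs(dH)))
    return K

for lam in (0.02, 0.04, 0.08, 0.2):
    print(lam, round(lipschitz_estimate(lam), 3))  # contraction only for small Lambda
```

A Lipschitz constant below 1 on a closed ball is exactly the hypothesis of the Banach fixed point theorem invoked in item ii).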
Proof of theorem <ref>.
(i) By definition, the ball S_r(0)(H_T0) is a closed subset of the Banach space ℬ_R, so it is also a complete subspace.
(ii) In Appendix <ref> we give the proof of the local contractivity of ℳ^* inside the closed ball S_r(0)(H_T0)⊂Φ_R via the Φ_4^4-iteration. In other words, we show that, at a given order ν of the Φ_4^4-iteration and when Λ∈]0,0.04], there exist two real positive continuous functions of Λ, K(Λ)<1 and k(Λ)<1, such that:

‖ℳ^*(H_T0)-H_T0‖ ≤ k(Λ)r(0)

[ ‖ℳ^*(H_(ν-1))-ℳ^*(H_(ν-2))‖ ≤ K(Λ)‖H_(ν-1)-H_(ν-2)‖; k(Λ)+K(Λ) < 1 ]

(iii) This result is a direct consequence of (ii).

§ REFERENCES

(Q.F.T.) a) A.S. Wightman, Phys. Rev. 101, 860 (1956); b) R. Streater and A.S. Wightman, PCT, Spin and Statistics, and All That (Benjamin, New York, 1964); c) N.N. Bogoliubov, A.A. Logunov, and I.T. Todorov, Introduction to Axiomatic Quantum Field Theory (Benjamin, New York, 1975); d) R. Jost, The General Theory of Quantized Fields (American Math. Society, Providence, RI, 1965); e) N.N. Bogoliubov and D.V. Shirkov, Introduction to the Theory of Quantized Fields (Interscience, New York, 1968)
MM1 M. Manolessou, J. Math. Phys. a) 20, 2092 (1988); b) 30, 175 (1989); c) 30, 907 (1989); d) 32, 12 (1991)
MM2 M. Manolessou, a) Nucl. Phys. B (Proc. Suppl.) 6, 163-166 (1989), North-Holland; c) Contribution to the XIth International Congress of Math. Physics, Unesco-Sorbonne (D. Iagolnitzer, editor, 1994)
G.J. J. Glimm and A. Jaffe, a) Phys. Rev. 176, 1945 (1968); b) Commun. Math. Phys. 11, 99 (1968); c) Bull. Am. Math. Soc. 76, 407 (1969); d) Acta Math. 125, 203 (1970); e) Stat. Mech. and Quantum Field Theory, Les Houches 1970, 1-108 (Gordon and Breach, New York, 1971)
G.J.S. J. Glimm, A. Jaffe, and T. Spencer, Constructive Quantum Field Theory, Lecture Notes in Phys. Vol. 25, G. Velo and A. Wightman, eds. (Springer, 1973)
Sym K. Symanzik, J. Math. Phys. 7, 510 (1966)
Zim1 W. Zimmermann, Commun. Math. Phys. a) 6, 161 (1967); b) 10, 325 (1968)
Dys F.J. Dyson, Phys. Rev. 75, 486, 1736 (1949)
Sch J. Schwinger, Phys. Rev. 75, 651 (1949); 76, 790 (1949)
MM3 M. Manolessou, Ann. Phys. (NY) 152, 327 (1984)
Br.MM J. Bros and M. Manolessou-Grammaticou, Commun. Math. Phys. 72, 175-205, 207-237 (1980)
MM4 M. Manolessou-Grammaticou, Ann. Phys. (NY) 122 (1979)
MDu M. Manolessou and B. Ducomet, Ann. Inst. H. Poincaré Vol. 40, 4 (1984)
Jaf A. Jaffe, Commun. Math. Phys. 42, 281 (1965)
MM6 M. Manolessou, Local Contractivity of the Φ^4_0 mapping, http://arxiv.org/abs/1212.3693
GASM A. Alaie, Y. Sansonnet, S. Gladkoff and M. Manolessou, J. Nonlin. Math. Phys. 9, a) electronic version, 1 Febr. 2002; b) printed version, 77-85 (2002)
MMST M. Manolessou and S. Tafat, Numerical study of the local contractivity of the Φ^4_0 mapping, http://arxiv.org/abs/1212.3697
Vor A. Voros, private communication, CEN Saclay (1983)
G.J.1 J. Glimm and A. Jaffe, Commun. Math. Phys. 22, 253 (1971)
MM5 M. Manolessou, "The Positivity of the Φ^4_0 solution", Preprint E.I.S.T.I., July (1998)
O.S. K. Osterwalder and R. Schrader, Commun. Math. Phys. 31, 83 (1973)
W1 A. Wightman, private communication, Princeton-IHES (1983)
MM7 M. Manolessou, "The Osterwalder-Schrader Positivity of the Φ^4_4 solution", Preprint E.I.S.T.I. (in preparation)
Lass M. Lassalle, Commun. Math. Phys. 36, 185 (1974)
Wein S. Weinberg, Phys. Rev. 118, 838 (1960)
Bog.Par N. Bogoliubov and O.S. Parasiuk, Doklady Akad. Nauk URSS 100, 25 (1955)
Du B. Ducomet, Ann. Inst. H. Poincaré Vol. 41, 1 (1984)
Br.Ru.Ar a) H. Araki, J. Math. Phys. 2, 163 (1961); Suppl. Progr. Theor. Phys. 18 (1961); b) J. Bros, Analytic Methods in Mathematical Physics (Gordon and Breach, New York); c) D. Ruelle, Nuovo Cimento 19, 356 (1961); 2379 (1986); and 28(5), 1146 (1987)
§ APPENDICES

§.§ Proof of theorem <ref> (The non-triviality of Φ_R)

We consider the fundamental sequence H_T0 (cf. definition <ref>) and verify successively the properties of Φ_R. Precisely:

* [ ∀(q,Λ)∈(ℰ_(q)^4×^+*); H_T0^2 = (q^2+m^2)(1+δ_10(q,Λ)Δ_F); δ_10(q,Λ)Δ_F = -(ρ_0+Λδ_3,min([N_3]̃-[N_3]̃_(q^2+m^2)=0)Δ_F)/(1+ρ_0+Λ|a_0|); lim_(q^2+m^2)→0 H_T0^2(q,Λ)Δ_F(q) = 1 (lim_(q^2+m^2)→0 δ_10(q,Λ)Δ_F = 0); [N_3]̃Δ_F∼_q→∞ log(q^2+m^2); H_T0^2(q,Λ) ≤ (‖q‖^2+m^2)^(1+π^2/18); H^2_min(q) ≤ H_T0^2(q,Λ), H^2_min(q) = q^2+m^2 ] ▪
* Moreover, we verify that for every n=2k+1, k∈^*, and ∀(q,Λ)∈(ℰ_(q)^4n×^+*), the functions H_T0^n+1,
[ H_T0^4 = -δ_3,min(Λ)∏_l=1,2,3H^2_T0(q_l)Δ_F(q_l); H^n+1_T0(q,Λ) = δ_n,min(Λ)C^n+1_T0(q,Λ)/(3Λ n(n-1)) ]
(with {δ_n,min}_n≥3 the splitting sequence of definition <ref>), belong to the class 𝒜_4n^(α_n,β_n) of Weinberg functions with corresponding asymptotic indicatrices given by: ∀S⊂ℰ_(q)^4n: α_n(S) = {[ -(n-3), S⊄𝒦er λ_n; 0, S⊂𝒦er λ_n ]}; β_n(S) = ν_(n) = 2n ∀S⊂ℰ_(q)^4n.
* Trivially, the properties <ref>, <ref>, <ref> are satisfied by the definition and bounds of the splitting sequence {δ_n,min}_n≥3 of definition <ref>:
[ δ_3,min(Λ)∼_q→∞ Λ; lim_Λ→0 δ_3,min(Λ)/Λ = 6 ] and [ δ_n,min(q,Λ)∼_q→∞ Λ; lim_Λ→0 δ_n,min(Λ)/Λ = 3n(n-1) ]
Moreover, δ_n,min(Λ) < δ_n,max(Λ), and there is δ_∞ such that lim_n→∞δ_n,max(Λ) < δ_∞ ∀Λ∈^+*.
* [ γ_0 = 1; a_0 = -δ_3,min[N_3]̃_(q^2+m^2)=0; ρ_0 = Λδ_3,min∂/∂q^2[N_3]̃_(q^2+m^2)=0 ]
So we trivially obtain that H_T0 also verifies property 4 of Φ_R (cf. equations <ref>, <ref>, <ref>) for the renormalization constants, and this allows us to conclude. ▪

§.§ Proof of Proposition <ref> (The properties of the global terms B^n+1)

Let H_ν∈Φ_R. We first easily establish the following inequality ∀(q̃,Λ̃)∈(ℰ_(q)^12×]0,0.05]):
[ D_3,ν(H) = (|B^4_ν|-|A^4_ν|)/|H^4_ν| > 0; D_3,min(H) = (|B^4_min|-|A^4_max|)/|H^4_max| > 0 ]
Then we show that, ∀(q̃,Λ̃)∈(ℰ_(q)^4n×]0,0.05]) and n≥5, the sequence {δ̃^B_n}_n=2k+1,k≥2 = |B^n+1_min|/(n(n-1)|H^n+1_max|) increases with increasing n. In other words, we prove that: |B^n+1_min|/(n(n-1)|H^n+1_max|) ≥ |B^n-1_min|/((n-2)(n-3)|H^n-1_max|). For further purposes in our proof we shall use the following recurrence hypothesis, which is valid in the first step, i.e.
for n=5 (n-2=3), when Λ≤0.05: ∀n̅≤n-2: [N_2|H^n̅+1_min|]/|H^n̅+1_max| ≥ [N_2|H^n̅-1_min|]/|H^n̅-1_max|.

Now, by using definitions <ref>, we require, instead of <ref>, the following condition:
∑_ϖ_n(J)[N_2|H^j_2+2_min|][N^(j_1)_1|H^j_1+1_min|]/(n(n-1)|H^n+1_max|) ≥ ∑_ϖ_n-2(J)[N_2|H^j_2+2_min|][N^(j_1)_1|H^j_1+1_min|]/((n-2)(n-3)|H^n-1_max|)

Notice that, ∀n≥5, we can bound the left-hand-side sum and, respectively, the right-hand-side sum by their dominant contributions as follows:
∑_ϖ_n(J)[N_2|H^j_2+2_min|][N^(j_1)_1|H^j_1+1_min|] ≥ (n(n-1)/4)[N_2|H^n+1_min|]H^2_minΔ_F
and respectively: ∑_ϖ_n-2(J)[N_2|H^j_2+2_min|][N^(j_1)_1|H^j_1+1_min|] ≥ ((n-2)(n-3)/4)[N_2|H^n-1_min|]H^2_minΔ_F.
Then condition <ref> becomes: [N_2|H^n+1_min|]/|H^n+1_max| ≥ [N_2|H^n-1_min|]/|H^n-1_max|.
By application of definition <ref> the previous condition takes successively the following forms: δ_n,min[N_2|C^n+1_min|]/(δ_n,max|C^n+1_max|) ≥ δ_(n-2),min[N_2|C^n-1_min|]/(δ_(n-2),max|C^n-1_max|), or equivalently, by using definitions <ref> and proposition <ref> for the tree terms: δ_n,min𝒯̃_n[N_2|H^n-1_min|]/(δ_n,max𝒯_n|H^n-1_max|) ≥ δ_(n-2),min𝒯̃_n-2[N_2|H^n-3_min|]/(δ_(n-2),max𝒯_n-2|H^n-3_max|), where:
[ 𝒯_n = (n-3)^2/48+(n-3)/3+1; 𝒯̃_n = (n-3)^2/48 ]
Then, by using the recurrence hypothesis <ref> for ñ=n-2 and definitions <ref>, we obtain the final equivalent form of condition <ref>:
[1+3Λ(n-2)(n-3)][1+n(n-1)d_0](n-3)^2[(n-5)^2/48+(n-5)/3+1] / {[1+3Λ n(n-1)][1+(n-2)(n-3)d_0](n-5)^2[(n-3)^2/48+(n-3)/3+1]} ≥ 1

§.§ Proof of Proposition <ref> (The properties of the global terms A^n+1)

We show that, ∀(q̃,Λ̃)∈(ℰ_(q)^4n×]0,0.05]), the sequence {δ̃^A_n}_n=2k+1,k≥3 = |A^n+1_max|/(n(n-1)|H^n+1_max|) decreases with increasing n. In other words, we prove that: |A^n+1_max|/(n(n-1)|H^n+1_max|) ≤ |A^n-1_max|/((n-2)(n-3)|H^n-1_max|). As before, by application of definitions <ref> and proposition <ref> for the tree terms, we have:
[ |A^n+1_max| = ΛN_3|H^n+3_max|; |A^n-1_max| = ΛN_3|H^n+1_max|; ΛN_3|H^n+3_max|/(n(n-1)|H^n+1_max|) ≤ Λδ_n+2,max𝒯_n+2N_3|H^n+1_max|/(n(n-1)δ_n,max𝒯_n|H^n-1_max|) ]
By comparison with the condition <ref>, the following function f_d_1(n) should be smaller than 1:
f_d_1(n) = δ_n+2,max𝒯_n+2(n-2)(n-3)/(δ_n,max𝒯_n n(n-1)) ≤ 1
f_d_1(n) = (n+1)(n+2)[1+n(n-1)d_0][(n-1)^2/48+(n-1)/3+1](n-2)(n-3) / { n(n-1)[1+(n+1)(n+2)d_0][(n-3)^2/48+(n-3)/3+1] n(n-1) } ≤ 1
By giving the numerical constant d_0 different values in the interval [0.02, 0.45], one finds, after lengthy numerical calculations, that the difference between the denominator and the numerator is always positive. For the values of n (= x, continuous) in the interval ]7,200], the function f_d_1(n) increases continuously (with positive values always smaller than 1) up to the limit value 1. Notice that, as far as the |H^n+1_max|'s with n≥5 are concerned, the decrease behaviour with respect to the external momenta (i.e. α(S)≤-2) allows us to take the bounds numerically (at zero external momenta).
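This "lengthy numerical calculation" is trivially reproduced today. The sketch below is our own re-check of the inequality f_{d_1}(n) ≤ 1 over the quoted ranges (the step of 2 keeps n odd):

```python
def f_d1(n, d0):
    """The ratio f_{d_1}(n) from the monotonicity check of |A^{n+1}|."""
    T = lambda m: (m - 3)**2 / 48 + (m - 3) / 3 + 1   # partition count T_m
    num = (n + 1) * (n + 2) * (1 + n * (n - 1) * d0) * T(n + 2) * (n - 2) * (n - 3)
    den = n * (n - 1) * (1 + (n + 1) * (n + 2) * d0) * T(n) * n * (n - 1)
    return num / den

for d0 in (0.02, 0.1, 0.45):
    values = [f_d1(n, d0) for n in range(7, 201, 2)]
    print(d0, max(values) < 1.0, round(max(values), 6))  # always below 1
```

The scan confirms the stated behaviour: f_{d_1}(n) increases monotonically towards 1 from below, for every d_0 in the interval.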
§.§ Proof of the local contractivity of the mapping M^*, or the convergence of the Φ^4_4 iteration inside S_r(0) (theorem <ref>)

By the definition <ref> of the norm 𝒩, the inequalities <ref> and <ref> are equivalent to the following: ∀Λ∈]0,0.04]

[ sup_(n,q,Λ){ |H^n+1_ν-H^n+1_T0|/M_n; |∂^(0,1)/∂q^2 N_3(H^4_ν-H^4_T0)|/M̂_(3)^(0,1); |N_2(H^n+1_ν-H^n+1_T0)|/M̂_(n,2); |γ_ν-γ_0|/N_γ } ≤ k^(0)(Λ)r(0), k^(0)(Λ)<1 ]

[ sup_(n,q,Λ){ |H^n+1_ν-H^n+1_ν-1|/M_n; |∂^(0,1)/∂q^2 N_3(H^4_ν-H^4_ν-1)|/M̂_3^(0,1); |N_2(H^n+1_ν-H^n+1_ν-1)|/M̂_(3,2); |γ_ν-γ_ν-1|/N_γ } ≤ K^ν(Λ)‖H_ν-1-H_ν-2‖; K^ν(Λ)<1; k^(0)+K^ν<1 ]

* Proof of <ref>. Notice that the condition <ref> is stronger than the required <ref> (first order image of H_T0 under M^*). We first obtain the corresponding bounds for ν=1. We start from n=1, n=3 and generalize recurrently for every n≥5. Then we apply the same procedure for every ν≥2.
* a) Let n=1:
|H^2_1-H^2_T0|/M_1 ≤ Λ|N_3(H^4_max-H^4_min)|Δ_F/M_1
Then, using the definition <ref> of the ball S_r(0)(H_T0), the norm definition <ref> and proposition <ref>, we finally obtain:
[ |H^2_1-H^2_T0|/M_1 ≤ k^(0)_1,1 r(0); k^(0)_1,1 = 6Λ^2(q^2+m^2)^π^2/54(1+6Λ^2(q^2+m^2)^π^2/54)/(1+6(q^2+m^2)^π^2/54) ]
For n=3 we have: H^4_1(q,Λ) = -δ_3,(1)(q,Λ)∏_ℓ=1,2,3H^2_(1)(q_ℓ,Λ)Δ_F(q_ℓ); so
[ |H^4_1-H^4_T0|/M_3 ≤ ({δ_3,max-δ_3,min}/M_3)∏_ℓ=1,2,3H^2_(max)(q_ℓ,Λ)Δ_F(q_ℓ) + 3δ_3,max(|H^2_1-H^2_T0|/M_3)∏_ℓ=1,2H^2_(max)(q_ℓ,Λ)Δ_F(q_ℓ) ]
And again, by the definition of the norms and of r(0), and the previous result for n=1, we obtain:
[ |H^4_1-H^4_T0|/M_3 ≤ Λr(0)(1+(18)^2Λ^2), k^(0)_1,3 = Λ(1+(18)Λ^2) ]
Now, for every n≥5 and n̅≤n-2, we suppose that we have established an analogous inequality, namely:
[ |H^n̅+1_1-H^n̅+1_T0|/M_n ≤ r(0)k^(0)_1,n̅ ]
By using definition <ref> of S_r(0)(H_T0), the norm definition <ref>, the splitting properties, the bounds <ref> of the tree terms, and the recursion, we have successively:
[ |H^n+1_1-H^n+1_T0|/M_n ≤ {δ_n,max-δ_n,min}|C^n+1_max|/(3Λ n(n-1)M_n) + δ_n,max|C^n+1_1-C^n+1_T0|/(3Λ n(n-1)M_n) < r(0)(δ_n,max𝒯_nH^2_(max)/(n(n-1)δ_n,maxM_1)){|H^n-1_max|H^2_(max)/(M_n-2M_1) + k^(0)_1,n-2H^2_(max)/M_1 + 2k^(0)_1,1|H^n-1_max|/M_n-2} < r(0)((n-3)^2/(48n(n-1))){|H^n-1_max|/M_n-2 + k^(0)_1,n-2H^2_(max)/M_1 + 2k^(0)_1,1|H^n-1_max|/M_n-2} ]
In the last formula we used again the result of ref. <cit.> about the number 𝒯_n of different partitions inside the tree terms, as we did in proposition <ref>. Now we note that for every n we have: |H^n+1_max|/M_n < |H^n-1_max|/M_n-2. As a matter of fact, by application of proposition <ref>, and in particular the bounds <ref>, <ref> and the norm definition <ref>, we can write: |H^n+1_max|/M_n < (n-3)^2|H^n-1_max|/(48n(n-1)M_n-2) < |H^n-1_max|/(48M_n-2). It then follows that: |H^n+1_max|/M_n < (n-3)^2|H^n-1_max|/(48n(n-1)M_n-2) < |H^4_max|/(48M_3) < Λ. From these results and the recurrence hypothesis k^(0)_1,n-2 < k^(0)_1,3 we have:
[ |H^n+1_1-H^n+1_T0|/M_n < (r(0)/48){Λ+k^(0)_1,3+2k^(0)_1,1}; |H^n+1_1-H^n+1_T0|/M_n < r(0)k^(0)_1,n, k^(0)_1,n = k^(0)_1,3/16 ]
* b) In the case of ν≥2 we follow an analogous procedure and find similar results. We just notice that for n=1 the condition imposed on Λ in order that k^(0)_ν,1<1 is stronger than the one required for every k^(0)_ν,n<1, n≥3 (cf. figure <ref>).
Indeed, at every order ν≥2 of the Φ_4^4-iteration the contributions coming from the values of the renormalization constants γ̃, ρ̃, ã become nontrivial. Precisely:

[ |H^2_ν-H^2_T0|/M_1 ≤ (|ρ̃_ν-1-ρ_0||γ̃_0+ρ̃_0|+|γ̃_ν-1-γ̃_0|ρ̃_0)/(|γ̃_ν-1+ρ̃_ν-1||γ̃_0+ρ̃_0|M_1) + Λ|N_3H^4_ν-1(γ̃_0+ρ̃_0)-N_3H^4_min(γ̃_ν-1+ρ̃_ν-1)|Δ_F/(|γ̃_ν-1+ρ̃_ν-1||γ̃_0+ρ̃_0|M_1) + Λ|N_3H^4_ν-1(γ̃_0+ρ̃_0)-N_3H^4_min(γ̃_ν-1+ρ̃_ν-1)|_(q^2+m^2)=0 H^2_maxΔ_F/(|γ̃_ν-1+ρ̃_ν-1||γ̃_0+ρ̃_0|M_1) + Λ|H^2_max-H^2_min||N_3H^4_max|Δ_F/(|γ̃_ν-1+ρ̃_ν-1||γ̃_0+ρ̃_0|M_1) ]

Then, as before, by taking into account the norm definition <ref> and def. <ref> of S_r(0)(H_T0) and r(0), we first have:
[ |γ̃_ν-1-γ̃_0| ≤ r(0)(1+9Λ(1+6Λ^2)); |ρ̃_ν-1-ρ̃_0| ≤ Λr(0)M̂_3^(0,1); |ã_ν-1-ã_0| ≤ Λr(0)M̂_3^(0,1) ]
and finally (after some trivial estimations):
[ |H^2_ν-H^2_T0|/M_1 ≤ k_ν,1 r(0), k_ν,1(Λ) = 48Λ^2(1+10Λ); k_ν,1(Λ)<1 for Λ≤0.1 ]
Now, as before, ∀n≥3 we establish the following bounds:
[ |H^4_ν-H^4_T0|/M_3 ≤ Λr(0)(1+144Λ^2(1+10Λ)); k^(0)_ν,3 = Λ(1+144Λ^2(1+10Λ)) ]
and similar results for |N_3(H^4_ν-H^4_T0)|/M̂_3^(0,1) and |∂/∂q^2 N_3(H^4_ν-H^4_T0)|/M̂_3^(0,1). Moreover, we find again recurrently, using the same arguments as for ν=1, that for all n≥5: k^(0)_ν,n < k^(0)_ν,3 < k^(0)_ν,1 = 48Λ^2(1+10Λ).

Conclusion:
[ ∀ν≥2: ‖H_ν-H_T0‖ ≤ k^(0)(Λ)r(0); k^(0)(Λ) = 48Λ^2(1+10Λ) < 1 ∀Λ≤0.1 ]
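As a sanity check on this polynomial bound (illustrative only; scipy's root finder is a convenience, any bisection would do):

```python
from scipy.optimize import brentq

k0 = lambda lam: 48 * lam**2 * (1 + 10 * lam)
print(k0(0.04), k0(0.1))                       # 0.1075..., 0.96: both < 1
print(brentq(lambda l: k0(l) - 1.0, 1e-9, 1))  # ~0.102: the bound fails above this
```

The threshold sits just above 0.1, consistent with the safe condition Λ≤0.1 quoted in the Conclusion.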
* Proof of <ref>. The first step ν=2 being easily verified, we suppose that for all ν̅≤ν-1 the inequality <ref> is verified.
* a) Let n=1. By using proposition <ref> we write:
[ |H^2_ν-H^2_ν-1|/M_1 ≤ (|ρ̃_ν-1-ρ_ν-2||γ̃_ν-1+2(ρ_ν-1+ρ_ν-2)|+|γ̃_ν-1-γ̃_ν-2|ρ̃_ν-1)/(|γ̃_0+ρ̃_0|^2M_1) + Λ|N_3H^4_ν-1(γ̃_ν-2+ρ̃_ν-2)-N_3H^4_ν-2(γ̃_ν-1+ρ̃_ν-1)|Δ_F/(|(γ̃_ν-1+ρ̃_ν-1)(γ̃_ν-2+ρ̃_ν-2)|M_1) + Λ|N_3H^4_ν-1(γ̃_ν-2+ρ̃_ν-2)-N_3H^4_ν-2(γ̃_ν-1+ρ̃_ν-1)|_(q^2+m^2)=0 H^2_ν-1Δ_F/(|(γ̃_ν-1+ρ̃_ν-1)(γ̃_ν-2+ρ̃_ν-2)|M_1) + Λ|H^2_ν-1-H^2_ν-2||N_3H^4_ν-2|_q^2+m^2=0Δ_F/(|(γ̃_ν-1+ρ̃_ν-1)(γ̃_ν-2+ρ̃_ν-2)|M_1) ]
Then, using the norm definition <ref> and the definitions of the renormalization constants (cf. proposition <ref>), we first have:
[ |γ̃_ν-1-γ̃_ν-2| ≤ ‖H_ν-1-H_ν-2‖N_γ(Λ)|_q^2=0; |ρ̃_ν-1-ρ̃_ν-2| ≤ Λ‖H_ν-1-H_ν-2‖M̂_3^(0,1)|_q^2+m^2=0 ]
Then, after some elementary estimations, the first term of the R.H.S. of <ref> yields:
[ O_1/M_1 = (|ρ̃_ν-1-ρ_ν-2|γ̃_ν-2+|γ̃_ν-1-γ̃_ν-2|ρ̃_ν-2)/(|γ̃_0+ρ̃_0|^2M_1) ≤ K_1^ν,1(Λ)‖H_ν-1-H_ν-2‖; K_1^ν,1(Λ) = 12Λ < 1 for Λ≤0.08 ]
We take the sum of the second and third terms of <ref> and call it O_2. We obtain:
[ O_2/M_1 ≤ 2Λ|N_3H^4_ν-1(γ̃_ν-2+ρ̃_ν-2)-N_3H^4_ν-2(γ̃_ν-1+ρ̃_ν-1)|H^2_maxΔ_F/((γ̃_ν-1+ρ̃_ν-1)M_1) ≤ 2Λ{|N_3H^4_ν-1-N_3H^4_ν-2|H^2_max,ν-1Δ_F/((γ̃_ν-1+ρ̃_ν-1)M_1)} + 2ΛH^2_max,ν-1Δ_F{|N_3H^4_ν-2,max|(|γ̃_ν-1-γ̃_ν-2|+|ρ̃_ν-1-ρ̃_ν-2|)/((γ̃_ν-1+ρ̃_ν-1)(γ̃_ν-2+ρ̃_ν-2)M_1)}; O_2/M_1 ≤ K_1^ν,2(Λ)‖H_ν-1-H_ν-2‖; K_1^ν,2(Λ) = 12ΛH^2_maxΔ_F|N_3H^4_max|M̂_3/M_1 < 12Λ; K_1^ν,2 < 1 for Λ≤0.08 ]
The last term of <ref>, which we call O_3, yields:
[ O_3/M_1 ≤ Λ|H^2_ν-1-H^2_ν-2||N_3H^4_ν-2|_q^2+m^2=0Δ_F/(|(γ̃_ν-1+ρ̃_ν-1)(γ̃_ν-2+ρ̃_ν-2)|M_1) ≤ K_1^ν,3(Λ)‖H_ν-1-H_ν-2‖; K_1^ν,3(Λ) = 6Λ^2 ]
By using the corresponding bounds from <ref>, <ref>, <ref>, we obtain:
[ |H^2_ν-H^2_ν-1|/M̃_1 < K_1^ν(Λ)‖H_ν-1-H_ν-2‖; K_1^ν(Λ) = K_1^ν,1+K_1^ν,2+K_1^ν,3 = 6Λ(4+Λ); K_1^ν < 1 ∀Λ≤0.04 ]
* b) Let n=3. We write: |H^4_ν-H^4_ν-1|/M_3 ≤ A+B, where:
[ A = δ_3,ν∏_ℓ=1,2,3|H^2_ν(q_ℓ,Λ)-H^2_ν-1(q_ℓ,Λ)|Δ_F(q_ℓ)/M_3 ]
and B = |δ_3,ν-δ_3,ν-1|∏_ℓ=1,2,3H^2_ν-1(q_ℓ,Λ)Δ_F(q_ℓ)/M_3.
i) By using again proposition <ref>, the norm definition and the previous result <ref> for H^2_ν, we obtain:
[ A ≤ 3δ_3,maxK_1^νM_1‖H_ν-1-H_ν-2‖/M_3∏_ℓ=2,3H^2_(max)(q_ℓ,Λ)Δ_F(q_ℓ); A ≤ K^ν_3,A(Λ)‖H_ν-1-H_ν-2‖; K^ν_3,A(Λ) = (3K_1^ν/D̃_3,min)∏_l=2,3H^2_(max)(q_ℓ,Λ)/M_1(q_l) ]
with D̃_3,min = 1+ρ_0+Λ|a_0|+0.18Λ. Now, by application of the norm definition (<ref>) of M_1 and of the definition of H^2_max in proposition <ref>, we obtain a better estimate of the ratio H^2_max/M_1:
[ H^2_max/M_1 ≤ 1-6(q^2+m^2)^π^2/54(1-Λ^2)/(1+6(q^2+m^2)^π^2/54); H^2_max/M_1 ∼ 6Λ^2 ]
and so: K^ν_3,A(Λ) < 108Λ^4K_1^ν < 0.7K_1^ν (∀Λ<0.08).
ii) As far as the term B of the r.h.s. of <ref> is concerned, we use the same arguments as before and obtain: |δ_3,ν-δ_3,ν-1|∏_ℓ=1,2,3H^2_ν-1(q_ℓ,Λ)Δ_F(q_ℓ)/M_3 ≤ (B.1)+(B.2)+(B.3), with:
ii.1)
[ (B.1) ≤ ∏_ℓ=1,2,3(H^2_ν-1(q_ℓ)/M_1(q_ℓ))(|γ̃_ν-1-γ̃_ν-2|+|ρ̃_ν-1-ρ_ν-2|+|ã_ν-1-ã_ν-2|)/(D̃_3,ν-1D̃_3,ν-2); D̃_3,ν-1 = γ̃_ν-1+ρ̃_ν-1+D_3,ν-1-Λã_ν-1; (B.1) ≤ K^ν_3,B.1(Λ)‖H_ν-1-H_ν-2‖; K^ν_3,B.1 = ∏_l=1,2,3(H^2_(max)(q_ℓ)/M_1(q_l))(γ_max+2ΛM̂_3^(0,1))/(1+ρ_0+Λ|a_0|+0.18Λ)^2; K^ν_3,B.1 ≤ (6Λ)^3(1+9Λ); K^ν_3,B.1(Λ)<1 ∀Λ≤0.05 ]
ii.2)
[ (B.2) ≤ 6Λ∏_ℓ=1,2,3H^2_ν-1(q_ℓ)Δ_F(q_ℓ)|H^4_ν-1-H^4_ν-2||B^4_ν-2-A^4_ν-2|/(M_3|H^4_ν-1||H^4_ν-2|D̃_3,ν-1D̃_3,ν-2) ]
[ |H^4_ν-1| = 6Λ∏_ℓ=1,2,3H^2_ν-1(q_ℓ)Δ_F(q_ℓ)/D̃_3,ν-2; |B^4_ν-2-A^4_ν-2|/|H^4_ν-2| = D_3,ν-2 ]
[ (B.2) ≤ K^ν_3,B.2(Λ)‖H_ν-1-H_ν-2‖; K^ν_3,B.2 = D_3,max/D̃_3,min = 9Λ(1+6Λ^2)/(1+ρ_0+Λ|a_0|+0.18Λ); K^ν_3,B.2(Λ)<1 ∀Λ≤0.1 ]
ii.3)
[ (B.3) < 6Λ∏_ℓ=1,2,3H^2_ν-1(q_ℓ)Δ_F(q_ℓ)|B^4_ν-1-B^4_ν-2|/(M_3|H^4_ν-1|D̃_3,ν-1D̃_3,ν-2); (B.3) < (9Λ/(M_3D̃_3,ν-1)){|N_2(H^4_ν-1-H^4_ν-2)|H^2_maxΔ_F} + (9Λ/(M_3D̃_3,ν-1)){|N_2H^4_max||H^2_ν-1-H^2_ν-2|Δ_F} ]
Notice that we have used the sign properties of B^4 and A^4 together with the definitions of the mapping (cf. in particular equation <ref>). Then, by application of the norm definitions, we obtain:
[ (B.3) < K^ν_3,B.3(Λ)‖H_ν-1-H_ν-2‖; K^ν_3,B.3 = 18ΛH^2_max/(D̃_3,minM_1); K^ν_3,B.3(Λ)<1 ∀Λ≤0.05 ]
Finally, taking into account <ref>, <ref>, <ref>, <ref> and <ref>, we obtain that K^ν_3<1 ∀Λ≤0.05. ▪
* c) Under weaker conditions on Λ, and using the analogous procedure (norm definitions together with the properties in Φ_R, etc.),
we find that:
c.i) there is a positive continuous function of Λ, K_γ^ν(Λ)<1, such that
[ |γ̃_ν-γ̃_ν-1|/N_γ ≤ K_γ^ν(Λ)‖H_ν-1-H_ν-2‖; K_γ^ν(Λ) = 3K_1^νγ_maxM_1^-2|_q^2=0+K_3^νγ_max^2 < 1 ]
c.ii)
[ |ρ̃_ν-ρ̃_ν-1|/M_3^(0,1) ≤ Λ|∂/∂q^2 N_3(H^4_ν-H^4_ν-1)|/M̂_3^(0,1) ≤ K_ρ^ν(Λ)‖H_ν-1-H_ν-2‖; K_ρ^ν = ΛK_3^ν < 1 ]
c.iii) |ã_ν-ã_ν-1|/M_3^(0,1) ≤ K_3^ν(Λ)‖H_ν-1-H_ν-2‖
c.iv) By using the basic splitting properties of H^4 in Φ_R, and arguments analogous to the above, we show that
[ |N_2H_ν^4| ≤ 6Λ[N_2^3]̃∏_l=1^2M_1(q_l); |N_2(H^4_ν-H^4_ν-1)|/M̂_(3,2) ≤ K_3^ν(Λ)‖H_ν-1-H_ν-2‖ ]
Finally:
[ sup_(q,Λ){ |∂^(0,1)/∂q^2 N_3(H^4_ν-H^4_ν-1)|/M̂_3^(0,1); |N_2(H^4_ν-H^4_ν-1)|/M̂_(3,2); |γ_ν-γ_ν-1|/N_γ } ≤ K_3,2,γ^ν(Λ)‖H_ν-1-H_ν-2‖; K_3,2,γ^ν(Λ)<1 ] ▪
* d) Let n≥5. We suppose that for every ñ≤n-2 the first property of <ref> is verified in the following sense:
[ ∀Λ≤0.05: sup_(q,Λ){|H^ñ+1_ν-H^ñ+1_ν-1|/M_ñ} ≤ K^ν_ñ(Λ)‖H_ν-1-H_ν-2‖; K^ν_ñ(Λ) < K^ν_ñ-2(Λ) ≤ … ≤ K^ν_3(Λ) ]
We show this property for ñ=n by using again the definition of the norm (<ref>), of the mapping M^* (cf. proposition <ref>), and the properties in Φ_R (proposition <ref>):
[ |H^n+1_ν-H^n+1_ν-1|/M_n ≤ A_n+B_n; A_n = δ_n,ν|C^n+1_ν-C^n+1_ν-1|/(3Λ n(n-1)M_n); B_n = |δ_n,ν-δ_n,ν-1||C^n+1_ν-1|/(3Λ n(n-1)M_n) ]
i)
[ A_n = δ_n,ν|C^n+1_ν-C^n+1_ν-1|/(3Λ n(n-1)M_n) ≤ K^ν_n-2∏_l=2,3(T_nH^2_(max)(q_ℓ,Λ)/(n(n-1)M_1(q_l)))‖H_ν-1-H_ν-2‖ + 2K^ν_1(T_nH^n-1_(max)(q_(n-2))/(n(n-1)M_n-2(q_(n-2))))‖H_ν-1-H_ν-2‖; A_n ≤ K^ν_n,A‖H_ν-1-H_ν-2‖; K^ν_n,A(Λ) = (H^2_(max)/(24M_1)){K^ν_n-2H^2_(max)/M_1+K^ν_1|H^n-1_(max)|/M_n-2} ]
We easily verify that, under a condition weaker than Λ≤0.05, K^ν_n,A ≤ K^ν_n-2,A<1. ▪
ii) For the term B_n of <ref> we use the same arguments as before and obtain: B_n = |δ_n,ν-δ_n,ν-1||C^n+1_ν-1|/(3Λ n(n-1)M_n) ≤ |B_n,1|+|B_n,2|+|B_n,3|, with:
ii.1)
[ |B_n,1| ≤ |C^n+1_ν-1|(|γ̃_ν-1-γ̃_ν-2|+|ρ̃_ν-1-ρ_ν-2|+|ã_ν-1-ã_ν-2|)/(M_nD̃_n,ν-1D̃_n,ν-2) ]
By the definitions of the norm (<ref>), of the mapping M^* (cf. prop. <ref>) and the properties in Φ_R (proposition <ref>), we obtain:
[ D̃_n,ν-1 = γ̃_ν-1+ρ̃_ν-1+D_n,ν-1-Λ|ã_ν-1|; |B_n,1| ≤ |H^n+1_ν-1|‖H_ν-1-H_ν-2‖(γ_max+2ΛM̂_n^(0,1))/(M_nD̃_n,ν-1); |B_n,1| ≤ K^ν_n,B.1‖H_ν-1-H_ν-2‖; K^ν_n,B.1 ≤ ∏_l=2,3H^2_(max)(q_ℓ)(n-3)^2|H^n-1_max|(γ_max+2ΛM̂_n^(0,1))/(M_1(q_l)24n(n-1)M_n-2D̃_3,min) ]
Now, using the evident bound |H^n-1_max|/M_n-2 ≤ ∏_l=2,3H^2_(max)(q_ℓ)|H^n-3_max|/(M_1(q_l)24M_n-4) and the analogous definition <ref> of K^ν_n-2,B.1, we have:
[ K^ν_n,B.1 ≤ ∏_l=2,3(H^2_(max)(q_ℓ)/M_1(q_l))((n-3)^2(n-2)(n-3)/(24n(n-1)(n-5)^2))K^ν_n-2,B.1 ⇔ K^ν_n,B.1 < K^ν_n-2,B.1 < 1 ▪ ]
ii.2) By analogy with n=3:
[ |B_n,2| ≤ (|C^n+1_ν-1|/(M_nD̃_n,ν-1D̃_n,ν-2))|H^n+1_ν-1-H^n+1_ν-2||B^n+1_ν-2-A^n+1_ν-2|/(|H^n+1_ν-1||H^n+1_ν-2|); |B_n,2| ≤ K^ν_n,B.2‖H_ν-1-H_ν-2‖; K^ν_n,B.2 = D_n,max/D̃_n,min ∼ D_n-2,max/D̃_n-2,min < 1 ∀Λ≤0.05 ▪ ]
ii.3) Then, by a procedure analogous to the above, we have:
[ |B_n,3| < (|C^n+1_ν-1|/(M_nD̃_n,ν-1D̃_n,ν-2))|B^n+1_ν-1-B^n+1_ν-2|/|H^n+1_ν-1|; |B_n,3| < (3Λ n(n-1)/(2M_nD̃_n,ν-1)){|N_2(H^n+1_ν-1-H^n+1_ν-2)|H^2_maxΔ_F} + (3Λ n(n-1)/(2M_nD̃_n,ν-1)){|N_2H^n+1_max||H^2_ν-1-H^2_ν-2|Δ_F}; |B_n,3| < K^ν_n,B.3‖H_ν-1-H_ν-2‖; K^ν_n,B.3 = δ_∞[N_2^n]̃_q=0H^2_max/(2M_1)(1+T_n/(n(n-1))); K^ν_n,B.3 ∼ K^ν-2_n,B.3 < 1 ∀Λ≤0.05 ▪ ]
Finally, by addition of <ref>, <ref>, <ref>, <ref>, <ref> and <ref>, we obtain the proof of the recursion <ref>, and, by using also the result <ref>, the proof of the contractivity criterion <ref> follows. ▪
Proof of <ref>
[ sup_n, q,Λ{ M_n^-1(q,Λ)| H^n+1_ν- H^n+1_ν-1| ;M̃_3^-1| N_3(H^4_ν-H^4_ν-1)| ; M̅_3^-1|∂/∂ q^2[N_3(H^4_ν-H^4_ν-1)|}; sup_(n,Λ)| H_(0)^n+1^'-H_(0)^n+1|/M_n≤k^(0)(Λ) ρ ]
Precisely:
[ | H^2_ν- H^2_T0|/M_1≤|ρ̃_ν -1-ρ̃_0| γ̃_0+ρ̃_0|+|γ̃_ν -1- γ̃_0|ρ̃_0/|γ̃_ν -1+ρ̃_ν -1|||γ̃_0+ρ̃_0| | M_1;;+ Λ| N_3H^4_ν -1(γ̃_0+ρ̃_0)-N_3H^4_min(γ̃_ν -1+ρ̃_ν -1)|Δ_F/ |γ̃_ν -1+ρ̃_ν -1|||γ̃_0+ρ̃_0| | M_1;; +Λ| N_3H^4_ν -1(γ̃_0+ρ̃_0)-N_3H^4_min(γ̃_ν -1+ρ̃_ν -1)| _(q^2+m^2)=0 H^2_maxΔ_F/|γ̃_ν -1+ρ̃_ν -1|||γ̃_0+ρ̃_0| | M_1;; +Λ| H^2_max- H^2_min|| N_3H^4_max|Δ_F/|γ̃_ν -1+ρ̃_ν -1|||γ̃_0+ρ̃_0| | M_1 ]
Then, as before, by taking into account the definition of the norms and of r(0), we first have:
[ |γ̃_ν -1- γ̃_0|≤ r(0) (1+9Λ(1+6Λ^2));|ρ̃_ν -1- ρ̃_0|≤Λ r(0) M̂_3; |ã_ν -1-ã_0|≤Λ r(0) M̃_3 ]
and finally (after some trivial estimations):
[ | H^2_ν- H^2_T0|/M_1≤k_ν,1 r(0) k_ν,1(Λ)=48Λ^2(1+10Λ); k_ν,1(Λ)<1:Λ≤ 0.1 ]
Now, as before, ∀ n≥ 3 we establish the following bounds:
[ | H^4_ν- H^4_T0|/M_3≤ Λ r(0)(1+ 144Λ ^2 (1+10Λ));k^(0)_ν,3=Λ(1+ 144Λ ^2 (1+10Λ));]
and similar results for: | N_3(H^4_ν-H^4_T0)|/M̃_3 and |∂/∂ q^2N_3(H^4_ν-H^ 4_T0)|/M̂_3^-1
Moreover, we find again recursively, using the same arguments as when ν=1, that for all n≥ 5: k^(0)_ν,n< k^(0)_ν, 3 < k^(0)_ν,1 =48Λ^2(1+10Λ)
Conclusion: ∀ν, ‖H_ν-H_T0‖ ≤ k^(0)(Λ)r(0), k^(0)(Λ)=48Λ^2(1+10Λ)
Proof of theorems <ref>, <ref>, and propositions <ref>, <ref>, <ref>, <ref>.
* Proof of theorem <ref>
Let H∈Φ_R; then ∀Λ≤ 0.05:
i) For n=1 the positivity of H^2 follows immediately from the bounds (cf. <ref>): H^2_min(q, Λ)≤ H^2 (q,Λ) ≤ H^2_max(q,Λ)
ii) For n=2k+1, k∈^*, we consider the global term (the “Φ_4^4 operation”) C^n+1(q,Λ) = - 6Λ∑_ϖ_n(I)∏_l=1,2,3 N^(i_l)_1H^i_l+1 (q_i_l,Λ)
a. Starting from n=3 with H^4(q,Λ)=-δ_3(q,Λ) ∏_ℓ=1, 2, 3H^2(q_i,Λ)Δ_F(q_i) < 0 (in view of the positivity of H^2 established in i)), the sign property of H^4 is established. Then, by applying recursively the sign property of every H^ñ+1 point function with ñ≤ n-2 (cf. property iii) of the theorem), we obtain for every term in the sum of C^n+1: ∀ i_l≤ n-2 , H^i_l +1=(-1)^i_l -1/2|H^i_l +1| so the corresponding product verifies: ∏_l=1,2,3 N^(i_l)_1H^i_l+1 (q_i_lΛ)=(-1)^(n-3)/2∏_l=1,2,3 N^(i_l)_1 (q_i_l)|H^i_l+1| and C^n+1 =- 6Λ∑_ϖ_n(I)∏_l=1,2,3 N^(i_l)_1H^i_l+1 (q_i_l,Λ)= (-1)^(n-1)/2|C^n+1| ▪
b. The proof is obtained by application of theorem <ref> (ii).
c. The proof is obtained by application of proposition <ref>.
d. We start from n=3 and, by using the definitions <ref> and <ref>, we establish recursively the bounds of the tree terms C^n+1 for every n≥ 5.
iii) From the sign property established before for n=3 (cf. ii)a)) we have recursively for n ≥ 3 the splitting property: H^n+1(q,Λ)=δ_n(q,Λ)C^n+1(q,Λ)/3Λ n(n-1) with δ_n(q,Λ)>0, because for fixed q̃∈ℰ_(q)^4n and ∀Λ∈^+*, δ_n,min(Λ) ≤δ_n(q̃, Λ)≤δ_n,max (Λ). Here {δ_n,min}, {δ_n,max} are the positive lower and upper splitting-sequence bounds of the solution of the zero-dimensional problem previously presented in definition <ref>. Finally, the good sign property of the tree terms ends the proof.
iv) The proof is again obtained by using the definitions of the bounds. The tree term C^n+1 verifies the following properties:
a. The “good sign” property: ∀ n=2k+1 (k≥ 1) C^n+1=(-1)^n-1/2|C^n+1|
b. It is a R.Φ.C. in the sense of definition <ref>; consequently it verifies Euclidean invariance and the linear axiomatic field theory properties.
c. For every n=2k+1, k≥ 1, the function C^n+1(q, Λ) belongs to the class 𝒜_4n^(α_nβ_n) of Weinberg functions such that ∀S⊂ℰ_(q)^4n the corresponding asymptotic indicatrices are given by: α_n(S)={[ -(n-3), S⊄𝒦er λ_n; 0 S⊂𝒦er λ_n ]} β_n(S)={[2(n+1) S⊄𝒦er λ_n; 0 S⊂𝒦er λ_n ]}
d) For every n=2k+1, k∈^*: |C^n+1_min(q, Λ)|≤| C^n+1(q,Λ)|≤|C^n+1_max(q, Λ)|
iii) [ ∀ n=2k+1(k≥ 1) H^n+1=(-1)^n-1/2|H^n+1| ]
iv) ∀ n=2k+1; k∈ N: |H^n+1_min(q, Λ)|≤| H^n+1(q,Λ)|≤| H^n+1_max(q, Λ)|
Let H∈Φ_R.
i) For n=1 the positivity of H^2 follows immediately from the bounds (cf. <ref>): H^2_min(q, Λ)≤ H^2 (q,Λ) ≤ H^2_max(q,Λ)
i.b) For every n ≥ 3 we have the splitting property: H^n+1(q,Λ)=δ_n(q,Λ)C^n+1(q,Λ)/3Λ n(n-1) with δ_n(q,Λ)>0, because for fixed q̃∈ℰ_(q)^4n and ∀Λ∈^+*, δ_n,min(Λ) ≤δ_n(q̃, Λ)≤δ_n,max (Λ) (where {δ_n,min}, {δ_n,max} are the positive lower and upper splitting-sequence bounds of the solution of the zero-dimensional problem previously presented in definition <ref>). On the other hand, starting from n=3 with H^4(q,Λ)=-δ_3(q,Λ) ∏_ℓ=1, 2, 3H^2(q_i,Λ)Δ_F(q_i) < 0 (in view of the positivity of H^2 established in i)) and applying recursively the sign property of every H^ñ+1 point function with ñ≤ n-2, we obtain for every term in the sum of C^n+1: ∀ i_l≤ n-2 , H^i_l +1=(-1)^i_l -1/2|H^i_l +1| so the corresponding product verifies: ∏_l=1,2,3 N^(i_l)_1H^i_l+1 (q_i_lΛ)=(-1)^(n-3)/2∏_l=1,2,3 N^(i_l)_1 (q_i_l)|H^i_l+1| and C^n+1 = - 6Λ∑_ϖ_n(I)∏_l=1,2,3 N^(i_l)_1H^i_l+1 (q_i_l,Λ)= (-1)^(n-1)/2|C^n+1| as follows from theorem <ref>.
ii) By application of definition <ref>, and in an analogous recursive way as before in i), we obtain directly the proof of the upper and lower bounds: ∀ n=2k+1; k∈ N^* |H^n+1_min(q, Λ)|≤| H^n+1(q,Λ)|≤| H^n+1_max(q, Λ)| ▪
* Proof of proposition <ref>
We first remark that, in view of the property above, when n≥ 5 the tree terms are bounded by their corresponding extremal values. We take into account the result of ref. <cit.> about the number 𝒯_n of different partitions of the set ϖ_n(I) in the sum ∑_ϖ_n(I); precisely, for n=5 and n=7 respectively: 𝒯_5=1; 𝒯_7= 2, and for every n≥ 9: 𝒯_n=(n-3)^2/48+(n-3)/3+1
Moreover, we use the following notations for the particular terms: C^n+1_(n/3,n/3,n/3)(min)=∏_l=1,2,3 N^(i_l)_1H^i_l+1_(min) [ i_l=n/3; ∀ l=1,2,3 ] C^n+1_(n-2, 1, 1)(max)=∏_l=1,2,3 N^(i_l)_1H^i_l+1_(max) [i_1=n-2; i_l =1 ∀ l=2 , 3 ]
i. The sign property is directly obtained from the hypothesis H∈ Φ_R and the sign property i) of theorem <ref> of |H^n+3|:
[ A^n+1= - Λ H^n+3= -Λ (-1)^n+1/2|H^n+3|=; =Λ (-1)^ 2 (-1)^n-1/2|H^n+3| =(-1)^n-1/2|A^n+1| ▪ ]
* n=3 In an analogous way:
[ ; H^4' (Λ) = - δ_3'(Λ)[H^2']^3; with δ_3^'(Λ)= 6Λ/1+ D_3 D_3=6Λ H^2(3/2- |H^6|/6|H^4||H^2|);; |H^6|/|H^4||H^2|≤δ_5max|H^4||H^2|/|H^4|≤60Λ |H^2_max|/1+60Λ d_0; d_0=0.01; D_3>0; ; 3- 0.6 Λ - 20 Λ (1+6Λ^2)^2>0; Λ≤ 0.05; 1+9Λ > D_3; ;⇒ δ_3min <δ_3^'< 6Λ =δ_3,max.; (n=1); H^4'<0, |H^4_min|< |H^4'| <|H^4_max| ▪ ]
* n=5 In an analogous way:
[; H^6' (Λ) = δ_5^'(Λ)C^6'/60Λ; with δ_5^'(Λ)= 60Λ/1+ D_5 D_5=|B^6|/|H^6|-| A^6|/|H^6| ]
We then show that:
[ ∀ Λ≤ 0.05; ∃ d_5>d_0; D_5> 60Λ d_5. ]
In order that the previous condition <ref> be satisfied, and in view of the hypothesis H∈Φ_0 (application of bounds and signs in the expression of D_5), it is sufficient to verify the following stronger condition:
[ (6+45Λ+60Λ d_0)(1+3Λ 42 d_0); - 2Λ 42H^2_(max)(1+9Λ)- 20 d_5(1+9Λ)(1+3Λ 42 d_0)> 0 ]
For fixed d_0=0.01 and Λ∼ 0.05 the latter allows us to require the following stronger condition: 2.4>20 d_5 × 2.0197, i.e. d_5∼ 0.059>d_0 ▪
Taking into account the latter, together with the earlier results for n=1 and n=3, we finally obtain: H^6'>0, |H^6_min|< |H^6'| <|H^6_max| ▪
* Proof of proposition <ref>
i. As previously, the sign property is directly obtained from the hypothesis H∈ Φ_0:
[ A^n+1= - Λ H^n+3= -Λ (-1)^n+1/2|H^n+3|=; =Λ (-1)^ 2 (-1)^n-1/2|H^n+3| =(-1)^n-1/2|A^n+1| ▪ ]
ii.
In order to ensure the decrease property we show that: |A^n+1_max|/n(n-1)|H^n+1_max|< |A^n-1_max|/(n-2)(n-3)|H^n-1_max|
or, in an equivalent way, by taking into account the definitions of the splitting sequences and Green's functions in Φ_0 <ref>, <ref>, as well as proposition <ref> (upper bounds of the tree terms C^n+1), we require: δ_n+2,max T_n+2/n(n-1)< δ_n,max T_n/(n-2)(n-3)
or equivalently, by inserting the corresponding expression of the number T_n of different partitions in C^n+1 (cf. <ref>):
[ (n+1)(n+2)/1+3Λ(n+1)(n+2)d_0(1+15/n+48/n(n-1))<; n(n-1)/1+3Λ n(n-1)d_0(1+15/(n-2)+48/(n-2)(n-3)) ]
When we fix d_0=Λ=0.05 we verify (after some elementary but long manipulations) that the dominant contribution (i.e. ∼ n^2) of the function
[ f(n)=f_L(n) - f_R(n); ; f_L(n)= (n+1)(n+2)/1+3Λ(n+1)(n+2)d_0(1+15/n+48/n(n-1)); f_R(n)=n(n-1)/1+3Λ n(n-1)d_0(1+15/(n-2)+48/(n-2)(n-3)); ;]
is negative. More precisely we find:
[ f(n)= 2(2n+1)+ 30 n [1-2/n-2/n^2]/n-2-192n [2-2/n-3/n^2]]/(n-1)(n-2)(n-3)-;-0.0075 30n^2(n+1)(n+2) /(n-2)(n-3)[1+72/30n-198/30n^2] ]
Of course, an analogous result is also obtained if we consider n=x (as a continuous variable). The derivative of the left hand side of the inequality <ref> is negative when we fix d_0∼Λ≤ 0.05 and for x >7. In figure <ref> we graphically illustrate the behavior of the function: f_L(n)= (n+1)(n+2)/1+3Λ(n+1)(n+2)d_0(1+15/n+48/n(n-1))
We notice that when n=x varies continuously from x=7 up to x= 4001, f_L(x) decreases continuously down to the limit value of (0.15× 0.05=0.0075)^-1∼ 133.33…. This ends the proof of proposition <ref>. ▪
* Proof of proposition <ref>
i. As previously, the sign property is directly obtained from the hypothesis H∈ Φ_0:
[ B^n+1(Λ) = - 3Λ∑_ϖ_n(J)n! j_1!j_2! H^j_2+2(Λ) H^j_1+1(Λ)=; =- 3Λ∑_ϖ_n(J)n! j_1!j_2! (-1)^j_2/2| H^j_2+2|(-1)^j_1-1/2 |H^j_1+1|;=(-1)^n-3/2|B^n+1| ▪ ]
ii. In order to ensure the increase property we show that: |B^n+1_min|/n(n-1)|H^n+1_min|> |B^n-1_min|/(n-2)(n-3)|H^n-1_min|
The left hand side of <ref> can be expressed in terms of the smallest contribution B^n+1_(j_1, j_2), min (with j_1=n-1/2, j_2= n+1/2) times the number of different partitions ϖ_n(J) in the sum of B^n+1_min, i.e. n-1/2; in other words: n-1/2B^n+1_(j_1=n-1/2,j_2= n+1/2)
Reminder: B^n+1_(j_1=n-1/2,j_2= n+1/2)=n ! H^(j_1+1)H^(j_2+2) ((n-1)/2)!((n+1)/2)!
In a similar way the r.h. side B^n-1_min can be substituted by n-3/2B^n-1_(j_1=n-1/2,j_2= n-3/2)
So, by taking into account the definitions of the splitting sequences and Green's functions in Φ_0 <ref>, <ref>, as well as proposition <ref> (upper bounds of the tree terms C^n+1), we require in an equivalent way (after some elementary simplifications) the sufficient condition:
[ 4δ_n+5/2,min𝒯_n+5/2(n-2)/δ_n,min𝒯_n'(n+1)> 1; ; ; ; f_B= (n-2)(n+5)(n+3)[1+3Λ n(n-1)][(n-1)^2+64(n-1)+192]/(n+1)[4+3Λ (n+5)(n+3)]n(n-1)[(n-3)^2+16(n-3)+48]> 1 ]
The difference between the numerator and the denominator of f_B, let us say Num_fB- Den_fB, is a polynomial with positive coefficients in the dominant contributions: Num_fB- Den_fB ∼ ( O(n^7)+ O(n^6)+ O(n^5))>0
Another way to be convinced that the condition <ref> is verified is to put n= x (continuous) and represent graphically the function f_B(x) at fixed Λ=0.05.
Figure <ref> shows precisely that f_B decreases continuously, always from values larger than 1, down to the limit value of 1. This ends the proof of proposition <ref>. ▪

§.§ The different aspects of the analysis
We have studied three different aspects of our analysis and consequently have obtained three sets of figures, which we describe in detail in the following. The general conclusion of this numerical experiment appears clearly to be the same in all three sets. We notice that the first three orders of the iteration of δ_n,max and δ_n,min yield different curves, which become closer and closer to each other until the fourth-order iteration. There, we observe a perfect coincidence of δ_n,max and δ_n,min at the fourth, fifth and sixth orders. So, when the value of Λ lies in [0.001, 0.1], the neighborhood in which the fixed point of the contractive mapping lies is manifestly around the {H_0} sequence (first-order iteration of the {δ_n,min} sequence). This fact is enhanced by the following observation: for a given value of Λ we remark that the sequence {δ_n, max} decreases during the iteration procedure (resp. the sequence {δ_n, min} increases). The two sets are almost the same up to the fourth iteration. We notice that the decreasing rate of {δ_n, max} is more important than the increasing rate of {δ_n, min}, and this again underlines the fact that the {H_0} neighborhood is the best for the local contractivity. This result is more satisfactory (from the point of view of the bound on Λ) in comparison with the theoretical proof of the validity of the contractivity criterion at Λ≤ 0.01.
§.§.§ First set of figures
The first set of figures displays the convergence of the mapping for different values of Λ, using both δ_n,max and δ_n,min as starting points. This set represents the results of twenty iterations of the mapping for different values of Λ (i.e. Λ∈{ 0.001, 0.01, 0.03, 0.05, 0.075, 0.1 }) at fixed n. We have chosen ten different values of n: n=7, 9, 11, 13, 15, 17, 19, 21, 23, 25. The stability of the values is already attained at the tenth iteration for all values of n.
§.§.§ Second set of figures
The second set displays the convergence of the mapping up to the sixth iteration, for different values of Λ.
§.§.§ Third set
The third set of figures displays the summary of the previous configuration, for the sixth iteration. This figure represents the results of the mapping of the δ_n functions, for n=7, 9, 11, 13, 15, 17, 19, 21, 23, 25, for different values of Λ (i.e. Λ∈{ 0.001, 0.01, 0.03, 0.05, 0.075, 0.1 }) and for all 20 iterations. The figure illustrates clearly the convergence of the iteration to the fixed point. We remark also that the convergence is more rapid for sufficiently small values of Λ (and even for values bigger than the critical point 0.03, owing to the small values of n). Our experiments show that stability is impossible, for example for n=1000, when Λ becomes bigger than 0.05. This figure represents the results of the mapping of the δ_n functions as surfaces of n and ν for fixed Λ (at the six different values). We remark in this figure that:
* For small values of Λ (Λ≤ 0.001), the "decrease" properties of the δ_n's are not apparent.
* For the intermediate ("good") values of Λ, the surfaces show the expected concavity as ν (the iteration number) increases.
* For large values of Λ (bigger than the critical value 0.03), we observe a rapidly increasing surface (because we are far from the stability and contractivity criteria).
http://arxiv.org/abs/1706.08758v1
{ "authors": [ "Marietta Manolessou" ], "categories": [ "math-ph", "math.MP" ], "primary_category": "math-ph", "published": "20170627100225", "title": "Local contractivity of the $Φ_4^4$ mapping" }
The standard mean-field treatment of inter-particle attraction in classical DFT is better than one might expect
===============================================================================================================

Andrew J. Archer, Blesson Chacko, and Robert Evans

Department of Mathematical Sciences, Loughborough University, Loughborough, LE11 3TU, UK
H. H. Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK

In classical density functional theory (DFT) the part of the Helmholtz free energy functional arising from attractive inter-particle interactions is often treated in a mean-field or van der Waals approximation. On the face of it, this is a somewhat crude treatment as the resulting functional generates the simple random phase approximation (RPA) for the bulk fluid pair direct correlation function. We explain why using standard mean-field DFT to describe inhomogeneous fluid structure and thermodynamics is more accurate than one might expect based on this observation. By considering the pair correlation function g(x) and structure factor S(k) of a one-dimensional model fluid, for which exact results are available, we show that the mean-field DFT, employed within the test-particle procedure, yields results much superior to those from the RPA closure of the bulk Ornstein-Zernike equation. We argue that one should not judge the quality of a DFT based solely on the approximation it generates for the bulk pair direct correlation function.

§ INTRODUCTION
Classical density functional theory (DFT)<cit.> is based on the idea that the thermodynamic grand potential of an inhomogeneous fluid can be expressed as a functional of the average one-body density profile ρ(). Minimizing an (approximate) functional with respect to ρ() provides an estimate of the equilibrium density profile and the grand potential. DFT has proved to be a versatile tool for determining the thermodynamic quantities arising in the physics of adsorption and fluid interfaces. These include, for example, interfacial tensions<cit.> and the solvation force (excess pressure) for confined fluids.<cit.> Since DFT provides directly the grand potential it is particularly well-suited to investigations of surface phase behaviour and perhaps it is here where DFT has had most success, revealing new phenomena and new physical insight.<cit.> DFT also provides a direct measure of density fluctuations in the inhomogeneous fluid via the local compressibility (∂ρ()/∂μ), where μ is the chemical potential, and recent papers have investigated this quantity for liquids at solvophobic planar substrates<cit.> and confined between a variety of nanostructures.<cit.> In addition to the one-body density, higher order correlation functions can be obtained by taking further functional derivatives of the free energy functional. In particular, the two-body pair direct correlation function c^(2)(,') is proportional to the second functional derivative of the excess Helmholtz free energy with respect to the density.<cit.> It is tempting to assess the quality of an approximate DFT for a given model fluid by comparing the resulting c^(2)(r) for a homogeneous fluid with that obtained from computer simulation of the radial distribution function g(r) or structure factor S(k) or from integral equation theories; see Refs. evans1992density, lutsko, hansen2013theory and references therein.
In this paper we explain why this might not be the best means of testing the accuracy of a free energy functional. We focus on the much-used excess Helmholtz free energy functional [Eq. (<ref>) below] that treats the attractive inter-particle interactions in a mean-field (MF) or van der Waals approximation. This standard MF DFT is considered the `work-horse' for applications of DFT to interfaces, adsorption and many other inhomogeneous situations, see e.g. Refs. evans1992density, lowen2002density, lutsko, hansen2013theory, evans1987phase, stewart2014layering, evans2015local, chacko2017solvent. Taking two derivatives of this functional and evaluating for a uniform density ρ_b, corresponding to the (bulk) fluid, leads to the pair direct correlation function c^(2)_RPA(r) that forms the basis for the well-known random phase approximation (RPA).<cit.> Recognizing this connection between the MF DFT and the RPA, one might assume that the fluid structure, thermodynamics and phase behaviour predicted by the MF DFT is of similar quality to that resulting from the RPA closure to the bulk Ornstein-Zernike (OZ) equation. Here, we argue: (i) Results from the MF DFT are more accurate than one might expect from examining bulk pair correlation functions extracted from the RPA closure to the OZ equation and (ii) one should not judge the quality of the MF DFT, or any other approximate DFT, solely upon what the approximate functional generates by taking two functional derivatives. We make these arguments explicit by considering the test-particle limit of Percus,<cit.> i.e. we calculate the inhomogeneous fluid density profile ρ(r) around a fixed particle of the same type as the surrounding fluid. This enables us to calculate the fluid radial distribution function g(r)=ρ(r)/ρ_b. By rewriting the Euler-Lagrange equation obtained from minimizing the MF DFT in this test particle limit and comparing with the OZ equation, we identify additional terms somewhat akin to a hybrid closure relation and certainly different from the RPA. The additional terms also suggest that the MF DFT, treated in the test particle limit, is much superior to standard OZ with the RPA closure. We argue that this observation should carry through more generally for arbitrary external potentials, not just in the test-particle limit.

Our paper proceeds as follows: In Sec. <ref> we describe briefly the RPA in the context of the OZ equation. In Sec. <ref> we describe the MF DFT that generates the RPA and indicate why one might expect this DFT to yield much better results for g(r), within the test-particle procedure, than in the standard RPA treatment. In Sec. <ref> we illustrate and confirm our arguments by presenting results for g(x) and S(k) for a model one-dimensional (1D) fluid where the pair correlation functions are known exactly and where the MF DFT can be implemented with the exact reference free energy functional, i.e. that pertaining to hard-rods. Finally, in Sec. <ref> we discuss the general implications of our results.
We also explain that for the same choice of reference system (hard rods) the MF DFT results are identical to those one would obtain from the local molecular field theory (LMF) of Weeks and co-workers<cit.> for this particular model fluid.§ INTEGRAL EQUATION APPROACH: OZ EQUATION AND RPA The two-body pair direct correlation function, c^(2)(r), is usually defined via the OZ equation,<cit.> which for a uniform and isotropic fluid ish(r) = c^(2)(r) + ρ_b ∫' c^(2)(|-'|) h(r')where h(r) is the total correlation function and ρ_b is the density of the (bulk) fluid. The OZ approach to calculating h(r) is to split the correlations present in h(r) into a direct part, which describes the `direct' correlations acting over a range of order that of the interaction pair potential, and an `indirect' part, i.e. the remainder described by the convolution integral. Note that the total correlation function h(r) = g(r) - 1, where g(r) is the radial distribution function. Fourier transforming Eq. (<ref>) yields an algebraic relation:ĥ(k) = ĉ(k)/1-ρ_bĉ(k),where ĥ(k) and ĉ(k) are the Fourier transforms of h(r) and c^(2)(r), respectively. ĥ(k) is closely related to the static structure factor<cit.> S(k) = 1 + ρ_bĥ(k). Consider a fluid composed of particles interacting via the pair potential v(r). To calculate h(r) one must supplement the OZ Eq. (<ref>) with a further, closure relation between c^(2)(r) and h(r). The exact closure relation is usually expressed as<cit.> c^(2)(r)= h(r) - ln(h(r)+1) - β v(r) + B(r)where β =(k_B T)^-1, k_B is Boltzmann's constant, T is the temperature and B(r) is termed the bridge function. B(r) is not known exactly for any 3D fluid. In order to make progress approximations must be made. For example, the hypernetted-chain approximation (HNC) corresponds to B(r) = 0.<cit.> Formally B(r) is the sum of the bridge or elemental diagrams missing in HNC. Suppose v(r) can be split as follows:v(r)=v_0(r)+v_1(r),where v_0(r) is a suitably chosen reference potential, usually the purely repulsive part of v(r). Then the remainder v_1(r) usually incorporates the attractive part of the interaction between particles. The simple closure relationc_RPA^(2)(r)≡ c_0^(2)(r) -β v_1(r),where c_0^(2)(r) is the pair direct correlation function for the (purely repulsive) reference system with the same density ρ_b, defines the RPA. Note that Eq. (<ref>) enforces the correct asymptotic behaviour: c^(2)(r)∼-β v(r), r→∞ for a fluid away from its critical point. Inserting the Fourier transform of (<ref>) into (<ref>) yields the standard RPA result for the structure factor:<cit.> S_RPA(k) = S_0(k)/1 + ρ_bβv̂_1(k)S_0(k)where S_0(k) is the structure factor of the reference system. The Fourier transform v̂_1(k) is assumed to exist. Often the further approximation c_0^(2)(r)≈ c_HS^(2)(r) is made, where c_HS^(2)(r) is the pair direct correlation function for a hard-sphere (HS) fluid at the same density with suitably chosen effective particle diameter σ.<cit.> Recall that accurate expressions for the reference c_HS^(2)(r) exist and for a fluid of 1D hard-rods c_HS^(2)(r) is known exactly (see below). The RPA closure relation in Eq. (<ref>) has been used extensively in the theory of simple and complex liquids.<cit.> The reliability of the corresponding OZ result for the structure factor (<ref>) depends on the particular model system and the choice of reference potential.§ THE MEAN-FIELD DFT AND THE RPA§.§ The non-uniform fluid Consider now the fluid composed of particles interacting via the pair potential v(r), split as in Eq. 
(<ref>) and subject to an external potential V(). The corresponding one-body density is ρ(). Suppose too that we have an accurate DFT for the reference system, indicated by the subscript “0”, i.e. with particles interacting via the potential v_0(r). The intrinsic Helmholtz free energy functional can be written as:<cit.> F[ρ()]= F_0 [ρ()]+ 1/2∫_0^1 dλ∫∫'ρ_λ^(2)(,')v_1(|-'|), where F_0[ρ] is the corresponding functional for the reference system. This exact expression is obtained from a thermodynamic integration `turning on' the potential v_1(r) between the particles via the integration parameter λ. The two-body density distribution function ρ_λ^(2)(,') is that for the system with interaction potential v_λ(r)=v_0(r)+λ v_1(r), 0≤λ≤1. In deriving Eq. (<ref>) one must impose an external potential, varying with λ, that ensures the equilibrium one-body density remains ρ() at each value of λ.<cit.> The standard MF DFT approximation is obtained by assuming that (i) ρ_λ^(2)(,') does not change much as λ is varied from 0 to 1 and, more drastically, (ii) ρ_λ^(2)(,')≈ρ()ρ('), 0≤λ≤1. Eq. (<ref>) clearly constitutes a mean-field treatment of the `perturbation' λ v_1(r). It follows that the MF DFT approximation for the excess (over ideal) free energy functional[The correlations neglected in (<ref>) are incorporated into the functional F_corr[ρ] defined in Eq. (3.4.11) of Ref. hansen2013theory.] is: F^ex[ρ()]≈ F_0^ex[ρ()]+1/2∫∫'ρ()ρ(')v_1(|-'|). Within DFT<cit.> two functional derivatives of -β F^ex[ρ] with respect to the density yield the pair direct correlation function. From Eq. (<ref>), and evaluating for a uniform (bulk) fluid ρ()=ρ_b, we obtain the RPA approximation in Eq. (<ref>), since c_0^(2)(|-'|)=-δ^2 β F_0^ex[ρ]/δρ()δρ(')|_ρ()=ρ_b. The approximations inherent in Eq. (<ref>) imply correlations are omitted and so one must be sceptical about the accuracy of the MF functional (<ref>). Recognising that taking two functional derivatives of (<ref>) yields the RPA (<ref>, <ref>) for bulk correlation functions provides some useful insight into the status of the MF DFT. It is tempting then to argue that employing the MF DFT (<ref>) should lead to results with similar accuracy to those obtained from the RPA for bulk liquids. However, this argument is at best misleading. In practical applications of (<ref>), or any other DFT approximation, one works at the one-body level which requires only a single functional derivative. We explain and illustrate this below within the context of the test particle procedure for calculating g(r).

§.§ The Percus test particle procedure
Percus proved<cit.> that one can determine the radial distribution function g(r) by calculating the density profile ρ()=ρ(r) around a fixed particle that exerts an external potential V()≡ v(r) on the fluid. Then the radial distribution function g(r)=ρ(r)/ρ_b. Within DFT, ρ(r) is obtained by minimising the grand potential functional Ω[ρ]=F[ρ]-∫[μ-V()]ρ(), where μ is the chemical potential. Using (<ref>), the resulting Euler-Lagrange equation is δΩ[ρ]/δρ=k_BT ln[Λ^3ρ(r)] +δ F_0^ex[ρ]/δρ +∫'ρ(r')v_1(|-'|)+v(r)-μ=0, where Λ is the (irrelevant) thermal de-Broglie wavelength. For r→∞, away from the fixed test-particle, the density ρ(r)→ρ_b, so within the approximation Eq.
(<ref>) we obtain the following relation between the chemical potential μ and the bulk density ρ_b: μ=k_BTln[Λ^3ρ_b] +δ F_0^ex[ρ]/δρ|_ρ_b+ρ_b∫ v_1(r). We make a functional Taylor expansion about the bulk density: δ F_0^ex[ρ]/δρ=δ F_0^ex[ρ]/δρ|_ρ_b +∫'(ρ(')-ρ_b)δ^2F_0^ex[ρ]/δρ()δρ(')|_ρ_b +H_0[ρ()], where H_0[ρ()] denotes all higher order terms; these are ∼ O([ρ-ρ_b]^2) and higher. From Eqs. (<ref>)–(<ref>), we obtain: 0=k_BTln(ρ(r)/ρ_b)+∫'(ρ(r')-ρ_b)[-k_BTc_0^(2)(|-'|)+v_1(|-'|)]+H_0[ρ(r)]+v(r), which eliminates μ. Multiplying through by -β and adding (ρ(r)-ρ_b)/ρ_b to both sides we obtain: (ρ(r)-ρ_b)/ρ_b=(ρ(r)-ρ_b)/ρ_b-ln(ρ(r)/ρ_b)-β v(r)-β H_0[ρ(r)] +ρ_b∫'(ρ(r')-ρ_b)/ρ_b[c_0^(2)(|-'|)-β v_1(|-'|)]. This is the equation for the density profile, equivalent to g(r), in the test particle treatment of Percus, as determined by the MF DFT in Eq. (<ref>). If we set v_1(r)=0, then we obtain the following equation for the total correlation function h_0(r)=g_0(r)-1 of the reference system: h_0(r)=h_0(r)-ln(h_0(r)+1)-β v_0(r)-β H_0[ρ_bg_0(r)]+ρ_b∫'h_0(r')c_0^(2)(|-'|). Suppose we know the exact functional F_0[ρ], and therefore c_0^(2)(r); then comparison with Eq. (<ref>) and use of the OZ equation (<ref>) allows us to identify -β H_0[ρ_bg_0(r)] as the exact bridge-function B_0(r) of the reference system.

Returning to the full system, we see that Eq. (<ref>) is an Ornstein-Zernike-like equation [see Eq. (<ref>)] with the RPA closure (<ref>) for the pair direct correlation function c^(2)(r) inside the convolution integral, but with a different closure approximation for c^(2)(r) [see Eq. (<ref>)] appearing outside. Specifically, the sum of the first four terms on the right-hand side of (<ref>) corresponds formally to the exact expression for c^(2)(r) but with the bridge function B(r) replaced by B_0(r).

The form of Eq. (<ref>), which follows from the standard MF DFT functional (<ref>), suggests that calculating the radial distribution function g(r) via the test-particle route might yield results better than those given by g_RPA(r), obtained by solving the OZ equation together with the RPA closure (<ref>), i.e. by Fourier inverting the RPA structure factor (<ref>). This is evident for a fluid in which the pair potential v(r) has a hard-core of diameter σ since solving the Euler-Lagrange equation (<ref>) guarantees the exact core condition ρ(r) = 0, r < σ is satisfied, which is, of course, not the case for g_RPA(r), given by (<ref>). In the latter g_RPA(r) is not identically zero inside the hard-core. We note that the application of any reasonable non-local DFT in the test particle procedure enforces the core condition. In the footnote[There are, of course, other theories that enforce the hard-core condition on g(r). The well-known Mean-Spherical Approximation (MSA)<cit.> enforces this and sets c^(2)(r) = -β v(r) outside the core. Perhaps more pertinent to our present discussion is the Optimized RPA (ORPA).<cit.> The ORPA invokes the RPA closure (<ref>) but, in addition, seeks to vary the perturbation potential v_1(r) inside the hard core, where this is not uniquely defined, so that g(r)=0. This constraint is equivalent to requiring the functional derivative of the RPA free energy with respect to v_1(r) to be zero inside the hard core. In our present approach, there is no attempt to minimize the free energy with respect to the perturbation potential, so for r>σ, g(r) depends weakly on the choice of v_1(r) inside the core.
There is no reason to expect the MF DFT to perform as well as the ORPA – at least for a Lennard-Jones type fluid at high densities where the ORPA is known to be very accurate.<cit.> For completeness, we also mention the Reference HNC (RHNC), see e.g. Ref. hansen2013theory, which improves upon the HNC integral equation by approximating the bridge function by that of a reference system, usually hard-spheres. Although the integral equation (<ref>) that emerges from our MF DFT shares a feature of the RHNC, in that B(r) is replaced by B_0(r), this is implemented only in one part of the right hand side. There is no reason to expect the MF DFT to be as accurate as the RHNC.] we mention briefly relationships to other theories of liquids.

This observation concerning the test-particle procedure has repercussions for more general external potentials V(). Solving the corresponding Euler-Lagrange equation, based on the seemingly crude MF functional (<ref>), yields equilibrium density profiles ρ() that are often very accurate – see Refs. evans1992density, hansen2013theory and references therein. Investigation of hard/impenetrable potentials is once again illuminating. For a planar hard wall, with V(z) =∞ for z<0, the density profile satisfies ρ(z) =0, z<0. Moreover, for a sensible choice of a (non-local) DFT for the reference system the profile will satisfy<cit.> the wall contact sum-rule: k_BTρ(0^+)=p(ρ_b). The right hand side is the pressure of the bulk fluid, far from the hard wall, obtained from the bulk free energy F[ρ_b] with ρ_b=ρ(∞).

In the following Sec. <ref> we compare the results for g_RPA(x) and S_RPA(k) (<ref>) with those obtained from standard MF DFT (<ref>), implemented within the Percus test particle prescription for a 1D fluid. The reference system is the hard-rod fluid for which the free energy functional is known exactly, and therefore its correlation functions and thermodynamics. Moreover, the pair correlation functions and the thermodynamics of the full system for uniform (bulk) densities are also known exactly. By considering this model fluid we can make a stringent examination of some of the basic approximations employed in classical DFT.

§ ILLUSTRATIVE RESULTS FOR A 1D FLUID
In order to illustrate and support the observations made above, we consider a system of 1D hard-particles (rods on a line) with an additional attractive interaction between pairs of neighbouring rods. As mentioned above, this 1D system is chosen because we know the ingredients of the theory exactly and can therefore test carefully the accuracy of the various approximations. For a fluid of 1D rods with only nearest neighbour interactions v(x), one finds the following exact expression for the structure factor<cit.> S(k)= 1 - e^ -β [ μ(p+ik/β) - 2μ(p) +μ(p-ik/β) ] / ( 1 - e^-β[ μ(p+ik/β) - μ(p) ] ) ( 1 - e^-β[ μ(p-ik/β) - μ(p) ]) where p is the pressure and μ(p) is the chemical potential, which are known exactly.<cit.> The above equation along with Eqs. (<ref>)–(<ref>) can be used to obtain the distribution functions of the uniform 1D system.

We consider 1D rods on a line interacting via the pair potential v(x)=v_0(x)+v_1(x), where x is the distance between the centres of a pair of particles – c.f. Eq. (<ref>). v_0(x) is the hard-rod potential v_0(x) = {[ ∞|x| ≤σ; 0 |x| > σ ]. and the attractive tail potential (also considered in Ref. archer2013relationship) is: β v_1(x) = {[ 0|x| ≤σ; -z_p(σ +σ_p - |x| )σ < |x| ≤σ+σ_p; 0|x| > σ+σ_p.
]. The above potential is the 1D analogue of the Asakura-Oosawa potential for the effective colloid-colloid potential of hard-sphere colloids of diameter (length) σ mixed with ideal polymers: σ_p is the `length' of polymer coils and z_p is the fugacity of ideal polymers.<cit.>

For the hard-rods reference system “0” Percus<cit.> derived an exact expression for the excess Helmholtz free energy functional F_0^ex[ρ]≡ F_HR^ex[ ρ ]= - ∫ρ(x) ln[ 1 - t(x) ]dx, where the weighted density is t(x) = ∫_x-σ^x ρ(x') dx'. Taking the functional derivative of (<ref>) and adding the contribution from the ideal-gas free energy F^id[ρ], we obtain δ F_0[ρ]/δρ = lnΛρ(x)/1-t(x) +∫_x^x+σρ(x')/1-t(x')dx'. By differentiating again and using (<ref>) the exact direct correlation function of a uniform fluid of hard-rods of length σ and density ρ_b is derived: c^(2)_HR(|x-x'|) = -Θ(σ-|x-x'|)1-ρ_b|x-x'|/(1-ρ_bσ)^2, where Θ is the Heaviside step function. This can be used to construct the RPA approximation (<ref>) for the 1D system.

Comparison of the exact solution (solid line) with DFT results [calculated using the test particle equation (<ref>) with (<ref>) and v(x) (dotted)] and results for g_RPA(x) and S_RPA(k) from the RPA approximation to the OZ equation (<ref>) with v_1(x) (dashed), is shown in Figs. <ref>, <ref> and <ref>.[Note that for λ =1, S_MFλ(k), introduced in Eq. (41) of Ref. archer2013relationship, is identical to S_RPA(k) defined here.] In Fig. <ref> we fix the bulk density ρ_bσ = 0.5 and the attraction range σ_p=0.9σ while varying the attraction strength parameter z_p. In Fig. <ref> we fix the attraction strength σ z_p =1 and range σ_p=0.9σ, while varying the bulk density ρ_bσ. Finally, in Fig. <ref> we fix the attraction strength σ z_p =2 and the bulk density ρ_bσ=0.5 while varying the attraction range σ_p.

As expected, g_RPA(x) fails to satisfy the core condition g(x) = 0 when x<σ, whereas the DFT enforces this. Outside the core of the hard-rod, both the DFT and RPA differ from the exact solution. However, the DFT results for g(x) are much closer to the exact solution. As we increase the attraction strength or the density, both the DFT and RPA results deviate increasingly from the exact solution. g_RPA(x) displays only weakly damped oscillations. The same type of pattern is also observed in the sequence of structure factors displayed in Figs. <ref>(b) and <ref>(b). The RPA greatly overestimates the height of the first peak in S(k) as z_pσ (attractive well-depth) or ρ_bσ are increased. It is important to note that when these parameters are increased beyond the values considered here, S_RPA(k) diverges at the first peak.<cit.> Note that for a given choice of pair potential the number density at which the divergence occurs is identical within both the RPA and MF DFT since the linearized form of both theories is identical; only the terms that are non-linear in h(x) are different. However, the height of the first peak in the structure factor obtained from the MF DFT is always lower and closer to the exact value. It is also noteworthy that both the RPA and MF DFT capture well the behaviour in S(k) at larger kσ. The results in Fig. <ref> show that for fixed attraction strength z_p both the MF DFT and the RPA become less accurate as the range of the potential σ_p increases. In particular the height of the first peak in S(k) is overestimated.
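The comparisons just described are straightforward to reproduce numerically. The sketch below is our own illustrative Python, not code from the paper: the grid spacing, mixing parameter, iteration count and edge handling are pragmatic assumptions, energies are in units of k_BT, and no convergence safeguards are included. It solves the MF DFT Euler-Lagrange equation of Sec. <ref> for the test-particle profile by Picard iteration using the Percus functional, and also evaluates S_RPA(k) from Eq. (<ref>) with the exact hard-rod c^(2)_HR:

```python
import numpy as np

# Illustrative parameter choices (ours); the rod length sigma sets the unit.
sigma, sigma_p, z_p, rho_b, mix = 1.0, 0.9, 1.0, 0.5, 0.05
dx = 0.005
x = np.arange(-12.0, 12.0 + dx, dx)
m = int(round(sigma / dx))
box = np.ones(m)                      # boxcar kernel: integral over one rod length

def beta_v1(r):
    """Attractive tail beta*v_1 of the model, in units of k_B T."""
    r = np.abs(r)
    return np.where((r > sigma) & (r <= sigma + sigma_p),
                    -z_p * (sigma + sigma_p - r), 0.0)

def c1_hr(rho):
    """beta dF_HR^ex/drho = -ln(1 - t(x)) + int_x^{x+sigma} rho(x')/(1-t(x')) dx'."""
    half = m // 2
    t = np.roll(np.convolve(rho, box, mode="same") * dx, half)   # t(x)
    t[:half] = rho_b * sigma                                     # bulk value at edge
    s = np.roll(np.convolve(rho / (1.0 - t), box, mode="same") * dx, -half)
    s[-half:] = rho_b * sigma / (1.0 - rho_b * sigma)
    return -np.log(1.0 - t) + s

mu_ex = -np.log(1.0 - rho_b * sigma) + rho_b * sigma / (1.0 - rho_b * sigma)
v1_kern = beta_v1(x)

# Test-particle route: particle fixed at x = 0; Picard iteration with mixing.
rho = np.where(np.abs(x) < sigma, 0.0, rho_b)
for _ in range(2000):
    mf = np.convolve(rho - rho_b, v1_kern, mode="same") * dx     # mean-field term
    rho_new = rho_b * np.exp(-(c1_hr(rho) - mu_ex) - mf - beta_v1(x))
    rho_new[np.abs(x) < sigma] = 0.0                             # exact core condition
    rho = (1.0 - mix) * rho + mix * rho_new
g_dft = rho / rho_b                                              # g(x) = rho(x)/rho_b

# RPA structure factor with the exact hard-rod c^(2) (cosine transform, 1D).
k = np.linspace(0.05, 30.0, 600)
xc = np.arange(0.0, sigma, dx)
c_hr = -(1.0 - rho_b * xc) / (1.0 - rho_b * sigma) ** 2
c_hat = 2.0 * np.array([np.trapz(c_hr * np.cos(kk * xc), xc) for kk in k])
S0 = 1.0 / (1.0 - rho_b * c_hat)                                 # hard-rod S_0(k)
xt = np.arange(sigma, sigma + sigma_p, dx)
bv1_hat = 2.0 * np.array([np.trapz(beta_v1(xt) * np.cos(kk * xt), xt) for kk in k])
S_rpa = S0 / (1.0 + rho_b * bv1_hat * S0)
```

Plotting g_dft alongside the Fourier inversion of S_rpa reproduces the qualitative behaviour described above: the test-particle DFT profile vanishes identically inside the core, whereas g_RPA(x) does not.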
It appears that the integrated strength of the attraction, measured by the product z_pσ_p, is important in determining under what conditions both approximations are accurate.

§ DISCUSSION
In order to assess the physical content, and hence the usefulness, of an approximation for the excess free energy functional, it is not sufficient to take two functional derivatives, compute the bulk fluid pair direct correlation function c^(2)(r), and then enquire how well this quantity performs when used to calculate the pair correlation function h(r) via the OZ route, Eqs. (<ref>)-(<ref>). In most practical applications of DFT one is concerned with solving the Euler-Lagrange equation for the one-body density profiles and calculating the associated thermodynamic quantities (free energies) that result from minimizing a given (approximate) grand potential functional. This requires taking only a single functional derivative–not two. We have illustrated this point of view by focusing on the standard MF DFT, defined by (<ref>), which treats the attractive part of the inter-particle potential via a simple MF approximation. Working in 1D and computing the density profile induced by a test particle exerting the potential (<ref>, <ref>), we determined g(x) and S(k) from the MF DFT. These were compared with the exact results and with those obtained using (<ref>) with the OZ equation. The latter corresponds to the usual RPA, given by (<ref>). That the test particle route yields more accurate structure, within a DFT treatment, is not surprising. What is significant is that the MF DFT applied in the test particle situation performs much better for strong inter-particle attraction and for higher densities than the standard RPA. This is evident in Figs. <ref>–<ref>.

That it is generally more appropriate to assess the performance of an approximate functional at the one-body rather than at the two-body level is known to the DFT community. The former requires only a single functional derivative with respect to density whereas the latter requires two. Naturally errors build up as further derivatives are taken. Here we are concentrating upon the efficacy of the particular functional (<ref>). Why might the results, from what appears to be a crude approximation, be much better at the one-body density and free energy level than one might expect? We give three separate arguments: (i) as indicated in Sec. <ref>, the Euler-Lagrange equation (<ref>) for ρ(r), when viewed as an integral equation for g(r), implies a more sophisticated closure approximation than the RPA (<ref>) implemented directly in the OZ equation (<ref>), which leads to the RPA expression (<ref>). (ii) Oettel <cit.> discusses (<ref>) in the context of a powerful and rather general reference functional approach for constructing approximate free energy functionals. By invoking the assumption that the bridge functional for the full system is well-approximated by that of the reference system and by considering expansions about the bulk density he argues that the reference functional approach predicts roughly MF behaviour for the density deviations (from bulk). He concludes that for adsorption problems, such as wetting and drying, a description based on the MF DFT (<ref>), with an accurate reference functional F_0[ρ], should capture all the essential physical features.
Oettel<cit.> also emphasizes that (<ref>) has the advantage, over the more sophisticated reference functional approach, of satisfying identically the Gibbs adsorption equation and the wall-contact sum rule. (iii) Weeks and co-workers<cit.> introduced a local molecular field theory (LMF) that has proved to be highly successful in describing the structure and thermodynamics of a variety of non-uniform liquids. The derivation<cit.> of the LMF equation for the effective reference field ϕ_R() starts with the Yvon-Born-Green equation and uses insightful arguments about the form of the conditional singlet densities ρ(|') in the full and reference (mimic) systems. It does not employ concepts from DFT. Although LMF operates at the one-body level, like MF DFT, at first sight there does not appear to be a direct connection between the two approaches. This is not the case. Archer and Evans<cit.> showed that the LMF equation follows directly from the standard mean-field treatment of attractive interactions as embodied in MF DFT (<ref>) and if one has access to the exact functional F_0[ρ] for the same reference system the two theories are equivalent. We note that the derivation of the LMF equation and the relation to DFT is also discussed in the Supporting Information in a recent paper on solvation free energies.<cit.> In the 1D system described in Sec. <ref> the free energy functional of the reference (hard-rod) fluid is known exactly; it is given by the Percus result (<ref>). It follows that our present results for g(x) obtained using MF DFT and the test particle route are identical to those that would emerge from LMF using hard-rods as the reference system. More generally, in three dimensions, LMF with a hard-sphere reference system would lead to the same g(r) as MF DFT using the test particle route and a very accurate hard-sphere functional for F_0[ρ]. Given the success of LMF for a wide variety of fluids, one might argue, albeit circuitously, that the physical arguments and plausible approximations that lead to LMF<cit.> provide an alternative justification as to why MF DFT might perform better than one might expect. Of course, there is a caveat. The justification for LMF relies upon the judicious choice of reference fluid, described by v_0(r), so that the LMF equation used to treat v_1(r), the longer ranged part of the pair potential, captures the essential physics for a given model fluid.<cit.> Although the MF DFT treatment of v_1(r) is formally equivalent to that of the LMF, the limitation and drawback of the former is finding an accurate free energy functional for the reference fluid.<cit.>

As a final note of caution on assessing the quality of a DFT on the basis of what two derivatives of the free energy functional yield for the (bulk) pair direct correlation function, one should also recall the following functional: F^ex[ρ()]≈ F^ex[ρ_b]+μ_ex∫(ρ()-ρ_b) -1/(2β)∫∫'(ρ()-ρ_b)(ρ(')-ρ_b)c(|-'|). This is the well-known Ramakrishnan-Yussouff functional,<cit.> constructed to yield the `exact' pair direct correlation function, c(r)=c^(2)_exact(r), with c^(2)_exact(r) calculated at the relevant bulk density ρ_b.[Note that minimizing the grand potential corresponding to (<ref>) within the test particle procedure leads to the HNC closure for the uniform fluid, provided the direct correlation function is determined self-consistently using the OZ equation.<cit.>] However, the functional (<ref>) has significant weaknesses.
For example, it is unable to describe wetting or drying phenomena<cit.> at substrates, nor critical adsorption,<cit.> owing to the fact that it is only quadratic in the density deviation.

§ ACKNOWLEDGEMENTS
B.C. is supported by an EPSRC studentship and R.E. by a Leverhulme Emeritus Fellowship: EM-2016-031.

[Evans(1979)] R. Evans, Adv. Phys. 28, 143 (1979).
[Evans(1992)] R. Evans, in Fundamentals of Inhomogeneous Fluids, edited by D. Henderson (Marcel Dekker, New York, 1992), Chap. 3, pp. 85–176.
[Löwen(2002)] H. Löwen, J. Phys.: Condens. Matter 14, 11897 (2002).
[Lutsko(2010)] J. F. Lutsko, Adv. Chem. Phys. 144, 1 (2010).
[Hansen and McDonald(2013)] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids: With Applications to Soft Matter, 4th ed. (Elsevier, 2013).
[Evans and Marconi(1987)] R. Evans and U. M. B. Marconi, J. Chem. Phys. 86, 7138 (1987).
[Stewart and Evans(2014)] M. C. Stewart and R. Evans, J. Chem. Phys. 140, 134704 (2014).
[Evans and Stewart(2015)] R. Evans and M. C. Stewart, J. Phys.: Condens. Matter 27, 194111 (2015).
[Chacko et al.(2017)] B. Chacko, R. Evans, and A. J. Archer, J. Chem. Phys. 146, 124703 (2017).
[Likos(2001)] C. N. Likos, Phys. Rep. 348, 267 (2001).
[Barker and Henderson(1976)] J. A. Barker and D. Henderson, Rev. Mod. Phys. 48, 587 (1976).
[Percus(1962)] J. K. Percus, Phys. Rev. Lett. 8, 462 (1962).
[Weeks et al.(1997)] J. D. Weeks, K. Vollmayr, and K. Katsov, Physica A 244, 461 (1997).
[Weeks et al.(1998)] J. D. Weeks, K. Katsov, and K. Vollmayr, Phys. Rev. Lett. 81, 4400 (1998).
[Weeks(2002)] J. D. Weeks, Ann. Rev. Phys. Chem. 53, 533 (2002).
[Percus(1982)] J. K. Percus, J. Stat. Phys. 28, 67 (1982).
[Takahashi(1942)] H. Takahashi, Proc. Phys.-Math. Soc. Jpn 24, 60 (1942).
[Brader and Evans(2002)] J. M. Brader and R. Evans, Physica A 306, 287 (2002).
[Archer and Evans(2013)] A. J. Archer and R. Evans, J. Chem. Phys. 138, 014502 (2013).
[Dijkstra et al.(1999)] M. Dijkstra, J. M. Brader, and R. Evans, J. Phys.: Condens. Matter 11, 10079 (1999).
[Percus(1976)] J. K. Percus, J. Stat. Phys. 15, 505 (1976).
[Oettel(2005)] M. Oettel, J. Phys.: Condens. Matter 17, 429 (2005).
[Rodgers and Weeks(2008)] J. M. Rodgers and J. D. Weeks, J. Phys.: Condens. Matter 20, 494206 (2008).
[Remsing et al.(2016)] R. C. Remsing, S. Liu, and J. D. Weeks, Proc. Natl Acad. Sci. USA 113, 2819 (2016).
[Ramakrishnan and Yussouff(1979)] T. V. Ramakrishnan and M. Yussouff, Phys. Rev. B 19, 2775 (1979).
[Evans et al.(1983)] R. Evans, P. Tarazona, and U. M. B. Marconi, Mol. Phys. 50, 993 (1983).
[Evans and Marconi(1986)] R. Evans and U. M. B. Marconi, Phys. Rev. A 34, 3504 (1986).
[Lebowitz and Percus(1966)] J. L. Lebowitz and J. K. Percus, Phys. Rev. 144, 251 (1966).
[Andersen et al.(1972)] H. C. Andersen, D. Chandler, and J. D. Weeks, J. Chem. Phys. 56, 3812 (1972).
http://arxiv.org/abs/1706.08744v1
{ "authors": [ "Andrew J. Archer", "Blesson Chacko", "Robert Evans" ], "categories": [ "cond-mat.soft", "cond-mat.stat-mech" ], "primary_category": "cond-mat.soft", "published": "20170627091952", "title": "The standard mean-field treatment of inter-particle attraction in classical DFT is better than one might expect" }
http://arxiv.org/abs/1706.08511v2
{ "authors": [ "Andreas Crivellin", "Dario Mueller", "Adrian Signer", "Yannick Ulrich" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170626175959", "title": "Correlating Lepton Flavour (Universality) Violation in $B$ Decays with $μ\\to eγ$ using Leptoquarks" }
Institute of Earth Surface Dynamics, Faculty of Geosciences and Environment, University of Lausanne, 1015 Lausanne, Switzerland. Email: [email protected]

This paper deals with the adaptation of a new measure for unsupervised feature selection problems. The proposed measure is based on the space filling concept and is called the coverage measure. This measure was used for judging the quality of an experimental space filling design. In the present work, the coverage measure is adapted for selecting the smallest informative subset of variables by reducing redundancy in data. This paper proposes a simple analogy to apply this measure. It is implemented in a filter algorithm for unsupervised feature selection problems. The proposed filter algorithm is robust with high dimensional data and can be implemented without extra parameters. Further, it is tested with simulated data and real world case studies, including environmental data and a hyperspectral image. Finally, the results are evaluated by using the random forest algorithm.

Keywords: unsupervised feature selection, coverage measure, space filling, random forest, machine learning

§ INTRODUCTION
In recent years, the techniques for collecting environmental data (such as wind speed, permafrost, rainfall, pollution, …) have improved. Moreover, environmental phenomena are mostly non-linear and multivariate, and in many cases they are studied in high dimensional feature spaces <cit.>. Usually, the input space is constructed by considering available information and expert knowledge. The empirically designed input feature space can rapidly reach a high dimension. In addition to the original features, there is always redundancy in the input data. In fact, the data points are not uniformly distributed in the experimental domain in which the data are embedded. In other words, the data space is not well filled or covered in the presence of redundancy. Consequently, the modelling of these data can take a long time when all features are introduced. Such problems are known as the curse of dimensionality.

To overcome this issue, feature selection (FS) algorithms play an important role in data driven modelling. Therefore, numerous methods and measures for FS have been proposed <cit.>. The main purpose is to retain only features that bring new and relevant information by reducing the existing redundancy in data. This procedure helps to manage the curse of dimensionality. In fact, it improves the accuracy of modelling, speeds up the learning process, and offers a good interpretation of the results.

The literature of machine learning distinguishes two well-known techniques of FS, according to the availability of the output variable: supervised and unsupervised feature selection <cit.>. These techniques try to find the smallest informative subset of features with respect to a defined measure or criterion. Other methods are available, such as feature ranking <cit.>, which consists in giving an order to features regarding their importance. Then, a learning process usually follows these methods to choose how many features are selected. Several measures and criteria are used for selecting the smallest subset of features: measures based on entropy <cit.>, fractal dimension <cit.>, intrinsic dimension <cit.>, and also on distance <cit.>.

In the unsupervised methods, the goal is mainly to carry out an exploratory analysis and to improve the discovery of hidden patterns.
Therefore, the techniques of unsupervised feature selection (UFS) <cit.> do not require prior information (an output variable). They try to minimise the existing redundancy, which leads to a reduction of the dimensionality of the data. Further, UFS techniques improve the understanding, the visualisation, and the interpretation of the results. In short, the dimensionality reduction consists in choosing a subset of features that contain new and relevant information about the data.

This paper is an adaptation of a new measure based on the space filling concept, called the coverage measure. It was mainly used in experimental designs <cit.>. Moreover, the proposed measure was also used for the construction of spatial coverage designs in <cit.>, which proposes its implementation in Splus. Another implementation for spatial coverage is available in an R library proposed in <cit.>, and a further R library provides an implementation of this measure in the context of space filling designs <cit.>. The coverage measure is adapted here for UFS problems. It can be implemented within any search technique, such as exhaustive search <cit.>, sequential forward selection (SFS) <cit.>, and sequential backward selection (SBS) <cit.>. In this work, it is considered with an SFS technique.

The proposed measure computes how well the space is covered by the data points. In fact, it quantifies the uniformity of points in a hypercube by comparing the repartition of points to a regular grid <cit.>. The smallest coverage value means that the hypercube is well filled. Intuitively, the coverage measure gives a value of zero, or near zero, if the data points are distributed as, or close to, a regular grid in the data space. The analogy is quite simple and clear: the selected features have to fill uniformly the space in which the data are embedded. In fact, the repartition of the points expresses the amount of information they convey. Therefore, the smallest value of the coverage means that the variables cover well the space in which they are embedded. Moreover, the selected variables should contain new and relevant information about the data. A filter algorithm is used to implement the coverage measure. It is applied on a simulated dataset and on several well-known benchmark datasets used for feature selection purposes. In addition, real environmental data are used as well. Further, the algorithm is tested with different scenarios of noise injection and data shuffling. Then, the results are verified and evaluated with the random forest algorithm <cit.> by using a consistent methodology.

The remainder of this paper is organized as follows. Section 2 explains the coverage measure and its use in experimental designs. Section 3 presents the implementation of this measure for UFS problems and introduces the corresponding filter algorithm. In section 4, the measure is evaluated on several datasets. In the last section, the conclusion is given together with future developments.

§ DEFINITION AND BASIC NOTIONS

Design and modelling of experiments have always been a fundamental approach over the years. The experimenter has to propose and choose the suitable factor space (i.e. experimental domain) for the experiment under study. The most important early step is to check the coverage, or uniformity, of the proposed design. There are many ways to select the best design according to several conditions and criteria <cit.>. Numerous space filling designs have been proposed under some prior properties.
They can be constructed by using algebraic methods, based on incomplete block resolvable designs <cit.> or on association schemes <cit.>. Construction algorithms were considered in <cit.> as well. Other high quality designs, based on the space filling concept, were proposed in <cit.>. Furthermore, different measures for choosing the best design have been given in <cit.>. In the literature on sampling methods, one strategy is to randomly generate different designs; a comparison is then carried out using a defined measure to find the best design. Another approach is based on the extension of an existing design: the objective is to add more points to the sampling design while taking into account the previously defined measure. Another strategy for choosing the best design is to adopt an optimality criterion, such as:

* The entropy criterion <cit.>, which has been widely used in coding theory and statistics. The Shannon entropy measures the amount of information contained in the distribution of a set of points. In <cit.> it is described as the classical idea of the amount of information in an experiment. Moreover, it is proposed with a linear model (a simple Kriging model), together with the corresponding maximum entropy designs.

* The integrated mean squared error <cit.>, which is computationally more demanding and needs a powerful optimisation algorithm due to the large combinatorial design space. This criterion can be replaced by the maximum mean squared error, which involves a multidimensional optimisation <cit.>.

* Minimax or maximin distance criteria, proposed in <cit.>, which measure how well the experimental points are distributed through the experimental domain. A minimax design minimises the maximum distance between points, whereas a maximin design maximises the minimum inter-site distance. Well-known maximin designs are the Plackett-Burman designs, where the number of points is n = 4m + 1, with m a positive integer representing the number of factors.

Besides, several uniformity measures have been proposed in <cit.>. The best known is the discrepancy. Numerous kinds of discrepancies have been defined, such as the star discrepancy, the centred L_2-discrepancy, and the wrap-around L_2-discrepancy. These uniformity criteria are based on the Kolmogorov-Smirnov test; in fact, they compare the design to a uniform distribution. In addition to the discrepancy, the coverage measure was also proposed to quantify uniformity. In contrast to the discrepancy, the coverage measure compares the proposed design to a regular grid. Furthermore, the coverage measure is more stable than the discrepancy for high dimensional designs. Therefore, it can be applied to high dimensional data.

§.§ Coverage measure

Let X = { x^1, …, x^n } ⊂ [0, 1]^d be a sequence of n points of dimension d. The coverage measure is defined as follows:

λ = (1/ϑ̅) ( (1/n) ∑^n_i=1 (ϑ_i − ϑ̅)² )^{1/2}

where ϑ_i = min_k ( dist(x^i, x^k) ) is the minimal distance between x^i and the other points of the sequence, ϑ̅ = (1/n) ∑^n_i=1 ϑ_i is the mean of the ϑ_i, and dist is the Euclidean distance. If the data points are distributed as a regular grid, then ϑ_1 = ϑ_2 = … = ϑ_n = ϑ̅ and hence λ = 0.

The coverage quality of the points can thus be detected by using the minimum Euclidean distance between the points. Further, it takes into account the dispersion of distances.
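To make the definition concrete, the coverage measure can be sketched in a few lines of Python. This is a minimal illustration, not the authors' R/MATLAB implementation; the function name coverage_measure and the use of SciPy's pairwise distances are our own choices, and the data are assumed to be rescaled to the unit hypercube.

```python
import numpy as np
from scipy.spatial.distance import cdist

def coverage_measure(X):
    """Coverage measure lambda of a point set X (n x d), as defined above.

    For each point, compute the Euclidean distance to its nearest
    neighbour (vartheta_i), then return the coefficient of variation of
    these distances. A value near zero means the points are close to a
    regular grid, i.e. the space is well filled.
    """
    D = cdist(X, X)                    # pairwise Euclidean distances
    np.fill_diagonal(D, np.inf)        # exclude self-distances
    theta = D.min(axis=1)              # vartheta_i = min_k dist(x^i, x^k)
    return theta.std() / theta.mean()  # lambda = std(theta) / mean(theta)
```

Note that NumPy's default standard deviation uses the 1/n normalisation, matching the definition of λ given above.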
In fact, the coverage measure λ involves the coefficient of variation of the ϑ_i, also known as the relative standard deviation (the ratio of the standard deviation to the mean of the ϑ_i). The smaller the value of λ, the closer the distribution of points is to a regular grid. In that case, the design is nearly a regular grid, and the best design is the one with the smallest coverage value λ. Figure <ref> shows the capability of the coverage measure to compute and quantify the filling of a space. Therefore, the use of such a measure helps to find the best experimental design with respect to the distribution of points. This point of view motivates the adaptation of this measure to unsupervised feature selection problems. Furthermore, the simplicity of this measure allows a straightforward implementation in a filter algorithm for selecting variables. It is important to note that the results given by this measure are sound with respect to the selection of the informative feature subset. In addition, it can make use of parallel CPU and GPU computing to speed up the search procedure.

§ UNSUPERVISED FEATURE SELECTION BASED ON COVERAGE

Numerous techniques exist for implementing redundancy reduction measures. SFS and SBS are the two most commonly used techniques for this purpose. They give acceptable results, comparable to an exhaustive search, in a short time. The proposed measure can be implemented in any search technique; in the remainder of this paper the search technique used is SFS. The implementation of the proposed measure is described in the following proposition. For all subsets of features, the coverage measure is computed (as defined in equation 1). The best subset has the smallest value, regardless of the search technique used.

Since the present work is based on SFS (see algorithm <ref>), the features are added step by step according to the obtained coverage value. Figure <ref> shows clearly that the redundancy is easily detected by this measure, whether it is linear or non-linear. Besides, UFS using the UfsCov algorithm takes into account the multivariate interactions between selected features. In addition, the algorithm needs neither extra parameters nor a fixed threshold: the best subset is the one that gives the smallest coverage measure. Finally, the algorithm can be programmed easily in R and MATLAB.

§ EXPERIMENTAL CASE STUDIES

The simulated and real world datasets presented in this section are commonly used in several papers on machine learning and feature selection. Moreover, several scenarios of noise injection and data shuffling are proposed to evaluate and explore the limitations of the algorithm. Further, this section discusses the quality of the obtained results. Finally, the results are verified and evaluated by using the random forest algorithm.

§.§ Simulated case study

The simulated Butterfly dataset, introduced in <cit.>, is composed of 8 features {X_1, X_2, J_3, J_4, J_5, I_6, I_7, I_8}, of which 3 {X_1, X_2, I_6} are relevant and contain all the information of the dataset. The remaining 5 features are constructed from {X_1, X_2, I_6} with linear and non-linear relations. In fact, these 5 features are redundant and do not bring new information (see J. Golay et al. <cit.>). Figure <ref> shows the results for the Butterfly dataset with different numbers of points N.
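Before turning to the results, the sequential forward selection filter described in the previous section can be organized as follows, building on the coverage_measure sketch given earlier. This is a hypothetical Python sketch of the procedure, not the authors' code; the function name ufscov_sfs and the re-evaluation of all candidate features at each step are our assumptions.

```python
import numpy as np

def ufscov_sfs(X):
    """Sequential forward selection driven by the coverage measure.

    At each step, the feature whose addition yields the smallest
    coverage value is appended to the ranking. The selected subset is
    the prefix of the ranking at which the coverage curve reaches its
    global minimum. X is assumed rescaled to the unit hypercube [0,1]^d.
    """
    n, d = X.shape
    remaining = list(range(d))
    ranking, curve = [], []
    while remaining:
        scores = [coverage_measure(X[:, ranking + [j]]) for j in remaining]
        j_best = remaining[int(np.argmin(scores))]
        ranking.append(j_best)           # greedily add the best feature
        remaining.remove(j_best)
        curve.append(float(min(scores))) # coverage after this addition
    k = int(np.argmin(curve)) + 1        # subset size at minimal coverage
    return ranking[:k], ranking, curve
```

The returned full ranking and coverage curve also make it possible to inspect the ordering of features when the global minimum is ambiguous, as in the strong-noise experiments discussed below.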
The results show that the UfsCov algorithm easily finds the three important features {X_1, X_2, I_6}, regardless of the number of points used to generate the Butterfly dataset. The minimum value of the coverage measure is reached at the correct subset.

§.§ Noise injection

The robustness of the algorithm is evaluated against noise. In fact, several experiments of noise injection were performed for two different scenarios. The first one consists in injecting noise into all features of the Butterfly dataset. The second one consists in corrupting only the redundant features (J_3, J_4, J_5, I_7, I_8). A Gaussian noise is used, with the mean fixed at 0 and the standard deviation set at 1%, 5%, 10%, 20%, and 50% of the original standard deviation of each feature. The objective of these experiments is to see whether the algorithm can detect existing redundancy in data corrupted by Gaussian noise, and to find out at what noise level the algorithm reaches its limits.

Figure <ref> shows the two proposed scenarios of noise injection. Figure <ref>.b presents the behaviour of the algorithm with corrupted redundant features, at different levels of noise. The algorithm remains robust and detects the important features. However, at 50% of noise, the minimum value of the coverage no longer indicates the correct subset of features, which is expected for such a level of noise. On the other hand, the algorithm still gives a correct ranking of the features with respect to their importance and the information they provide (see table <ref>). Therefore, it can be concluded that the UfsCov algorithm is robust against noise.

§.§ Shuffling features

In addition to injecting noise into the data, shuffling features is an interesting experiment for evaluating the algorithm. This operation was carried out in two scenarios: first, only two redundant features are shuffled (J_4, J_5); then, three redundant features are shuffled. The results were as expected, since shuffling destroys the linear or non-linear relations between features and can therefore reduce the redundancy. As figure <ref> shows, the algorithm selected features carrying relevant, non-redundant information.

§.§ Benchmark case studies

Benchmark case studies <cit.> are also used to test the UfsCov algorithm. The datasets used in this work are: Parkinson, PageBlocks, Ionosphere, and COIL20 <cit.>. Table <ref> describes these datasets and the number of selected features for each dataset.

§.§.§ Results and discussions

In addition to applying the UfsCov algorithm to simulated and real world datasets, this subsection discusses the evaluation of the results. Here, the random forest algorithm is used as a classifier for the four datasets used above (Parkinson, PageBlocks, Ionosphere, and COIL20). The testing procedure with random forest is applied once with all features of the datasets and once with only the selected features. The procedure can be summarised as follows (a code sketch is given after the list):

* the data were split into training and testing sets (80% for training and 20% for testing);

* the training set was used to find the optimal parameters of random forest (the number of trees and the number of predictors); the training step was performed using a 10-fold cross-validation;

* a random forest model was generated with the optimal parameters found in the previous step, and then applied to classify the testing set.
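A sketch of this evaluation procedure in Python, using scikit-learn (our choice of library; the hyperparameter grid shown is illustrative, not the one used in the paper), is given below. The two metrics returned correspond to the overall accuracy and Cohen's kappa coefficient defined in the next section.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate_with_rf(X, y, seed=0):
    """80/20 split, 10-fold CV tuning of a random forest, test scoring."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    grid = {"n_estimators": [100, 300, 500],          # number of trees
            "max_features": ["sqrt", "log2", None]}   # predictors per split
    search = GridSearchCV(RandomForestClassifier(random_state=seed),
                          grid, cv=10)
    search.fit(X_tr, y_tr)                            # 10-fold CV tuning
    y_hat = search.best_estimator_.predict(X_te)
    return accuracy_score(y_te, y_hat), cohen_kappa_score(y_te, y_hat)
```

In the experiments reported below, such a run would be repeated (here, 20 times with different seeds) once with all features and once with only the selected subset.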
Two classification evaluation metrics are used:

* the overall accuracy of the classification, computed with the following formula:

OA = (1/n) ∑^n_i=1 I(y_i = ŷ_i)

where ŷ_i is the predicted class label for the ith observation using the random forest model, and I(y_i = ŷ_i) is an indicator variable equal to 1 if y_i = ŷ_i and 0 otherwise. The OA formula therefore computes the fraction of correct classifications, which means that the best classification has the highest overall accuracy.

* Cohen's kappa coefficient <cit.>, which is also used to compare the classification results of random forest. The kappa evaluation metric is computed on the test subset using the following formula:

k = (n ∑_c T_c − ∑_c G_c P_c) / (n² − ∑_c G_c P_c)

where T_c is the number of correctly classified samples for class c, n is the number of data points in the test subset, G_c is the number of samples belonging to class c, and P_c is the number of samples predicted as class c.

During the evaluation, the random forest run was repeated 20 times. Tables <ref> and <ref> present the obtained results in terms of the overall accuracy and the kappa coefficient, respectively. The results in tables <ref> and <ref> show that the UfsCov algorithm kept only the relevant, informative features; in fact, it reduces the existing redundancy in the data. Therefore, the proposed filter algorithm could be an interesting tool for minimising redundancy in data.

§.§ Environmental case studies

This section shows the potential of the proposed unsupervised feature selection algorithm on environmental data. In fact, the algorithm is applied to permafrost data and to the Indian Pines hyperspectral image.

§.§.§ Permafrost case study

The data were collected in the Alps of Switzerland. 26 features (excluding the XY coordinates) are used to predict permafrost presence or absence. For more details on the study, including more complete references and more information about the collected features, see N. Deluigi et al. <cit.>. Figure <ref> presents the unsupervised feature selection results. The minimum of the coverage measure is reached at 17 features. Furthermore, the result is evaluated using the random forest algorithm. Table <ref> shows the results of random forest, with all features and with only the selected features. The classification accuracy and the kappa coefficient are shown in figure <ref>. In this figure, random forest is applied after each step of the UfsCov algorithm.

§.§.§ Indian Pines image

The image was captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over Northwest Indiana, on June 12, 1992. The Indian Pines scene contains an agricultural and forested region (figure <ref>). The data consist of 145 × 145 pixels and 220 spectral bands, with a spatial resolution of 20 m/pixel <cit.>. In this work only 200 bands are used for the experiments, after removing 20 noisy bands (104-108, 150-163, 220) affected by water absorption. This hyperspectral case study shows that the UfsCov algorithm is able to deal with remote sensing problems. Furthermore, the proposed algorithm helps to manage high dimensional datasets (more than 100 features). Figure <ref> shows the results of the proposed algorithm. The minimum value is reached with 69 features. Table <ref> compares the two random forest models, with all features and with only the selected features.

§ CONCLUSION

This research introduced a space filling measure for unsupervised feature selection problems, together with a new filter algorithm based on the coverage measure.
The proposed UfsCov algorithm minimises redundancy in data. It demonstrated its effectiveness on simulated and real world case studies, including environmental data. The random forest results confirm the potential of the space filling concept for unsupervised feature selection problems. Finally, UfsCov was programmed in the R language and will be made available on the CRAN repository in a dedicated library.

Future developments could include the adaptation of other measures based on the space filling concept for machine learning and data mining purposes. Furthermore, it could be important to propose parallel CPU and GPU computing versions of the algorithm to speed up the execution time.

§ ACKNOWLEDGEMENTS

This research was partly supported by the Swiss Government Excellence Scholarships for Foreign Scholars. The authors would like to thank Nicola Deluigi for providing the Permafrost dataset. They would also like to thank Micheal Leuenberger, Jean Golay, and Fabian Guignard for fruitful discussions about machine learning.
http://arxiv.org/abs/1706.08894v1
{ "authors": [ "Mohamed Laib", "Mikhail Kanevski" ], "categories": [ "stat.ML", "cs.LG", "stat.ME" ], "primary_category": "stat.ML", "published": "20170627144839", "title": "Unsupervised Feature Selection Based on Space Filling Concept" }
Ana Brito ([email protected]), Centro Multidisciplinar de Astrofísica, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal; Departamento de Matemática, Instituto Superior de Gestão, Av. Marechal Craveiro Lopes, 1700-284, Lisboa, Portugal. Ilídio Lopes, Centro Multidisciplinar de Astrofísica, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal.

We have analyzed the theoretical model envelopes of eight Kepler F-stars by computing the phase shift of the acoustic waves, α(ω), and its related function, β(ω). The latter is shown to be a powerful probe of the external stellar layers, since it is particularly sensitive to the partial ionization zones located in these upper layers. We found that these theoretical envelopes can be organized into two groups, each of which is characterized by a distinct β(ω) shape that we show to reflect differences in the magnitudes of the ionization processes. Since β(ω) can also be determined from the experimental frequencies, we compared our theoretical results with the observable β(ω). Using the function β(ω), and with the purpose of quantifying the magnitude of the ionization processes occurring in the outer layers of these stars, we define two indexes, Δβ_1 and Δβ_2. These indexes allow us to connect the microphysics of the interior of the star with macroscopic observable characteristics. Motivated by the distinct magnetic activity behaviors of F-stars, we studied the relation between the star's rotation period and these indexes. We found a trend, in the form of a power-law dependence, that favors the idea that ionization is acting as an underlying mechanism, crucial for understanding the relation between rotation and magnetism, and even observational features such as the Kraft break.

§ INTRODUCTION

Table 1. Seismic and non-seismic observational constraints of the selected targets.

Star Id. (KIC) | MK | ⟨Δν⟩ (μHz) | ν_max (μHz) | T_eff (K) | log g | [Fe/H] (dex)
A1 (8228742) | F9IV-V | 62.071 (+0.022/−0.021) | 1190.5 (+3.4/−3.7) | 6122 ± 77 | 4.03 (+0.004/−0.005) | −0.08 ± 0.10
A2 (6116048) | F9IV-V | 100.754 (+0.017/−0.017) | 2126.9 (+5.5/−5.0) | 5895 ± 70 | 4.19 (+0.08/−0.08) | −0.26 ± 0.07
A3 (10454113) | F9IV-V | 105.063 (+0.031/−0.033) | 2357.2 (+8.2/−9.1) | 6216 ± 68 | 4.46 (+0.10/−0.10) | 0.00 ± 0.05
A4 (12009504) | F9IV-V | 88.217 (+0.026/−0.025) | 1865.6 (+7.7/−6.2) | 6267 ± 71 | 4.37 (+0.11/−0.11) | −0.03 ± 0.06
B1 (6679371) | F5IV-V | 50.601 (+0.029/−0.029) | 941.8 (+5.1/−5.0) | 6344 ± 131 | 3.92 (+0.21/−0.21) | −0.10 ± 0.21
B2 (7103006) | F8IV | 59.658 (+0.029/−0.030) | 1167.9 (+7.2/−6.9) | 6485 ± 86 | 4.50 (+0.11/−0.11) | 0.19 ± 0.06
B3 (9139163) | F8IV | 81.170 (+0.042/−0.036) | 1729.8 (+6.2/−5.9) | 6577 ± 69 | 4.44 (+0.10/−0.10) | 0.21 ± 0.06
B4 (9206432) | F8IV | 84.926 (+0.046/−0.051) | 1866.4 (+10.3/−14.9) | 6772 ± 73 | 4.61 (+0.11/−0.11) | 0.28 ± 0.06

Table 2. Parameters of the optimal models obtained for the stars of the sample with the code CESAM.

Star Id. | M/M_⊙ | R/R_⊙ | L/L_⊙ | T_eff (K) | Age (Gyr) | Y_0 | α | r_bcz/R
A1 | 1.24 | 1.409 | 2.470 | 6107 | 3.802 | 0.248 | 1.65 | 0.803
A2 | 0.99 | 1.187 | 1.579 | 5943 | 7.259 | 0.275 | 1.70 | 0.740
A3 | 1.20 | 1.271 | 2.404 | 6383 | 3.782 | 0.253 | 2.30 | 0.775
A4 | 1.20 | 1.424 | 3.036 | 6392 | 2.982 | 0.287 | 1.80 | 0.847
B1 | 1.55 | 2.195 | 6.984 | 6338 | 2.182 | 0.249 | 1.50 | 0.929
B2 | 1.45 | 1.960 | 5.979 | 6453 | 1.982 | 0.313 | 1.90 | 0.871
B3 | 1.40 | 1.589 | 4.318 | 6607 | 1.582 | 0.308 | 2.00 | 0.879
B4 | 1.50 | 1.572 | 6.741 | 6741 | 0.472 | 0.326 | 1.80 | 0.951

Recent advances in asteroseismology have been exceptional, primarily due to space-based missions. With its high quality data sets, the Kepler mission has revolutionized the studies in this field <cit.>.
In addition to the exceptional quality of the data, the quantity of data, now available in unprecedented numbers, is leading to new breakthroughs in stellar structure, stellar dynamics, and stellar evolution. The launch of the TESS mission <cit.>, scheduled to take place within the next year, and also, in the not so distant future, the launch of the PLATO mission <cit.>, should improve even further the quality and quantity of seismological data. These missions will provide oscillation frequencies for a large number of targets in many directions of the galaxy. Ensemble asteroseismology, the asteroseismology of clusters of stars, allows for probing scaling laws, studying stellar activity, and even testing theories of stellar evolution <cit.>.

However, uncertainties in stellar physics exist, and they have an adverse impact on these very same theories. These uncertainties are exacerbated if we aim to give a physical description of the outermost layers of solar-type stars. Outer layers are poorly described by current stellar models. In these layers we have an intricate web of different processes occurring and interacting in the same region and on the same time scale: convection, magnetic fields, and pulsations. Beneath the stellar surface (a few percent of the star's radius) this interaction becomes even more complicated due to the partial ionization of chemical elements. All this activity contributes to a complex background structure that affects the frequencies of stellar oscillations in several ways. Furthermore, solar-like oscillations are excited and damped by the near-surface convection.

It is therefore extremely important to probe the physical processes that are taking place in these more external layers of stars. These processes can be inferred by an appropriate seismic diagnostic like α(ω), the acoustic phase shift. This quantity measures the phase shift experienced by acoustic waves in the surface layers of stars. Although it is known to be a reliable observable for obtaining information about the physics of the external regions of the Sun <cit.>, here we apply the technique to other stars. In this paper we analyze and discuss the signature of this phase and its dependence on frequency, α(ω), for a group of eight Kepler solar-type stars. We also use, as a seismic probe of the external stellar layers, the derivative of α(ω). This is the seismic parameter β(ω), which can be computed from the frequencies of low-degree modes <cit.>. In the past this diagnostic method has yielded important results for the Sun. Namely, it enabled the measurement of the solar helium abundance by <cit.>, and also contributed to the calibration of the equation of state <cit.>. It is a powerful diagnostic tool, particularly suited to studying the external layers of solar-like stars.

For the eight selected stars, we have computed the theoretical dependence of the phase shift on the frequency, α(ω). We also calculated the seismic observable β(ω) and compared it with the theoretical seismic parameter β(ω). The theoretical β(ω) was obtained via two different procedures: from the structural parameters of the envelopes of stellar models, and also from theoretical tables of frequencies. We found an overall good agreement between the theoretical β(ω) and the observational β(ω) for all the stars, particularly if we consider a frequency interval in the vicinity of the observed value of the frequency of maximum power.
However, the theoretical seismic signatures reveal several distinct oscillatory behaviors that we know are related to the partial ionization processes. These behaviors allow us to split our set of eight models into two subsets of four models each, which we will designate subsets A and B. Subset A has a β(ω) signature that is more "Sun-like", whereas subset B exhibits a β(ω) with characteristics somewhat different from the shape of a "Sun-like" β(ω). The distinction is related to the intensity and location of ionization processes, which in turn are reflected in the oscillatory character of β(ω).

Stars on the main sequence are known to follow two distinct rotational regimes <cit.>. These rotational regimes are mainly determined by the effectiveness with which stars can lose their angular momentum from birth. Cool stars with T_eff < 6200 K rotate slowly, with rotation periods larger than 10 days. Because of their thick convective envelopes, they lose angular momentum due to the presence of magnetic winds in their atmospheres, quickly losing their initial rotational conditions. On the other hand, hot stars, with T_eff > 6200 K, are rapid rotators. These stars are not efficient in the generation of magnetic winds because their convective envelopes are not thick enough. Hence, they do not lose significant amounts of angular momentum and remain rapidly rotating from birth. The sharp transition from slowly rotating main-sequence stars to rapidly rotating main-sequence stars is known as the Kraft break and occurs at approximately 1.3 M_⊙ <cit.>. The stellar rotation period and the stellar activity cycle are also related <cit.>. The trend can be summed up in the formula P_cyc ∝ P_rot^q with q = 1.25 ± 0.5 <cit.>. A follow-up analysis revealed two related branches activity-wise: fast rotators with short cycles and slow rotators with long cycles. Rotators with intermediate rates of rotation can display both kinds of magnetic behavior <cit.>. Based on these results, <cit.> suggested that the two branches illustrate the action of two dynamos in different regions of a star.

In our chosen sample of eight Kepler stars, the two groups of stars with distinct rotation periods can be related to different ionization signatures occurring in the stellar envelopes. The group of theoretical models that exhibits, according to our diagnostic, a more intense pattern of ionization processes in the envelope represents the group of stars that are known to have shorter rotation periods. Conversely, the group of theoretical models where the magnitude of ionization processes is diminished represents the group of stars with larger rotation periods. Therefore, ionization can be a powerful instrument for extending our knowledge about the rotation of solar-type stars, and also for testing evolutionary theories.

The paper is organized as follows. In Section 2 we describe the method: the asteroseismological diagnostic β(ω). Section 3 introduces the targets, describes the models, and probes the envelopes of the theoretical models; the theoretical seismic parameter β(ω) is compared with the observable β(ω). In Section 4 we introduce two indexes that measure the magnitude of ionization processes in the envelope and relate them with the rotation period of the star. Finally, Section 5 presents the conclusions.

§ PROPAGATION OF ACOUSTIC WAVES IN THE ATMOSPHERES OF STARS

Oscillation frequencies, as is well known, carry information about the internal regions of the stars.
This information can be extracted and separated into two parts by an appropriate handling of the experimental data. One part of the information is asymptotic in nature and reflects the structure of stellar interiors. The asymptotic theory of non-radial acoustic oscillations allows us to solve inverse problems such as the distribution of the sound speed, density, and Brunt–Väisälä frequency in the interior of stars. The second part of the information concerns the regions of the stars where the hypothesis of asymptotic analysis is no longer valid. Accordingly, this part of the information reflects the properties of the outer layers of stars. Indeed, in these layers the wavelength of acoustic waves is smaller than, or of the same order as, the scale of variation of the background state. These regions are responsible for partial wave reflection and hence for the establishment of a resonant cavity. The information carried by the trapped internal waves can be extracted from observational data and takes the form of a frequency dependence of the phase shift α, which results from reflection. The outermost layers of a star are also the layers where a breakdown of adiabaticity occurs and where we have inhomogeneities associated with convection and magnetic fields.

The equations describing the non-radial adiabatic acoustic oscillations near the stellar surface <cit.> are obtained within two main approximations. One is the Cowling approximation, which neglects the Eulerian perturbation to the gravitational potential. The second takes into account the fact that, for low-degree modes, all the trajectories of the sound waves trapped in the stellar interior are almost vertical. The radial component of the wavevector near the surface is much greater than the horizontal one, and this makes the oscillations depend on the frequency alone. Hence, the outer phase shift is a function of frequency, α = α(ω). This phase shift is then determined by the structure of the more external stellar layers where the reflection of internal acoustic waves occurs.

The problem created by the fact that an asymptotic description is invalid in the upper layers of the stars can be eliminated by matching the asymptotic solutions in the stellar interior with the exact solution near the surface. This procedure leads to an equation for the eigenfrequencies that contains a frequency-dependent phase shift. The dispersion relation of the stationary acoustic waves is given by

F(w) = π [ n + α(ω) ] / ω,

where w = ω/L with L = l + 1/2. Here, ω is the angular frequency and n and l are, respectively, the radial order and the degree of the mode. F(w) is determined by the radial distribution of the sound speed in the stellar interior, whereas α(ω) stands for the total phase shift of the acoustic waves reflected by the surface. This expression is known as the Duvall law <cit.> and provides a bridge between theory and observational data.

§.§ The Phase Shift from a Stellar Envelope

§.§.§ The Acoustic Potential

Near the surface, and because the contribution of gravity can be neglected (the Cowling approximation), the complete fourth-order system of differential equations describing the linear adiabatic acoustic oscillations can be reduced to a Schrödinger-type equation with a suitable choice of variables:

d²ψ/dτ² + (ω² − U²) ψ = 0,

where τ represents the acoustic depth, defined as τ = ∫_r^R dr/c, and ψ = r √(ρ c) ξ, with ξ being the displacement in the radial direction.
U² is the reflecting acoustic potential of the star and is given by

U² = (g/c) ( g/c − d ln h/dτ ) + [ (1/2) d ln ζ/dτ ]² − (1/2) d² ln ζ/dτ²,

with ζ = r² h/c and h(r) = ρ^{-1} exp( −2 ∫_0^r g/c² dr ). In the above expressions, g is the gravitational acceleration, c represents the adiabatic sound speed, and r is the stellar radius. A polytropic model of a given index (n = 3) can be used as an approximation of the background of solar and stellar envelopes. It has proven to be a suitable reference against which to compare a realistic theoretical model. In such a polytropic approximation the acoustic potential is inversely proportional to the square of the acoustic depth, U² ∝ 1/τ², and this stratification was used by <cit.> to determine the contributions of the acoustic potential to the seismic parameter β(ω) in the Sun. However, deviations from a polytropic stratification induce deviations in the profile of the sound speed, and this motivates the analysis of the scattering processes occurring in the external layers of the stars to infer their structure.

The solar acoustic potential has been computed and discussed many times in the past by several authors <cit.>. The shape of the solar acoustic potential allows the identification of the superadiabatic region (τ ∼ 80 s), the zone of partial ionization of helium at τ ∼ 600 s, and the base of the convective zone at τ ∼ 2100 s <cit.>.

§.§.§ The Phase α and Its Derivative: The Seismic Diagnostic β

Equation <ref> can be conveniently solved by the phase-function method <cit.>. In this method the eigenfunction ψ takes the form

ψ(τ, ω) = A(τ, ω) cos( π/4 + πα(ω, τ) − ωτ ),

and for each value of the frequency ω, the dependence α(ω) is determined as an asymptotic solution of the Cauchy problem for the phase equation

d(πα)/dτ = (U²/ω) cos²( ωτ − π/4 − πα(ω, τ) )

at sufficiently large values of τ. The boundary condition is imposed at r = R, taken to coincide with the location of the temperature minimum <cit.>. The phase α can then be computed numerically from τ = 0 to a τ_max where the contribution of the variation of the background state becomes negligible. Usually, for the Sun and solar-type stars, τ_max is taken near the base of the convective zone.

For a given stellar model, β(ω) can be easily calculated from the corresponding acoustic potential. Indeed, following <cit.>, the differentiation of equation <ref> yields

β(ω) = −ω² d/dω ( α/ω ).

This function is sensitive to the outermost stellar layers. The contribution of the helium ionization zone, in the solar case, is known to produce a significant periodic component in β(ω) <cit.>. The simple procedure we just described for calculating the dependences α(ω) and β(ω) from a theoretical stellar model envelope offers an effective method for seismically probing the structure of the lower stellar atmospheres and the outer layers of the convection zone in solar-type stars.

§.§ The Phase Shift from an Acoustic Oscillation Table of Frequencies: The Seismic Observable β

By fitting together the solutions obtained for the inner and outer regions of a star, we are establishing a relation between the frequency dependence of the phase shift and the structure of the outer reflecting layers. A dispersion relation like equation <ref> gives the possibility of solving an inverse problem. From a table of frequencies (ω, n, ℓ) it is possible to determine the acoustic phase β(ω).
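Before turning to that inverse problem, the forward computation described above, integrating the phase equation for α(ω) and differentiating to obtain β(ω), can be sketched numerically. This Python fragment is illustrative only: the function names, the linear interpolation of U², the zero boundary condition for πα at the temperature minimum (τ = 0), and the integration tolerance are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def alpha_of_omega(tau, U2, omegas):
    """Integrate the phase equation for each angular frequency omega.

    tau : acoustic depth grid (s), increasing inward from the surface
    U2  : acoustic potential U^2 sampled on that grid
    Returns alpha(omega) evaluated at tau_max for each omega.
    """
    U2_interp = lambda t: np.interp(t, tau, U2)
    alphas = []
    for w in omegas:
        # d(pi*alpha)/dtau = (U^2/omega) * cos^2(omega*tau - pi/4 - pi*alpha)
        rhs = lambda t, pa: (U2_interp(t) / w) * \
            np.cos(w * t - np.pi / 4 - pa[0]) ** 2
        sol = solve_ivp(rhs, (tau[0], tau[-1]), [0.0], rtol=1e-8)
        alphas.append(sol.y[0, -1] / np.pi)   # recover alpha from pi*alpha
    return np.array(alphas)

def beta_of_omega(omegas, alphas):
    """beta(omega) = -omega^2 d/domega (alpha/omega), by finite differences."""
    return -omegas ** 2 * np.gradient(alphas / omegas, omegas)
```

The derivative in beta_of_omega uses central differences on the frequency grid, so a reasonably dense set of frequencies is needed for a smooth β(ω).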
The process of taking the partial derivatives of equation <ref> with respect to n and ℓ yields not only equation <ref> but also

β(ω) = [ ω − n (∂ω/∂n) − L (∂ω/∂L) ] / (∂ω/∂n),

with L = ℓ + 1/2. This equation offers the possibility of recovering, from a given table of frequencies (an eigenvalue matrix {ω_ℓ,n}), the frequency dependence of the effective phase shift, which carries valuable information on the physical structure of the stellar interior. A virtue of this approach, based on an analysis of the phase shift of the scattering of the acoustic waves in the external regions of stars, is that it offers direct contact with the physical processes occurring in these layers. This is done through the acoustic potential of the star: changes in the acoustic potential are reflected and magnified in the phase shift and in its derivative.

§ PROBING THE OUTER LAYERS OF KEPLER F-STARS

§.§ The Models

We selected a total of eight Kepler stars of spectral class F to conduct our theoretical analysis of the outer layers. We searched for stars with 10 or more frequencies per degree ℓ, since the seismic diagnostic β(ω) benefits from a high number of frequencies being used in its computation. Only modes that were correctly detected and fitted were used. We modeled the eight selected stars using the evolutionary code CESAM <cit.>. This version of CESAM uses the solar mixture of <cit.>, the OPAL 2001 equation of state, and the most recent OPAL opacities. The nuclear reaction rates were computed using the NACRE compilation <cit.>. Eddington's gray law was used to describe the atmosphere, and convection was described by the mixing-length theory of <cit.> without overshoot. We also included the effects of diffusion according to the Burgers formalism <cit.>. For each star we compute a grid of models by varying the mixing-length parameter, the mass of the star, and the metallicity within physically meaningful ranges. For each set of parameters, an automatic search was performed to determine the age that simultaneously reproduces the values of the non-seismic constraints listed in table <ref>. The spectral type of each star was taken from the astronomical database SIMBAD <cit.>, whereas the seismic and non-seismic constraints were taken from <cit.> and <cit.>. The age reached also reproduces the value of the mean large frequency separation that is inferred from scaling relations. Finally, the age is fine-tuned through a minimization of a reduced χ² to reproduce the observations as closely as possible, a method identical to that used by <cit.>. The theoretical mode frequencies were computed using the ADIPLS code <cit.>. The properties obtained for the models of our sample of eight Kepler stars are listed in table <ref>. The models differ in mass, radius, luminosity, age, and chemical composition, highlighting the variety of stellar structures that occurs even within one spectral class.

The left panel of figure <ref> shows an H–R diagram of our models where the luminosity is plotted against the effective temperature. The right panel of figure <ref> shows the variation with cyclic frequency (ω = 2πν) of the outer phase shift α(ν) for these stellar envelopes. This dependence was obtained numerically from equation <ref> for each of the stellar envelopes represented in table <ref>. Figure <ref> exhibits in an unequivocal manner the relation between α(ν) and T_eff.
This is a natural relationship, since the phase shift is determined by the turning points of the acoustic waves and is hence linked with the stellar atmosphere, where the effective temperature plays a significant role. The models in figure <ref> also show a well-known theoretical trend: stars with higher masses tend to have lower values of the phase shift <cit.>. The dependence between the phase α and the effective temperature was analyzed by <cit.>.

§.§ The Diversity of Acoustic Cavities Reflecting the Multiplicity of Ionization Profiles

The acoustic potentials (equation <ref>) and the corresponding β(ν) signatures for all eight envelopes of the theoretical models are shown in figure <ref>. The potentials have been drawn from r = R, the location of the temperature minimum, down to the region just before the base of the convective zone in each case. The main difference between the acoustic potentials relates to the partial ionization zones of the chemical elements. One clearly notices the diversity associated with the region of the second ionization of helium. For the models of group B, the strong depression that characterizes this ionization region is much more pronounced than in the group A models. Another dip, possibly associated with the region of the first ionization of helium, is clearly visible for the potentials of group B, but is virtually imperceptible for the potentials of group A. The characteristic sinusoidal behavior of β(ν) is related to the region of partial ionizations in the adiabatic convective zone. The effect of partial ionizations also leaves a characteristic signature in the first adiabatic exponent Γ_1 <cit.>. This signature takes the form of a well-known distinct bump. We display the first adiabatic index for each stellar model in figure <ref>. Figures <ref> and <ref> reflect the diversity associated with the partial ionization processes in the theoretical envelopes of the solar-type stars. This diversity relates to the locations of the ionization zones and also to the intensity of the ionizations themselves.

In figures <ref> and <ref> we present a comparison of the theoretical seismic parameter with the seismic observable β(ν). Both are plotted against cyclic frequency ν. The theoretical seismic parameter β(ν) was computed using two different approaches: (1) it was obtained from the structural parameters of the stellar model envelopes, and (2) it was calculated from a theoretical table of frequencies. In the latter case, and in order not to overload the figures, we present β(ν) only for the modes with degrees l = 0, 1. Superimposed on the theoretical signatures are the observational β(ν). The observational frequencies and the corresponding uncertainties were taken from <cit.>, and we have considered, in the calculations, modes with degrees from 0 to 2. Computations were performed over a range of frequencies such that, for each star, only modes correctly detected and fitted as described in <cit.> are included. Likewise, we present the seismic observable β(ν) only for modes with degrees l = 0, 1.

Since the observational frequencies are only available for low-degree modes, they are influenced by the effect of the gravitational potential. On the other hand, the dependence of the phase shift on the frequencies, when computed from the structural parameters of the theoretical model envelope, is obtained in the Cowling approximation, which neglects the perturbation of the gravitational potential.
This is reflected in a well-known mismatch between β(ν) obtained from the acoustic potential and β(ν) obtained from a theoretical table of frequencies in the case of low-degree modes <cit.>. This mismatch is more clearly seen in certain models, suggesting that the gravitational potential does not influence the oscillation modes in a similar way for different stars. Comparing the results for β(ν) obtained from the theoretical frequencies and from the observational frequencies, it is possible to argue that the differences are not of a purely systematic character. This might indicate that the source of the discrepancy is related to the internal distribution of the sound speed, thus indicating that it has a thermodynamic nature.

The quasi-periodic signature of helium partial ionization in various seismic parameters has been discussed by many authors using different methods <cit.>. In the particular case of the seismic parameter β(ν), the sinusoidal component is known to depend on the contribution of the partial ionization processes. From the comparison between theoretical and observational signatures (figures <ref> and <ref>), we note that the amplitudes of the oscillatory components of β(ν) are, in all the cases studied, underestimated by the theoretical models. This underestimation is enhanced in the cases treated in group B. For the Sun, the amplitude of the quasi-periodic component can be related to the abundance of helium in the model <cit.>. Therefore, the underestimation of the amplitudes by the theoretical models may indicate that the helium content of the models is far from the real helium content of the star. Another possibility is that in the real star this periodic component is increased by ionizations of other chemical elements <cit.>.

§ THE SEISMIC DIAGNOSTIC β AND THE STELLAR ROTATION PERIOD

§.§ The Ionization Indexes Δβ_1 and Δβ_2

With the purpose of characterizing the sinusoidal behavior of the seismic parameter β(ν), which we know to be a signature of the partial ionization processes taking place in the model envelopes of solar-type stars, we introduce two indexes. These indexes aim to quantify the amplitude of the periodic component in the seismic parameter β(ν). Therefore, the indexes will act as indicators of the magnitude of partial ionization processes in the outer layers of the model of the star. We define them according to the following procedure. First, we locate the relative minimum, and the two relative maxima, closest to the frequency of maximum power ν_max. The value of ν_max is obtained from the scaling relation proposed by <cit.>. Then we measure the heights of these two relative maxima, as illustrated in figure <ref>. These heights define the indexes

Δβ_1 = β(ν_max1) − β(ν_min) and Δβ_2 = β(ν_max2) − β(ν_min).

Here, β(ν_min) represents the value of the relative minimum located closest to ν_max, while β(ν_max1) and β(ν_max2) are the values of the two relative maxima closest to ν_max. The indexes Δβ_1 and Δβ_2 are represented in figure <ref> for all the signatures β(ν) that were computed from the theoretical model envelopes. In the same way, it is possible to define these indexes for the theoretical β(ν) computed from a table of frequencies. Moreover, these indexes can also be computed from the observable β(ν). In this case, they can be measured around the observational frequency of maximum power, as shown in figure <ref>.
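A minimal numerical sketch of this measurement, assuming a sampled β(ν) curve, is given below. The function name and the use of SciPy's relative-extrema finder are our choices; the assignment of the lower- and higher-frequency neighbouring maxima to Δβ_1 and Δβ_2 is an assumption made for illustration, and the sketch assumes that a relative maximum exists on each side of the minimum.

```python
import numpy as np
from scipy.signal import argrelextrema

def ionization_indexes(nu, beta, nu_max):
    """Delta-beta_1 and Delta-beta_2 from a sampled beta(nu) curve.

    Locates the relative minimum of beta closest to nu_max and its two
    neighbouring relative maxima, then returns the two heights defined
    in the text. nu must be sorted in increasing order.
    """
    i_min = argrelextrema(beta, np.less)[0]      # indices of local minima
    i_max = argrelextrema(beta, np.greater)[0]   # indices of local maxima
    m = i_min[np.argmin(np.abs(nu[i_min] - nu_max))]  # minimum near nu_max
    left = i_max[i_max < m][-1]    # neighbouring maximum below the minimum
    right = i_max[i_max > m][0]    # neighbouring maximum above the minimum
    d1 = beta[left] - beta[m]      # height taken as Delta-beta_1 (assumed)
    d2 = beta[right] - beta[m]     # height taken as Delta-beta_2 (assumed)
    return d1, d2
```

Because the observational β(ν) carries error bars, in practice the curve would be smoothed or the extrema propagated with their uncertainties before reading off the two heights.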
The observational values of Δβ_1 and Δβ_2 can then be compared to the predicted theoretical values of Δβ_1 and Δβ_2, revealing information about the microphysics of the outer layers of the star.

§.§ Magnetic Activity, Rotation and Ionization

The influence of magnetic activity on the properties of solar and stellar p-modes is well documented, e.g., <cit.>, <cit.>, and <cit.>. If magnetic fields can affect the frequencies of the p-modes, they can probably also affect the phase shift of the acoustic waves propagating in this region. Here, we use our results for the phase α(ν) and the diagnostic β(ν) to learn about a possible influence of magnetic activity on these quantities. We use the length of the star's rotation period as an indicator of magnetic activity. It is known that rotation and magnetic activity are related. This relation is not always straightforward, and it may even be quite complex and diverse. Fast rotators are generally more active, i.e., they display higher magnetic activity amplitudes than slow rotators. However, fast rotators are less likely to exhibit periodic magnetic behavior than slow rotators <cit.>. On the other hand, stellar rotation is a crucial property for understanding stellar structure and stellar evolution, since the rotation period of a star is known to be a strong function of mass and age.

The two main rotational regimes of stars on the main sequence were discovered long ago, and they differ by the value of the effective temperature of the star <cit.>. Cool stars with T_eff < 6200 K are usually slow rotators. They tend to have rotation periods larger than 10 days. These large rotation periods indicate that the star is losing angular momentum after entering the main sequence. The loss of angular momentum is thought to occur due to the action of magnetic winds in the thick convective envelopes of these stars <cit.>. The concept of magnetic braking is used in this context, since the star experiences a spin-down of magnetic origin. This loss of angular momentum is such that stars born with very different rotation periods will, after some time on the main sequence, all rotate with similar periods. More precisely, stars in this regime, at an age of approximately 0.5 Gyr, rotate without the influence of their initial conditions <cit.>. It is this behavior of the angular momentum loss that makes gyrochronology possible. Gyrochronology relates the rotation period of a star to its age <cit.>. This relation is possible because, according to the description above, all stars of the same age (and mass) will rotate with the same period. The other rotational regime on the main sequence is observed in stars with T_eff > 6200 K. These are hot, rapidly rotating stars with thin convective envelopes. Because of the narrowness of their envelopes, these stars are thought to experience little or no magnetic braking. This means that, for stars in this regime, the angular momentum loss is minimal and they remain rapidly rotating after entering the main sequence. Since they do not experience a strong spin-down, they are incompatible with gyrochronology <cit.>.

As we just saw, the main aspects of the two rotational regimes on the main sequence are intimately connected with processes occurring in the convective envelopes of these stars. At the same time, the convective zones are also the stage for the partial ionizations of the chemical elements that compose the star.
Therefore, it seems natural to look for a relation between the specific partial ionization signatures of each star and its rotational period. Here, we propose to use the ionization indexes Δβ_1 and Δβ_2 to explore a possible connection between partial ionization processes occurring in the outer layers of solar-type stars and their rotation periods. To this end, we plotted the observational Δβ_1 and Δβ_2 against the rotation period of the stars. The results are shown in figure <ref>. The observational values of Δβ_1 and Δβ_2 in figure <ref> were obtained for modes with degree l = 1. The use of l = 1 modes is due to the fact that they usually have smaller error bars, and at the same time they suffer less from boundary effects when computing numerical derivatives from central differences. Nevertheless, the results still hold for l = 0 modes, or if we consider the mean value obtained for l = 0 and l = 1. Modes with degree l = 2 usually have large error bars, so we did not include them in our study. We found that there is a relation, in the form of a power law, between the rotation period of a star and the ionization indexes Δβ_i (i = 1, 2). Rotation periods were taken from <cit.> and <cit.>. The trend obtained with our sample of eight solar-type stars is compatible with the two rotational regimes described above, which are known to be abruptly separated by the Kraft break. This result suggests that partial ionization might be a key process for better understanding rotation in solar-type stars. Moreover, we also plotted the theoretical ionization indexes Δβ_1 and Δβ_2 against the age and the mass of our models (figure <ref>). The correlations obtained support the previous result since, as is well known, the rotational period is a function of the mass and age of the star. We think that these ionization indexes may prove to be important diagnostic tools that can improve our knowledge of the structure, and even dynamics, of solar-type stars.

§ CONCLUSIONS

We studied the outer layers of the theoretical models of eight Kepler F-stars. This was accomplished by using a seismic diagnostic based on the dependence of the surface phase shift on frequency, which arises from the reflection of the acoustic modes. Our results allowed us to split the models into two subgroups, A and B, according to the characteristics of the partial ionization zones of their chemical elements. The models of stars in group B show stronger ionization patterns when compared with the models of stars in group A. From the point of view of magnetic activity, F-stars are known to be very interesting, with many of them exhibiting magnetic cycles <cit.>. Our results seem to be consistent with the paradigm that stars can be divided into two groups according to their level of magnetic activity. Stars represented by the models in group B have shorter rotation periods, which is evidence of shorter cycle periods, whereas stars represented by group A have larger rotation periods and hence longer cycles. The distinction between the theoretical models of groups A and B, which is related to the characteristics of the partial ionization processes, seems to suggest that the more ionized a star is, the shorter its rotation period. This relation between ionization and rotation can be useful for further understanding the already established relation between rotation and magnetic activity.
Indeed, in solar-type stars the magnetic activity is the result of a dynamo process, which in turn results from an interrelation between convection, rotation, and magnetic fields. Therefore, we would expect the presence of a stellar dynamo operating in the real Kepler stars used as targets of this study. The mechanisms of this dynamo should be similar to those in the Sun, with the magnetic field being generated in a thin layer (the tachocline) located at the transition region near the base of the convective zone. As the luminosities of the stars increase, the base of the convective zone gets closer to the surface. For the models in group B, the base of the convective zone is located at a depth of 5-12% inside the star, whereas for the models in group A its location is 15-27% of the radius. This means that, for stars similar to those represented by group B, the magnetic field will be generated near the surface and very close to the ionization zones.

To help describe the partial ionization processes in the theoretical envelopes of these solar-type stars, we introduced two seismic ionization indexes, Δβ_1 and Δβ_2. They measure the magnitudes of partial ionization processes in the outer layers of the stars. These indexes can be computed from the structural parameters of the theoretical model envelopes, from a theoretical table of frequencies, and also from the observational frequencies. By computing the observational values of Δβ_1 and Δβ_2 for the sample of chosen F-stars, we uncovered a relation, in the form of a power law, between these indexes and the rotation period of the star. Stellar rotation is complex, since it connects with many unclear aspects of stellar astrophysics. At the same time, however, it is also a powerful diagnostic tool, since observational data on rotation periods are becoming abundant and of higher quality <cit.>. The recent disagreement between asteroseismic data from Kepler and the gyrochronologic relations <cit.>, and the subsequent finding that stars close to the age of the Sun (and older) are not experiencing the magnetic braking they should according to the period–age relationships expected by gyrochronology <cit.>, is an example of the complex background that connects rotation, magnetism, and the physics of the interiors of stars. The results uncovered in this work, relating the ionization processes with the rotation period of stars, strongly suggest that ionization is an important underlying mechanism for better understanding the relations between microphysics and the physics of rotation and magnetism.

We thank the anonymous referee for comments and suggestions that led to a more accurate and robust manuscript. We are also grateful to P. Morel for making available the CESAM code for stellar evolution, to Jordi Casanellas for the modified version of the same code, and to J. Christensen-Dalsgaard for his Aarhus adiabatic pulsation code (ADIPLS). This work was supported by grants from "Fundação para a Ciência e Tecnologia" (SFRH/BD/74463/2010).
http://arxiv.org/abs/1706.08630v1
{ "authors": [ "Ana Brito", "Ilídio Lopes" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170627002557", "title": "A Theoretical Study of the Outer Layers of Eight Kepler F-stars: The Relevance of Ionization Processes" }
http://arxiv.org/abs/1706.08286v3
{ "authors": [ "Anurag Anshu", "Rahul Jain", "Naqueeb Ahmad Warsi" ], "categories": [ "quant-ph", "cs.IT", "math.IT" ], "primary_category": "quant-ph", "published": "20170626085221", "title": "A hypothesis testing approach for communication over entanglement assisted compound quantum channel" }
Training a Fully Convolutional Neural Network to Route Integrated Circuits

Sambhav R. Jain (Corresponding author: [email protected]; indicates equal contribution), Oracle America. Kye Okabe, Stanford University. Accepted 24/05/2017

We present a deep, fully convolutional neural network that learns to route a circuit layout `net' with appropriate choice of metal tracks and wire class combinations. Inputs to the network are the encoded layouts containing the spatial locations of pins to be routed. After 15 fully convolutional stages followed by a score comparator, the network outputs 8 layout layers (corresponding to 4 route layers, 3 via layers and an identity-mapped pin layer), which are then decoded to obtain the routed layouts. We formulate this as a binary segmentation problem on a per-pixel per-layer basis, where the network is trained to correctly classify pixels in each layout layer to be `on' or `off'. To demonstrate learnability of layout design rules, we train the network on a dataset of 50,000 train and 10,000 validation samples that we generate based on certain pre-defined layout constraints. Precision, recall and F_1 score metrics are used to track the training progress. Our network achieves F_1 ≈ 97% on the train set and F_1 ≈ 92% on the validation set. We use PyTorch for implementing our model. Code is made publicly available[Code: https://github.com/sjain-stanford/deep-route].

§ INTRODUCTION

Routing is a complex spatial optimization problem in the physical design of integrated circuits (ICs) that is known to be NP-complete in most cases <cit.>. The task is to optimally connect circuit segments spanning multiple layout hierarchies and multiple wire classes, while complying with a strict set of design rules dictated by the foundry's process design kit (PDK). The quality of routing determines circuit performance (frequency) and reliability, and can also impact area. Depending on the type of circuit (ASIC / memory / processor), the routing objective may prioritize one over the other, but in general the expectation is to:

* Minimize path delay (resistance/capacitance)
* Minimize congestion (wire crowding, hot spots)
* Maximize routability (availability of tracks)
* Maximize repeatability

Formally, a circuit layout consists of several sub-blocks (or cells) with input/output ports (or pins). Connections between pins use channels (or tracks) in several metal layers that run orthogonal to one another, separated by insulating layers with connecting vias. If we treat cells as edges, then a node in a circuit graph corresponds to a net in the layout, which connects the driver's output pin with the input pin(s) of the receiver(s). Traditionally, when circuit sizes were tractable, routing was primarily a manual task. However, in the past couple of decades, with exploding circuit sizes and more than a few billion transistors to be routed, the use of auto-routers has become indispensable. Shrinking technology nodes and more stringent design rules further impact routability. Commercial electronic design automation (EDA) tools try to tackle this non-linear optimization problem using various algorithmic approaches <cit.>, such as (i) exponential algorithms that exhaust the search space for a solution, (ii) heuristic algorithms, (iii) optimal algorithms for special cases of the problem, or (iv) approximation techniques.
These approaches are iterative in nature and rely on a continually evolving/changing set of design rules. The routed layouts may further require significant manual effort and fine tuning to improve the non-optimal routes. We explore a learning-based approach wherein we train a deep, fully convolutional network (FCN) to route a layout net while relying on its ability to learn implicit design rules from the training data. In order to demonstrate learnability of layout design rules by the network, we pre-define a set of basic constraints (not specific to any technology node) that are embedded in the ground truth layouts generated for a single net case. The network is trained on our dataset of 50,000 train and 10,000 validation samples. Input data (containing pins) and labels (containing pins and routes) are first encoded on a binary basis per-pixel per-layer (detailed in Section <ref>), before feeding to the 15-stage FCN. FCN outputs go through a score comparator to give 8-layer encoded layouts (4 route layers, 3 via layers, and 1 pin layer). We decode this to 8-bit RGB for visualization. The various architectural choices of the FCN are covered in depth in Sections <ref> and <ref>. The network shows the ability to learn (i) the identity-map for pins, (ii) optimal track locations for routes, (iii) suitable wire class combinations, and (iv) via locations.

§ RELATED WORK The task of producing routes using orthogonal layers of parallel channels and vias at intersections is often tackled heuristically, since no efficient exact algorithm is known for this NP-complete problem <cit.>. Previous work in this field <cit.> mostly relies on explicit rule-based algorithms to tackle parts of this complex task. For instance, Zajc <cit.> proposed using hierarchical algorithms for automatic routing of interconnects in two-layer channels. Rivest <cit.> showed a "greedy" heuristic channel router assuming all pins and wiring sit on a common grid. The other class of routers, which has received attention lately, is objective-function based. Alpert <cit.> proposed a combination of minimum spanning tree (Prim) and shortest path tree (Dijkstra) objectives for performance-driven global routing. Constructing routing trees with different objectives was also explored, such as a timing objective <cit.>, buffer insertion and wire sizing objectives <cit.>, and congestion and thermal objectives <cit.>. A completely different objective for diagonal routing, as opposed to orthogonal (Manhattan) routing, was also proposed <cit.>. One of the very first ideas of using neural networks for circuit routing came from Green and Noakes <cit.>, who divided the routing task into several stages and combined multiple small back-propagation networks to form a complex neural system. They proposed to restrict the routing task to predefined areas and slide the context window to complete routes in sections. This divide and conquer approach helped reduce the complexity of their artificial neural network router. Our work takes inspiration from the recent advances in convolutional neural networks (CNN) <cit.>, which are better suited for visual tasks as they preserve spatial information in the inputs. Similar to <cit.>, we fix the routing window to a predefined layout size; however, in contrast to <cit.>, we develop a single, end-to-end, deep network using convolutions, which holistically learns multiple design rules during training, and is able to route using different wire class combinations, depending on the spatial spread of the pins.
To the best of our knowledge, this is the first attempt at routing a circuit layout net using a convolutional neural network.

§ DATASET Overview. We develop our own dataset owing to the lack of a publicly available layout dataset, and the need to use simplistic design rules to assess learnability by the network (feasibility). In this section we discuss the design choices and constraints used when generating 50,000 layouts for training and 10,000 layouts for validation. Each layout sample contains both data (pins only) and labels (pins, routes, vias) for a single net (see Figure <ref>). The image is pixel-wise binary encoded in 8 layers of the layout, viz. [pin, M_3, Via_3, M_4, Via_4, M_5, Via_5, M_6]. So each pixel in a given layer is either '1' or '0', indicating the presence or absence of the layer at that spatial location. Our reasoning behind this encoding scheme is discussed in Section <ref>. Thus training data are stored as tensors of shape N × 1 × H × W, and labels as tensors of shape N × 8 × H × W, where H=W=32. The layouts were sized 32×32 pixels, which we found computationally feasible. We use chunked HDF5 <cit.> when storing and loading this large dataset to avoid memory bottlenecks.

Design rules. These design rules are fundamental to traditional layout design and do not correspond to any specific technology node.
* M_2n tracks run horizontally.
* M_2n+1 tracks run vertically.
* M_n contacts M_n+1 through Via_n.

Design choices. We made the following design choices to reduce the complexity of network implementation. These could be scaled as needed given sufficient resources.
* Maximum n_pins = 5.
* Layout context window = 11μ m × 11μ m.
* Total allowed route layers = 4; total via layers = 3.
* Routes are limited to two wire classes (branch-leg) (e.g. [M_3,M_4] or [M_4,M_5] or [M_5,M_6]).
* Metal tracks use the pixel grid (one track per pixel).
* Pin layer is lower than all route layers, and routes need not drop vias down to the pin layer.

Higher metals normally have less wire resistance per unit length compared to lower metals, due to their large cross-sectional area and/or better material properties. However, higher metals also require additional vias to jump up/down to/from higher layers, which adds to the total resistance. As a result, there is a break-even route distance above which a higher metal is preferred. Moreover, for a given layout size, depending on wire performance data there is only a limited set of wire classes which makes sense for routing. Using higher or lower wire classes than necessary would be non-optimal. In our case, we use empirical wire and via resistance data[Not released due to proprietary reasons] for four such wire classes (say M_3 to M_6). We select 11μ m as the size of the context window as it gave us a balanced dataset among the three wire class combinations, viz. [M_3,M_4], [M_4,M_5], [M_5,M_6]. By defining a context window for routing, we leave open the possibility to have it slide over the entire layout to complete routing in segments, similar to <cit.>, but we leave the slide-and-route implementation to future work.

Route algorithm. For each layout we first sample (random uniform) n_pins from {2,3,4,5} and (x, y) co-ordinates for each pin from {0-31} (since the pixel grid is 32×32). For a given pin configuration, the direction with the largest spread of pins is chosen as dominant. The dominant direction uses a branch, while the non-dominant direction uses legs to connect individual pins to the branch (a toy sketch of this assignment follows).
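For concreteness, the dominant-direction choice and the branch/leg assignment can be sketched in a few lines. This is our own toy reconstruction of the description above, not the authors' generator; in particular, placing the branch track at the median non-dominant coordinate is an assumption, since the excerpt does not state the track-selection rule.

import numpy as np

def branch_and_legs(pins):
    """Toy branch/leg assignment: the branch runs along the axis with the
    largest pin spread; legs connect pins to the branch on the other axis."""
    pins = np.asarray(pins)                       # shape (n_pins, 2), grid coords in 0..31
    spread = pins.max(axis=0) - pins.min(axis=0)  # spread along x and y
    dom = int(np.argmax(spread))                  # dominant axis: 0 -> x, 1 -> y
    track = int(np.median(pins[:, 1 - dom]))      # assumed branch track position
    branch = (int(pins[:, dom].min()), int(pins[:, dom].max()), track)
    legs = [(int(p[dom]), int(p[1 - dom]), track) for p in pins]  # one leg per pin
    return dom, branch, legs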
The choice of metals for branch and legs is done so as to optimize the combined wire and via resistance, as explained earlier. From Figure <ref>, we can qualitatively see that shorter routes use lower metals while longer routes use higher metals. We can also see that a branch is always assigned to the dominant direction, and legs to the other direction. For visualization, the routed layouts are decoded to 8-bit RGB as follows: M_3 as green, M_4 as red, M_5 as grey and M_6 as blue.

§ MODEL Overview. Our proposed model consists of a single, end-to-end, deep network using convolutions, which takes pin locations for a net as inputs and generates routes in one of the three wire class combinations, using layout design rules it learns from training data. We first present a binary scheme of encoding the input data in Section <ref>. The FCN model (see Figure <ref>) and its architectural details are covered in Section <ref>. Then in Sections <ref> and <ref> we describe the loss function and the details of training respectively.

§.§ Binary Encoding Scheme In typical image generation problems such as generative adversarial nets (GAN) <cit.>, at the point of weight initialization, networks are intrinsically capable of generating an arbitrary color at any pixel coordinate. Subsequently, through the course of training, meaningful color/coordinate combinations must be learned. The standard 8-bit RGB color gamut has 2^24 color representations (3 color channels, 8 bits each), which is significantly larger than what we need. Since our data only deals with 8 layout layers (4 route, 3 via, 1 pin), we can immediately set an upper bound on the representation combinations per pixel to 2^8. To take advantage of this insight, we choose to encode our data on a binary basis per-pixel per-layer. Hence instead of representing images as tensors of shape 3 × H × W with range {0-255} (8-bit RGB), we encode our dataset as tensors of shape 8 × H × W with a binary range {0, 1}. By doing so, we are able to formulate this as a layer-wise binary segmentation task, with cross entropy loss on the scores (16 × H × W) to let the network make a binary decision on the presence of each of the 8 layers at every pixel in the context window H × W.

§.§ Network Architecture Figure <ref> illustrates our model with the activation volumes after each stage. The FCN has a total of 15 convolutional[Fully connected layers or excessively deep layers were avoided.] stages (see Section <ref> for further explanation on the choice of network depth). All convolutions except the last one are followed by batch normalization (BN) <cit.> and a leaky rectified linear unit (leaky ReLU) <cit.>. The last convolution stage (L_15) outputs the scores, hence is not followed by BN and leaky ReLU as they would not affect the relative scores. Encoded inputs to the FCN are tensors of shape N × 1 × H × W, where N is the mini-batch size and H=W=32 (spatial dimensions). The first stage (L_1) uses 33×33 convolution with 16 filters, whereas stages L_2 through L_15 use 3×3 convolution with 16 filters. We believe the large receptive field at the head of the network allows a fast grasp of the overall pin locations and helps the network learn spatial information better, as will be explored in Section <ref>. Strides and padding for convolutions are such as to preserve the spatial dimension of the feature maps at each stage.
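This stage structure can be transcribed into a short PyTorch sketch. The following is our reconstruction from the text, not the authors' released code; the leaky-ReLU slope and the exact 'same'-padding settings are assumptions.

import torch.nn as nn

def conv_bn_lrelu(in_ch, out_ch, k):
    # 'same' padding keeps the 32x32 spatial size at every stage
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.01, inplace=True),  # slope assumed; the paper does not specify it
    )

class RouteFCN(nn.Module):
    """15 stages: a 33x33 conv head (L_1), thirteen 3x3 stages (L_2..L_14),
    and a final 3x3 score layer (L_15) without BN / leaky ReLU."""
    def __init__(self, n_layers=8, n_classes=2):
        super().__init__()
        stages = [conv_bn_lrelu(1, 16, 33)]
        stages += [conv_bn_lrelu(16, 16, 3) for _ in range(13)]
        self.features = nn.Sequential(*stages)
        self.score = nn.Conv2d(16, n_layers * n_classes, kernel_size=3, padding=1)

    def forward(self, x):                      # x: N x 1 x 32 x 32
        return self.score(self.features(x))   # scores: N x 16 x 32 x 32

Instantiating RouteFCN() and feeding an N × 1 × 32 × 32 tensor reproduces the N × 16 × 32 × 32 score volume described next.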
Thus the activations after each stage are of shape N × 16 × 32 × 32, or more generally N × (α.β) × H × W, where α is the number of layout layers to be learned, and β is the number of segmentation classes (two in our case). Activations from the last FCN stage (scores) are passed through the score comparator to pick the class with the higher score. Thus scores of shape N × (α.β) × H × W are reduced to layout maps of shape N ×α× H × W with each pixel being either `1' or `0', indicating the presence or absence of a layer at that spatial location. For visualization of the routes, we decode the score comparator outputs to 8-bit RGB.

§.§ Loss Function To implement the network objective as a layer-wise binary segmentation task wherein every pixel in every layer is classified into one of the two classes (y=0 or y=1), we perform simple but critical reshape operations as follows. At train time, scores from the FCN (tensor) of shape N × (α.β) × H × W are first reshaped to a matrix of shape N.H.W.α×β. The corresponding labels (tensor) of shape N ×α× H × W are reshaped to a vector of length N.H.W.α. We then use averaged cross entropy loss (combination of negative log likelihood and softmax) over the predicted score matrix and label vector to train our model (see Eq. <ref>).

L = 1/(N.H.W.α) ∑_N.H.W.α k_y {-log e^S_y/∑_β e^S_β} + λ∑_n∑_α×β∑ w^2_n_α×β

Class imbalance. Since a majority of pixels in the input labels are background (y=0) with very few active pixels (y=1), we observe that the network quickly learns to classify all pixels as background and struggles to learn further. To mitigate such sparse learning difficulties due to class imbalance, we use weighted cross entropy loss with weights k_{y=1} = 3 and k_{y=0} = 1.

Regularization. An L_2 regularization term was added to the loss to improve generalization. We sum over squared weights of 16 convolution filters (∑_α×β) from each stage (∑_n) and use a regularization coefficient λ = 1×10^-5.

§.§ Training and optimization We train the model from scratch, starting with default weight initialization. We use Adam <cit.> with β_1=0.9, β_2=0.999, and ϵ=1×10^-8 to train the weights of the FCN components. We experiment with two different mini-batch sizes, viz. N=10 and N=100, with learning rates of 5×10^-5 and 5×10^-4 respectively (see Section <ref>). A mini-batch of 10 runs in approximately 90ms (∼7.5 min per epoch) and a mini-batch of 100 takes about 600ms (∼5 min per epoch) on a Tesla K80 GPU. We use PyTorch <cit.> to train and implement the network.

§ EXPERIMENTS F_1 score as accuracy metric. Given the nature of our training data, which is massively unbalanced towards one class (y=0), a raw metric comparing pixels of predicted and actual (ground truth) layouts will easily present an unreasonably high pixel-wise accuracy (∼98% in our case) even if the model incorrectly predicts all pixels to be background. We instead use the F_1 score, the equally weighted harmonic mean of precision and recall computed from the confusion matrix. Precision is a measure of true positives among all pixels predicted positive. Recall is a measure of true positives among all pixels ground truth positive.

Network depth and receptive field. We choose the FCN depth so as to have the overall receptive field of the network cover the entire input image (32×32 pixels). Assuming all stages used 3×3 unit-strided convolutions, the receptive field after n_stages stages would be 3 + 2 × (n_stages-1), so we would need at least n_stages = 15 to approach the full 32-pixel extent and allow the network to learn reasonably well.
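As a quick check of this arithmetic, the receptive field of a stack of unit-stride convolutions is r = 1 + Σ_i (k_i − 1); the two-line helper below (ours, not from the paper) makes the two relevant cases explicit.

def receptive_field(kernel_sizes):
    """Receptive field of stacked unit-stride convolutions: r = 1 + sum(k - 1)."""
    return 1 + sum(k - 1 for k in kernel_sizes)

print(receptive_field([3] * 15))         # 31: fifteen 3x3 stages nearly span 32 pixels
print(receptive_field([33] + [3] * 14))  # 61: with the 33x33 head, the input is fully covered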
We observe that reducing the FCN depth below 15 makes it difficult for the model to perfectly overfit (F_1 = 100%) even a tiny dataset of 4 train samples. Note that our model uses 33×33 convolutions only in the first stage (see Figure <ref>).Significance of F=33 in the first FCN stage. As discussed in Section <ref>, we use 33×33 convolutional filters in the first FCN stage as we believe this large receptive field at the head of the network helps the model quickly learn the correspondence between spatial spread of pins in the input and crucial route decisions such as wire class combinations, track usage, and branch-leg assignment. To demonstrate the significance of this architecture, we compare two models differing only in their first FCN stages, viz. (i) F=3, and (ii) F=33. Rest of the model is unchanged. For this experiment we train the two models on a subset of our dataset (1000 train and 200 validation samples), using mini-batches of 10, learning rate of 5×10^-5 and regularization strength of 1×10^-5. From Figure <ref> we see that learning stagnates around F_1=35% after 200 epochs with F=3, whereas the model is able to quickly overfit the small dataset fairly well (F_1=85%) with F=33. Interestingly, the increasing gap between training and validation accuracies is due to the small dataset size used for this experiment, causing the model to overfit to it.Final training with different mini-batches. Taking inspiration from <cit.>, we conduct the final training using two different mini-batch sizes, viz. N=10 and N=100. The linear scaling rule <cit.> suggests adjusting the learning rate linearly as a function of mini-batch size. This makes intuitive sense because we make fewer iterative steps per epoch with a larger mini-batch, hence the step size (learning rate) needs to be proportionally larger. We set the learning rates to 5×10^-5 and 5×10^-4 for mini-batches of 10 and 100 respectively, which worked reasonably well in our hyperparameter tuning experiments. To overcome overfitting seen in Figure <ref>, we now use our complete dataset of 50,000 train and 10,000 validation samples for training. Figure <ref> shows the loss and accuracy curves (on both train and validation sets) for the two mini-batches. The overall trends look comparable. With mini-batch of 10 (left plot), the model achieves accuracies F_1≈90% and F_1≈82% on the train and validation sets after 200 epochs, and takes about 45 epochs to reach validation accuracy of 80%. In contrast, with mini-batch of 100 (right plot) the model achieves F_1≈97% and F_1≈92% on the train and validation sets after 200 epochs, and takes only 20 epochs to reach validation accuracy of 80%. Total train time significantly improved from 25 hours (∼7.5 min per epoch) with mini-batch of 10 to 16 hours (∼5 min per epoch) with mini-batch of 100, on a Tesla K80 GPU. The curves show good generalization of the model on the validation set with only a small gap (<5%) between training and validation accuracies. Also included are the precision and recall curves for the second case.§ RESULTSTo gain insight into the learning process, we show a routed net example from the validation set with two pins (see Figure <ref>). The left image shows the actual routed layout (ground truth), the center and right images show the predicted model outputs after 141 and 151 epochs of training respectively. We first notice that the model grasps orthogonality of adjacent metal layers, assigning M_3 (green) to vertical tracks and M_4 (red) to horizontal tracks only. 
Second, the model learns to connect different wire classes using vias at intersections. Third, the model learns to assign a branch to the dominant direction and legs to the non-dominant direction, however since the pins are roughly evenly spaced in either direction, the model attempts a vertical branch and horizontal legs. Eventually (after 10 epochs) it learns a more optimal way by using a horizontal branch and vertical legs to route, thus matching our ground truth expectation. Figure <ref> presents some randomly sampled examples from the training and validation sets, routed by our model after 191 epochs of training. As seen from the actual (ground truth) and predicted layouts, the network does well in (i) learning the identity mapping for pins, (ii) assigning vias to connect metals in adjacent layers, (iii) identifying the correct wire class combination from the overall pin configuration, and (iv) choosing the optimal track positions for branch and legs. It uses lower metals to route pins that are closer, and higher metals for widespread pins. In some cases, however, the routing is not perfect and the model misses connections or adds routes at undesired locations. We typically notice a higher error rate when layouts have more pins. This could likely be improved if we increase the ratio of training samples in the dataset containing more pins. While there is room for further improvement, the FCN model shows good overall ability to learn the layout design rules intrinsic to the dataset used for training.

Future work. Due to the complexity involved in routing real IC layouts, there are several requirements that need to be addressed. A few worth mentioning are (1) routing multiple nets in the presence of previously occupied tracks, (2) using a finer grid of valid metal tracks customized per wire class, (3) training on more complex route configurations such as trunk-branch-leg, (4) adding driver / receiver awareness to pins, (5) adding dedicated pin layers for each wire class, (6) supporting routing of a bigger layout in segments using a sliding context window, (7) integrating timing models for timing-driven routing, (8) training on industry standard layouts after converting to our layer-encoded binary standard. Some of these requirements may be implemented by direct scaling of the model and/or training with dense datasets containing more design rules of interest. However, it is plausible that other requirements may warrant more sophisticated architectures, possibly combining several neural models to construct a complex neural system to route ICs.

§ CONCLUSION Inspired by the challenges facing circuit layout routing and optimization, and the recent advances in the field of convolutional neural networks, we introduced a unique approach to routing ICs using deep, fully convolutional networks. To explore learnability of layout design rules by our FCN model, we created our own dataset based on pre-defined layout constraints. We then implemented an encoding scheme to efficiently represent inputs to the model. The proposed FCN architecture efficiently learns to route a single net under set design constraints. Our model achieves good performance with training accuracy of F_1≈97% and validation accuracy of F_1≈92% within 200 epochs.

§ ACKNOWLEDGEMENTS We thank Nishith Khandwala and Wenbin Huang for helpful comments and discussion. We gratefully acknowledge CS231N staff (Stanford) and Google Cloud for the educational GPU credits used towards this work.
http://arxiv.org/abs/1706.08948v2
{ "authors": [ "Sambhav R. Jain", "Kye Okabe" ], "categories": [ "cs.CV", "cs.AI", "cs.LG" ], "primary_category": "cs.CV", "published": "20170627172021", "title": "Training a Fully Convolutional Neural Network to Route Integrated Circuits" }
Diphoton production in association with two bottom jets

Daniel Faeh and Nicolas Greiner

=========================================================================================================

§ INTRODUCTION The boson discovered at the LHC <cit.> seems to be in very good agreement with the prescription of a Standard Model like Higgs boson. In the Standard Model the Higgs mass is the only free parameter in the theory and its precise determination was one of the main experimental targets <cit.>. Furthermore, the Standard Model predicts the shape of the Higgs potential, so a measurement of the parameters of the potential will allow us to discriminate a Standard Model Higgs boson from various BSM scenarios. This however requires the measurement of the Higgs self coupling, which can be measured in Higgs boson pair production processes. The value of the Higgs mass allows for measurements in a variety of decay channels, and both ATLAS and CMS have performed studies to measure the Higgs self coupling, e.g. in the decay channels γγ bb̅ <cit.>, bb̅bb̅ <cit.>, γγ W W^*, bb̅W W^*, τ^+τ^-bb̅ <cit.>. From a Standard Model calculational point of view, the signal process (i.e. the production of a Higgs boson pair) is known at leading order in the full theory <cit.>, and in various approximations taking higher order corrections into account <cit.>. Only very recently the NLO result taking full top mass dependence into account became available <cit.>.

In this paper we focus on one possible decay channel, namely where one Higgs decays into a pair of photons, whereas the second decays into a pair of b-quarks. This process can be seen as a compromise between a four b-quark signal and a four photon signature. The first would benefit from a large H→ b b̅ branching ratio but suffers from a large irreducible QCD background, whereas the latter exhibits a very clean signal with four photons, but suffers from a very small H→γγ branching ratio. In the case of massless b-quarks the process γγ b b̅ can be seen as a subset of the process γγ jj which is known at NLO in QCD <cit.>. The main motivation for the general two jet process was however more to assess the background of a single Higgs in VBF production rather than focusing on final state b-quarks. As we will see, the tagging of two final state b-jets significantly alters the behavior of the higher order corrections and therefore this process cannot be directly compared to the general two jet process.

The paper is organized as follows. In section <ref> we describe the setup that has been used to obtain the numerical results which we discuss in section <ref>. Finally we conclude in section <ref>.

§ CALCULATIONAL SETUP The NLO corrections are calculated by combining the two automated programs GoSam <cit.>, for the generation and evaluation of the virtual one-loop amplitudes, and the Monte Carlo event generator Sherpa <cit.>. The combination between the two is realized using the standardized Binoth Les Houches Accord <cit.>. GoSam is based on an algebraic approach where d-dimensional integrands are generated using Feynman diagrams. It uses QGraf <cit.> and Form <cit.> for the diagram generation, and Spinney <cit.> and Form to write an optimized Fortran output. For the reduction of the tensor integrals we used Ninja <cit.>, which carries out the reduction at the integrand level in a fully automated way via Laurent expansion.
Alternatively one can choose other reduction strategies such as the OPP reduction method <cit.>, which is implemented in Samurai <cit.>, or methods based on tensor integral reduction as implemented in Golem95 <cit.>. For the evaluation of the remaining scalar integrals we have used OneLoop <cit.>. The evaluation of all tree-level like matrix elements within Sherpa has been performed using Comix <cit.>, and the subtraction terms have been calculated with Sherpa's implementation of the Catani-Seymour dipole formalism <cit.>.

§ NUMERICAL RESULTS In the following we present numerical results for the LHC with a center of mass energy of √(s)=13 TeV. To assess b-mass effects the calculation has been carried out with both massless b-quarks in the 5 flavor scheme as well as with massive b-quarks in the 4 flavor scheme.

§.§ Cuts and parameter settings For the massless case we have used the CT10nlo pdf set <cit.> and the CT10nlo_nf4 set for the massive case respectively. In the massive case the b-mass has been set to 4.7 GeV. Renormalization and factorization scales are set to be equal and the central scale was chosen to be μ_R = μ_F = 1/2√(m^2_γγ +( ∑_i p_T,i)^2), where the sum runs over the final state partons. As this process contains external photons, the electroweak coupling constant α is set to α = 1.0/137.03599976. We have included top-quark loops in the virtual corrections with a top mass of m_t = 171.2 GeV.

The presence of final state photons requires the application of a photon isolation criterion to render the NLO corrections finite. We employed a smooth cone isolation criterion <cit.> with the following parameters: R=0.4, ϵ=0.05, n=1. Additionally the isolated photons are required to fulfill p_T,γ > 30 GeV, |η_γ|<2.5. The QCD partons are clustered with an anti-k_T algorithm <cit.> contained in the Fastjet package <cit.>. The jet radius has been set to R=0.4 and events where both b-quarks are clustered into a single jet are rejected in order to ensure that there are at least two b-jets present in the final state. For the jets we require p_T,j > 20 GeV, |y_j|<4.4.

§.§ Cross sections and differential distributions We start the discussion of the numerical results with the case of a massless b-quark. We assess the theoretical uncertainty by the usual scale variation of a factor of two around the central scale. Based on the cuts and settings described above we find for the total cross section σ_LO = 38.6^+22%_-17% fb, σ_NLO = 56.2^+20%_-15% fb. From Eq. <ref> one can see that the NLO corrections enhance the total cross section by almost fifty per cent. It also shows that the theoretical uncertainty does not improve at NLO; instead, for both LO and NLO one obtains an uncertainty of 15-20% in each direction when varying the scale by a factor of two. This situation is shown more explicitly in Fig. <ref> where we show the cross section as a function of the scale for a broader range. Looking at the curve for the inclusive NLO result one sees a Born-like behavior even at NLO. In particular, no reduction of the scale dependence is obtained throughout the whole range of scales. The typical turnover that one expects at NLO is not present. A special feature of this process is that the leading order process is mediated by two types of initial state, the qq̅- and the gg-channel. At NLO, however, also the quark-gluon channel is opening up in the real emission. In order to investigate whether it is this channel that is responsible for the tree-level like behavior, we made two different checks, described after the following aside.
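As an aside, the smooth cone isolation used in the cuts above can be written as a small standalone function. The following is our illustrative sketch (not code from GoSam or Sherpa), assuming massless partons so that E_T ≈ p_T and taking (p_T, η, φ) tuples as input.

import math

def frixione_isolated(photon, partons, R=0.4, eps=0.05, n=1):
    """Smooth-cone photon isolation: for every inner radius r <= R, the summed
    parton pT inside the cone must stay below
    eps * pT_gamma * ((1 - cos r) / (1 - cos R))**n."""
    pt_g, eta_g, phi_g = photon
    def delta_r(eta, phi):
        dphi = abs(phi - phi_g) % (2 * math.pi)
        dphi = min(dphi, 2 * math.pi - dphi)
        return math.hypot(eta - eta_g, dphi)
    inside = sorted((delta_r(eta, phi), pt) for (pt, eta, phi) in partons
                    if delta_r(eta, phi) <= R)
    et_sum = 0.0
    for r, pt in inside:  # the cumulative sum only changes at parton radii
        et_sum += pt
        if et_sum > eps * pt_g * ((1 - math.cos(r)) / (1 - math.cos(R))) ** n:
            return False
    return True

Because the allowed hadronic energy vanishes smoothly as r → 0, collinear parton-photon configurations are vetoed without the need for a fragmentation contribution. We now return to the two checks.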
First, we completely remove the quark-gluon channel from the process (green curve). This has a tremendous impact on the NLO result, rendering the corrections negative over the whole range, and the absolute value increases when going to smaller scales, which even leads to unphysical negative cross sections for scales smaller than the central scale. Removing a production channel is of course not a physically meaningful procedure, but it shows that this channel is indeed responsible for the behavior of the inclusive NLO cross section. A physically well defined strategy, however, is to impose a jet veto on a possible third jet. A jet veto effectively cuts away an intrinsically positive contribution from the real emission and will therefore lead to a decrease of the NLO result. The exclusive NLO result is given by the turquoise curve. Interestingly, the two approaches lead to very similar results. Even though vetoing a jet is a well defined procedure, it leads to negative cross sections for scales smaller than the central scale. This indicates that the central scale could be chosen to be larger, although it has been proven to be a good choice for the general diphoton plus two jets process <cit.>. It is clear that imposing a jet veto raises the question to what extent possible resummation effects can change the result and the associated theoretical uncertainty. This is however beyond the scope of this paper. The results show that this process is highly sensitive to an additional jet veto and that the scale variation might therefore not be an accurate measure of the theoretical uncertainty. The inclusive NLO result seems however suitable as a conservative estimate.

We now turn to the discussion of the differential distributions. Fig. <ref> shows the p_T distribution of the two leading b-jets, where the jets are p_T-ordered. For both jets the NLO corrections for low values of p_T are relatively small, which means that the NLO result agrees with the leading order result within the systematic uncertainty. Also the size of the NLO uncertainty is reduced compared to the leading order uncertainty. Going higher in p_T however very rapidly increases the NLO corrections, and from the order of 100 GeV on the differential k-factors are in the range of 2-2.5. Also the size of the NLO uncertainty band increases, and for values beyond ∼ 100 GeV the uncertainties at NLO are roughly twice as big as the LO ones. For the transverse momentum distribution of the photons shown in Fig. <ref> the behavior is less pronounced than for the jets. For the leading photon the corrections are smallest for low values of p_T and rise almost linearly with increasing transverse momentum, leading to a k-factor of almost two for values around 500 GeV. Similar to the jet distributions there is basically no overlap between the uncertainty bands. The subleading photon shows a milder behavior compared to the leading photon. Although the uncertainty bands also hardly overlap, the differential k-factor is flat to a good approximation. For both photons one sees that the size of the NLO uncertainty is roughly of the same size as the LO uncertainties, whereas for the jets the NLO uncertainties were larger except for small values of p_T.

As this process constitutes a background to double Higgs production, the invariant mass distributions are also essential. In the upper row of Fig. <ref> we show the invariant masses of the two leading b-jets and of the two photons. In both cases one observes a significant shape distortion by the NLO corrections.
They exhibit large corrections at low values followed by a minimum in the range of 60-80 GeV. In the case of the jets the NLO corrections then increase roughly linearly again, leading to substantial corrections for invariant masses beyond, say, 200 GeV. For the photons this behavior is mitigated and the differential k-factor is flat to a good approximation in the mass range beyond 200 GeV. It is worth noting that in the range around the Higgs mass the corrections are rather mild and one still finds an overlap between the uncertainty bands. In particular, the NLO behavior for low invariant masses allows one to reduce the NLO corrections by imposing an appropriate cut around the Higgs mass. The plot in the lower row of Fig. <ref> shows the total invariant mass of the final state, where the sum runs over the two photons and the jets. There we see a drastic change in the shape of the distribution when going from leading order to next-to-leading order. At low invariant masses the NLO corrections are negative and substantial, but they then increase linearly and lead to substantial positive corrections in the region above ∼400 GeV. With the additional quark-gluon channel in the real radiation it is not surprising that the kinematics of the process changes compared to the leading order behavior, and this observable, being very inclusive in the final state, probes the underlying kinematics of the process.

One can also expect differences between signal and background in various angular distributions, as in the case of the signal the b-jets and the photons stem from the decay of a spin-0 particle, whereas for the background processes the angular correlations are different. Fig. <ref> shows the R-separation between the two leading jets and the two photons, respectively. Both distributions exhibit large corrections for small values of the separation, with a minimum around π. A similar behavior is also found for the azimuthal angle, which is shown in Fig. <ref> for the leading jets (l.h.s.) and the two photons (r.h.s.). Also there one finds, for both the jets and the photons, the largest contributions for small angles followed by a constant decrease. For Δϕ≈π the NLO result agrees with the LO result within the theoretical uncertainty.

§.§ Massive b-quarks In this section we scrutinize the validity of treating the b-quark as a massless particle. For this purpose we set the mass of the b-quark to its pole mass of 4.7 GeV. For a consistent treatment we employ the four flavor scheme and use the CT10nlo_nf4 pdf set. Table <ref> shows the total cross sections for the central scale at LO and NLO for massive quarks in direct comparison to the massless results. The massive LO result is reduced by ∼10%; at NLO the massive result is ∼16% smaller than the massless result. This reduces the k-factor by ∼7%. At first it may seem unreasonable that the introduction of the b-mass has such an influence on the result, given that the mass is relatively small compared to all other scales in this process. However, one should keep in mind that there are several effects that need to be taken into account. The biggest effect certainly comes from the change of the pdf set that comes along with a lower value of α_s: α_s(M_Z) is ∼4.5% smaller in the massive case. This effect is the driving force in the reduction of the cross section. In addition, for this process the subprocesses with initial state b-quarks are enhanced due to t-channel like diagrams with the b-quark line going from initial to final state.
These types of diagrams yield a large contribution that enhances the importance of initial state b-quarks compared to the other sea-quark contributions. This effect has also been observed in the context of multiple b-quark production <cit.>, and also there the overall effect has been found to be large <cit.>. From comparing LO results within the 4 flavor scheme for the massless and the massive case, we estimate the pure mass effect to contribute ∼40% of the reduction of the cross section.

It is now also important to investigate if and how much the massive b-quark affects differential distributions. For simplicity we present the massive results only for the central scale. The focus here is on the change of the shape caused by the mass effects, and we assume that the theoretical uncertainty will be of a very similar size as for the massless case. In Fig. <ref> we show the transverse momentum distribution for the two leading jets. The upper ratio plot shows the ratio of the massive LO contribution over the massless result, the lower ratio plot shows the same for the NLO result. For comparison we also show the scale uncertainty for the massless case. The mass effects are dominated by the general decrease of the cross section in the massive case, but the differential k-factor is flat to a quite good approximation and the central scale of the massive result is still within the uncertainty band of the massless result, except for the first bin where the uncertainty band becomes smaller. One can therefore conclude that the uncertainty from setting the mass to a non-zero value is contained within the systematic uncertainty from scale variation.

The transverse momentum distribution of the two photons shows exactly the same behavior, as can be seen in Fig. <ref>. Also here the differential k-factor is flat to a good approximation and the massive result is still in agreement within the uncertainty of the massless result. The same is also true for the invariant masses of the two leading jets and the two photons, as can be seen in Fig. <ref>. Also here the massive result can be incorporated in the systematic uncertainty of the massless calculation. For the total invariant mass shown in the lower row of Fig. <ref> the situation is a bit more special. Also here the mass leads to a flat shift downwards, but the ratio plot shows that for the NLO result the error band becomes very small in the region between 250-300 GeV. The reason for this behavior is that the upper and the lower scale cross the central scale in that region, which makes the scale uncertainty vanish and leaves the massive result outside the estimated uncertainty. This might also be interpreted to mean that for this particular observable our scale choice is not suitable and does not give a reliable estimation of the underlying uncertainties. In general one would expect that if the introduction of a massive b-quark leads to a shape distortion compared to the massless case, then this should preferably show up in distributions that separate regions of low and high energy / transverse momentum, such that there are regions where the b-mass becomes large compared to the other scales in the process. Distributions like the transverse momenta of the b-jets or the invariant mass of the dijet system seem to be the ideal candidates.
However, as we have seen above, even in these distributions we do not observe a significant shape distortion, and the effects of the b-mass are essentially reduced to a global shift induced by the different value of α_s. It is therefore not surprising that also in angular distributions we do not observe a different pattern. We exemplify this by showing the angular separation between the two leading jets and the two photons in Fig. <ref>. As for the transverse momentum distribution of the jets, one observes a small effect in the first bin where the massive result is slightly below the uncertainty band of the massless result, but also here the differential k-factor is flat over the whole range. For the separation between the photons the situation is very similar, with the massive result being at the lower end of the uncertainty band with an otherwise flat k-factor.

In summary, the inclusion of the b-mass has a substantial effect on the total cross section and on differential distributions. However, it leads only to a global shift towards smaller values, largely caused by the 4 flavor pdf set and the smaller value for α_s. It does not lead to significant distortions of the shapes of the differential distributions. A shift via a global k-factor would therefore be able to accurately describe the mass effect.

§ CONCLUSIONS The measurement of the triple Higgs coupling is an essential ingredient to completely determine the structure of the Higgs potential and to answer the question whether the Higgs boson is in agreement with the prediction from the Standard Model. The production of two Higgs bosons via gluon fusion yields the biggest contribution that includes the triple Higgs vertex. In this paper we investigated the background of one of the most important decay channels, where one Higgs would decay into a bb̅ pair and the other Higgs would decay into a pair of photons. We calculated the O(α_s^2 α^2) contribution at next-to-leading order in QCD in the fully automated Sherpa + GoSam setup. We found large corrections due to new partonic channels opening up in the real emission contribution, leading to a tree-level like behavior of the cross section under variation of renormalization and factorization scales. The inclusion of NLO effects is therefore necessary for a reliable theoretical prediction. We also assessed the impact of a massive bottom quark. In a consistent treatment the inclusion of the mass comes along with a 4 flavor scheme pdf set and therefore also the removal of subprocesses with initial state b-quarks. Altogether we found a significant reduction of the cross section, which however is largely caused by the pdf set and the smaller value of α_s. The actual mass only plays a minor role. The massive result is still contained within the systematic uncertainty of the massless one and the shapes of the differential distributions are unchanged to a good approximation. This means that the mass effects can effectively be described by applying a global k-factor to the massless results.

§ ACKNOWLEDGEMENTS We would like to thank Marek Schoenherr for his help with Sherpa and Thomas Gehrmann and Gudrun Heinrich for various useful discussions. NG was supported by the Swiss National Science Foundation under contract PZ00P2_154829. Aad:2012tfa ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B716 (2012) 1–29, [http://arxiv.org/abs/1207.7214arXiv:1207.7214]. Chatrchyan:2012ufa CMS Collaboration, S.
Chatrchyan et al., Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC,Phys.Lett. B716 (2012) 30–61, [http://arxiv.org/abs/1207.7235arXiv:1207.7235].Aad:2015zhl ATLAS, CMS Collaboration, G. Aad et al., Combined Measurement of the Higgs Boson Mass in pp Collisions at √(s)=7 and 8 TeV with the ATLAS and CMS Experiments,Phys. Rev. Lett. 114 (2015) 191803, [http://arxiv.org/abs/1503.07589arXiv:1503.07589]. ATLAS-CONF-2016-004 ATLAS Collaboration, Search for Higgs boson pair production in the bb̅γγ final state using pp collision data at √(s)=13 TeV with the ATLAS detector, ATLAS-CONF-2016-004, CERN, Geneva, Mar, 2016.Khachatryan:2016sey CMS Collaboration, V. Khachatryan et al., Search for two Higgs bosons in final states containing two photons and two bottom quarks, http://arxiv.org/abs/1603.06896arXiv:1603.06896.Aad:2015xja ATLAS Collaboration, G. Aad et al., Searches for Higgs boson pair production in the hh→ bbττ, γγ WW^*, γγ bb, bbbb channels with the ATLAS detector,Phys. Rev. D92 (2015) 092004, [http://arxiv.org/abs/1509.04670arXiv:1509.04670].Aad:2014yja ATLAS Collaboration, G. Aad et al., Search For Higgs Boson Pair Production in the γγ bb̅ Final State using pp Collision Data at √(s)=8 TeV from the ATLAS Detector,Phys. Rev. Lett. 114 (2015), no. 8 081802, [http://arxiv.org/abs/1406.5053arXiv:1406.5053].Aaboud:2016xco ATLAS Collaboration, M. Aaboud et al., Search for pair production of Higgs bosons in the bb̅bb̅ final state using proton–proton collisions at √(s) = 13 TeV with the ATLAS detector, http://arxiv.org/abs/1606.04782arXiv:1606.04782.CMS:2016tlj CMS Collaboration, Search for resonant pair production of Higgs bosons decaying to two bottom quark-antiquark pairs in proton-proton collisions at 13 TeV, CMS-PAS-HIG-16-002, 2016.Khachatryan:2015yea CMS Collaboration, V. Khachatryan et al., Search for resonant pair production of Higgs bosons decaying to two bottom quark antiquark pairs in proton–proton collisions at 8 TeV,Phys. Lett. B749 (2015) 560–582, [http://arxiv.org/abs/1503.04114arXiv:1503.04114].Aad:2015uka ATLAS Collaboration, G. Aad et al., Search for Higgs boson pair production in the bb̅bb̅ final state from pp collisions at √(s) = 8 TeVwith the ATLAS detector,Eur. Phys. J. C75 (2015), no. 9 412, [http://arxiv.org/abs/1506.00285arXiv:1506.00285].ATLAS:2016qmt ATLAS Collaboration, Search for Higgs boson pair production in the final state of γγ WW^*(→ lν jj) using 13.3 fb^-1 of pp collision data recorded at √(s)= 13 TeV with the ATLAS detector, ATLAS-CONF-2016-071, 2016.CMS:2016cdj CMS Collaboration, Search for Higgs boson pair production in the bblνlν final state at √(s) = 13 TeV, CMS-PAS-HIG-16-024, 2016.CMS:2016ymn CMS Collaboration, Search for non-resonant Higgs boson pair production in the bbtautau final state using 2016 data, CMS-PAS-HIG-16-028, 2016.CMS:2016rec CMS Collaboration, Search for resonant Higgs boson pair production in the bblνlν final state at √(s) = 13 TeV, CMS-PAS-HIG-16-011, 2016.CMS-PAS-HIG-16-013 CMS Collaboration, Search for resonant Higgs boson pair production in the bbτ^+τ^- final state, CMS-PAS-HIG-16-013, CERN, Geneva, 2016.CMS:2016ugf CMS Collaboration, C. Collaboration, Search for non-resonant Higgs boson pair production in the bbτ^+τ^- final state, CMS-PAS-HIG-16-012, 2016.CMS:2016zxv CMS Collaboration, Model independent search for Higgs boson pair production in the bbτ^+τ^- final state,Tech. Rep. CMS-PAS-HIG-15-013, 2016. Eboli:1987dy O. J. P. Eboli, G. C. Marques, S. F. Novaes, and A. A. 
Natale, Twin Higgs Boson Production,Phys. Lett. B197 (1987) 269.Glover:1987nx E. W. N. Glover and J. J. van der Bij, Higgs Boson Pair Production via Gluon Fusion,Nucl. Phys. B309 (1988) 282.Plehn:1996wb T. Plehn, M. Spira, and P. M. Zerwas, Pair production of neutral Higgs particles in gluon-gluon collisions,Nucl. Phys. B479 (1996) 46–64, [http://arxiv.org/abs/hep-ph/9603205hep-ph/9603205]. [Erratum: Nucl. Phys.B531,655(1998)].Dawson:1998py S. Dawson, S. Dittmaier, and M. Spira, Neutral Higgs boson pair production at hadron colliders: QCD corrections,Phys. Rev. D58 (1998) 115012, [http://arxiv.org/abs/hep-ph/9805244hep-ph/9805244].Maltoni:2014eza F. Maltoni, E. Vryonidou, and M. Zaro, Top-quark mass effects in double and triple Higgs production in gluon-gluon fusion at NLO,JHEP 11 (2014) 079, [http://arxiv.org/abs/1408.6542arXiv:1408.6542].Grigo:2013rya J. Grigo, J. Hoff, K. Melnikov, and M. Steinhauser, On the Higgs boson pair production at the LHC,Nucl. Phys. B875 (2013) 1–17, [http://arxiv.org/abs/1305.7340arXiv:1305.7340].Grigo:2014jma J. Grigo, K. Melnikov, and M. Steinhauser, Virtual corrections to Higgs boson pair production in the large top quark mass limit,Nucl. Phys. B888 (2014) 17–29, [http://arxiv.org/abs/1408.2422arXiv:1408.2422].Grigo:2015dia J. Grigo, J. Hoff, and M. Steinhauser, Higgs boson pair production: top quark mass effects at NLO and NNLO,Nucl. Phys. B900 (2015) 412, [http://arxiv.org/abs/1508.00909arXiv:1508.00909].Degrassi:2016vss G. Degrassi, P. P. Giardino, and R. Gröber, On the two-loop virtual QCD corrections to Higgs boson pair production in the Standard Model,Eur. Phys. J. C76 (2016), no. 7 411, [http://arxiv.org/abs/1603.00385arXiv:1603.00385].deFlorian:2013uza D. de Florian and J. Mazzitelli, Two-loop virtual corrections to Higgs pair production,Phys. Lett. B724 (2013) 306–309, [http://arxiv.org/abs/1305.5206arXiv:1305.5206].deFlorian:2013jea D. de Florian and J. Mazzitelli, Higgs Boson Pair Production at Next-to-Next-to-Leading Order in QCD,Phys. Rev. Lett. 111 (2013) 201801, [http://arxiv.org/abs/1309.6594arXiv:1309.6594].Shao:2013bz D. Y. Shao, C. S. Li, H. T. Li, and J. Wang, Threshold resummation effects in Higgs boson pair production at the LHC,JHEP 07 (2013) 169, [http://arxiv.org/abs/1301.1245arXiv:1301.1245].deFlorian:2015moa D. de Florian and J. Mazzitelli, Higgs pair production at next-to-next-to-leading logarithmic accuracy at the LHC,JHEP 09 (2015) 053, [http://arxiv.org/abs/1505.07122arXiv:1505.07122].deFlorian:2016uhr D. de Florian, M. Grazzini, C. Hanga, S. Kallweit, J. M. Lindert, P. Maierhöfer, J. Mazzitelli, and D. Rathlev, Differential Higgs Boson Pair Production at Next-to-Next-to-Leading Order in QCD, http://arxiv.org/abs/1606.09519arXiv:1606.09519.Borowka:2016ehy S. Borowka, N. Greiner, G. Heinrich, S. Jones, M. Kerner, J. Schlenk, U. Schubert, and T. Zirke, Higgs Boson Pair Production in Gluon Fusion at Next-to-Leading Order with Full Top-Quark Mass Dependence,Phys. Rev. Lett. 117 (2016), no. 1 012001, erratum ibid 079901, [http://arxiv.org/abs/1604.06447arXiv:1604.06447].Borowka:2016ypzS. Borowka, N. Greiner, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk and T. Zirke,JHEP 1610, 107 (2016) doi:10.1007/JHEP10(2016)107 [arXiv:1608.04798 [hep-ph]]. Gehrmann:2013bgaT. Gehrmann, N. Greiner and G. Heinrich,Phys. Rev. Lett.111, 222002 (2013) doi:10.1103/PhysRevLett.111.222002 [arXiv:1308.3660 [hep-ph]].Badger:2013avaS. Badger, A. Guffanti and V. 
Yundin,JHEP 1403, 122 (2014) doi:10.1007/JHEP03(2014)122 [arXiv:1312.5927 [hep-ph]].Bern:2014vzaZ. Bern, L. J. Dixon, F. Febres Cordero, S. Hoeche, H. Ita, D. A. Kosower, N. A. Lo Presti and D. Maitre,Phys. Rev. D 90, no. 5, 054004 (2014) doi:10.1103/PhysRevD.90.054004 [arXiv:1402.4127 [hep-ph]]. Cullen:2011ac G. Cullen, N. Greiner, G. Heinrich, G. Luisoni, P. Mastrolia, et al., Automated One-Loop Calculations with GoSam,Eur.Phys.J. C72 (2012) 1889, [http://arxiv.org/abs/1111.2034arXiv:1111.2034].Cullen:2014yla G. Cullen et al., GOSAM-2.0: a tool for automated one-loop calculations within the Standard Model and beyond,Eur. Phys. J. C74 (2014), no. 8 3001, [http://arxiv.org/abs/1404.7096arXiv:1404.7096].Nogueira:1991ex P. Nogueira, Automatic Feynman graph generation,J.Comput.Phys. 105 (1993) 279–289.Vermaseren:2000nd J. Vermaseren, New features of FORM, http://arxiv.org/abs/math-ph/0010025math-ph/0010025.Kuipers:2012rf J. Kuipers, T. Ueda, J. Vermaseren, and J. Vollinga, FORM version 4.0, Comput.Phys.Commun. 184 (2013) 1453–1467, [http://arxiv.org/abs/1203.6543arXiv:1203.6543].Gleisberg:2008taT. Gleisberg, S. Hoeche, F. Krauss, M. Schonherr, S. Schumann, F. Siegert and J. Winter,JHEP 0902, 007 (2009) doi:10.1088/1126-6708/2009/02/007 [arXiv:0811.4622 [hep-ph]].Binoth:2010xtT. Binoth et al.,Comput. Phys. Commun.181, 1612 (2010) doi:10.1016/j.cpc.2010.05.016 [arXiv:1001.1307 [hep-ph]].Alioli:2013ndaS. Alioli et al.,Comput. Phys. Commun.185, 560 (2014) doi:10.1016/j.cpc.2013.10.020 [arXiv:1308.3462 [hep-ph]].Cullen:2010jvG. Cullen, M. Koch-Janusz and T. Reiter,Comput. Phys. Commun.182, 2368 (2011) doi:10.1016/j.cpc.2011.06.007 [arXiv:1008.0803 [hep-ph]].Mastrolia:2012buP. Mastrolia, E. Mirabella and T. Peraro,JHEP 1206, 095 (2012) Erratum: [JHEP 1211, 128 (2012)] doi:10.1007/JHEP11(2012)128, 10.1007/JHEP06(2012)095 [arXiv:1203.0291 [hep-ph]].vanDeurzen:2013saaH. van Deurzen, G. Luisoni, P. Mastrolia, E. Mirabella, G. Ossola and T. Peraro,JHEP 1403, 115 (2014) doi:10.1007/JHEP03(2014)115 [arXiv:1312.6678 [hep-ph]].Peraro:2014cbaT. Peraro,Comput. Phys. Commun.185, 2771 (2014) doi:10.1016/j.cpc.2014.06.017 [arXiv:1403.1229 [hep-ph]].Ossola:2006usG. Ossola, C. G. Papadopoulos and R. Pittau,Nucl. Phys. B 763, 147 (2007) doi:10.1016/j.nuclphysb.2006.11.012 [hep-ph/0609007].Mastrolia:2008jbP. Mastrolia, G. Ossola, C. G. Papadopoulos and R. Pittau,JHEP 0806, 030 (2008) doi:10.1088/1126-6708/2008/06/030 [arXiv:0803.3964 [hep-ph]]. Ossola:2008xqG. Ossola, C. G. Papadopoulos and R. Pittau,JHEP 0805, 004 (2008) doi:10.1088/1126-6708/2008/05/004 [arXiv:0802.1876 [hep-ph]].Mastrolia:2010nbP. Mastrolia, G. Ossola, T. Reiter and F. Tramontano,JHEP 1008, 080 (2010) doi:10.1007/JHEP08(2010)080 [arXiv:1006.0710 [hep-ph]].Heinrich:2010axG. Heinrich, G. Ossola, T. Reiter and F. Tramontano,JHEP 1010, 105 (2010) doi:10.1007/JHEP10(2010)105 [arXiv:1008.2441 [hep-ph]].Binoth:2008uqT. Binoth, J.-P. Guillet, G. Heinrich, E. Pilon and T. Reiter,Comput. Phys. Commun.180, 2317 (2009) doi:10.1016/j.cpc.2009.06.024 [arXiv:0810.0992 [hep-ph]].Cullen:2011kvG. Cullen, J. P. Guillet, G. Heinrich, T. Kleinschmidt, E. Pilon, T. Reiter and M. Rodgers,Comput. Phys. Commun.182, 2276 (2011) doi:10.1016/j.cpc.2011.05.015 [arXiv:1101.5595 [hep-ph]].Guillet:2013msaJ. P. Guillet, G. Heinrich and J. F. von Soden-Fraunhofen,Comput. Phys. Commun.185, 1828 (2014) doi:10.1016/j.cpc.2014.03.009 [arXiv:1312.3887 [hep-ph]].vanHameren:2010cpA. van Hameren,Comput. Phys. 
Commun.182, 2427 (2011) doi:10.1016/j.cpc.2011.06.011 [arXiv:1007.4716 [hep-ph]].Gleisberg:2008fvT. Gleisberg and S. Hoeche,JHEP 0812, 039 (2008) doi:10.1088/1126-6708/2008/12/039 [arXiv:0808.3674 [hep-ph]].Catani:1996vzS. Catani and M. H. Seymour,Nucl. Phys. B 485, 291 (1997) Erratum: [Nucl. Phys. B 510, 503 (1998)] doi:10.1016/S0550-3213(96)00589-5, 10.1016/S0550-3213(98)81022-5 [hep-ph/9605323]. Catani:2002hcS. Catani, S. Dittmaier, M. H. Seymour and Z. Trocsanyi,Nucl. Phys. B 627, 189 (2002) doi:10.1016/S0550-3213(02)00098-6 [hep-ph/0201036].Lai:2010vvH. L. Lai, M. Guzzi, J. Huston, Z. Li, P. M. Nadolsky, J. Pumplin and C.-P. Yuan,Phys. Rev. D 82, 074024 (2010) doi:10.1103/PhysRevD.82.074024 [arXiv:1007.2241 [hep-ph]]. Frixione:1998jhS. Frixione,Phys. Lett. B 429, 369 (1998) doi:10.1016/S0370-2693(98)00454-7 [hep-ph/9801442].Cacciari:2008gpM. Cacciari, G. P. Salam and G. Soyez,JHEP 0804, 063 (2008) doi:10.1088/1126-6708/2008/04/063 [arXiv:0802.1189 [hep-ph]]. Cacciari:2011maM. Cacciari, G. P. Salam and G. Soyez,Eur. Phys. J. C 72, 1896 (2012) doi:10.1140/epjc/s10052-012-1896-2 [arXiv:1111.6097 [hep-ph]].Binoth:2009rvT. Binoth, N. Greiner, A. Guffanti, J. Reuter, J.-P. Guillet and T. Reiter,Phys. Lett. B 685, 293 (2010) doi:10.1016/j.physletb.2010.02.010 [arXiv:0910.4379 [hep-ph]].Greiner:2011mpN. Greiner, A. Guffanti, T. Reiter and J. Reuter,Phys. Rev. Lett.107, 102002 (2011) doi:10.1103/PhysRevLett.107.102002 [arXiv:1105.3624 [hep-ph]]. Bevilacqua:2013taaG. Bevilacqua, M. Czakon, M. Krämer, M. Kubocz and M. Worek,JHEP 1307, 095 (2013) doi:10.1007/JHEP07(2013)095 [arXiv:1304.6860 [hep-ph]].
http://arxiv.org/abs/1706.08309v1
{ "authors": [ "Daniel Faeh", "Nicolas Greiner" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170626101957", "title": "Diphoton production in association with two bottom jets" }
http://arxiv.org/abs/1706.08684v2
{ "authors": [ "Sylvain Crovisier", "Rafael Potrie", "Martín Sambarino" ], "categories": [ "math.DS" ], "primary_category": "math.DS", "published": "20170627063726", "title": "Finiteness of partially hyperbolic attractors with one-dimensional center" }
http://arxiv.org/abs/1706.08919v1
{ "authors": [ "Mounir Benheddi" ], "categories": [ "math.GT" ], "primary_category": "math.GT", "published": "20170627160316", "title": "An Algebra Structure for the stable Khovanov homology of torus links" }
[email protected][cor1]Corresponding author. [email protected] School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China. NURBS curve is widely used in Computer Aided Design and Computer Aided Geometric Design. When a single weight approaches infinity, the limit of a NURBS curve tends to the corresponding control point. In this paper, a kind of control structure of a NURBS curve, called regular control curve, is defined. We prove that the limit of the NURBS curve is exactly its regular control curve when all of weights approach infinity, where each weight is multiplied by a certain one-parameter function tending to infinity, different for each control point. Moreover, some representative examples are presented to show this property and indicate its application for shape deformation.NURBS curve weights regular control curve toric degenerations shape deformation [2010] 65D17 68U07 § INTRODUCTION Non-Uniform Rational B-Spline (NURBS) method, as apopularcurve and surface modeling technology, is widely used in Computer Aided Geometric Design (CAGD), Computer Aided Design (CAD), and Geometric Modeling. NURBS method is the generalization of Bézier method,B-spline method, and rational Bézier method. It converts the curve and surface fitting tool to a unified representation. The theory of NURBS method can be referred tobooksand literatures by Piegl and Tiller <cit.> and Farin et al. <cit.>. The shape modification of existing objects plays an important role in geometric design systems. Shape modification of NURBS curves and surfaces can be achieved by means of knot vectors, control points and weights. It can be easily seen, by changing not only the positions of control points, but also the values of weights, the shape of curve and surface can be modified. Piegl and Tiller <cit.> explained the geometric meaning of the NURBS curve when a single weight approaches infinity: the curve tends to the corresponding control point. This property is crucial for interactive shape design. Fig. <ref> shows the effects of increasing a single weight ω_3 of a quadric NURBS curve. Based on this property, Piegl and Tiller <cit.> presented a method to finely tune the shape of a NURBS curve when a weight is allowed to be modified.Motivated by Piegl's work, scholars investigated how the shape of a NURBS curve changes when the weights of control points are modified. Au and Yuen <cit.>, and Sánchez-Reyes <cit.> introduced approaches toadjust the shape ofNURBS curves by modifying the weights and location of the control pointssimultaneously. Juhász <cit.> presented a method to adjust the shape of a NURBS curve by modifying some weights. Not only did he study modifying the location of an arbitrary point of a NURBS curve, but also considered the tangent direction. Zhang et al. <cit.> studied the effects of the NURBS curve by modifying two weights, he also found that single position and tangent constraints could be realized by a modification of three weights. The methods abovewere presented to finely tune the shape of NURBS curves when one or some weights are allowed to be modified, and they are all based on well-known geometric property of a single weight of NURBS curve. However, seldom work focuses on identifying the meaningof a NURBS curve when allweights tend to vary (even to infinity).Toric geometry is systematically presented and developed gradually in the 1970s. Sturmfels <cit.> introduced the theory of toric varieties, toric ideals and Gröbner bases in detail. 
In 2002, Krasauskas <cit.> presented a new kind of multi-sided surface, named toric patches, based upon toric varieties from algebraic geometry and toric ideals from combinatorics. Toric patches include the classical rational Bézier curves and patches, and some multi-sided patches, as special cases. As is well known, control points and control nets govern the shape of Bézier patches, but the geometric meaning of the control points together with the edges, triangles, and quadrangles formed by them is still unknown. Carl de Boor and Ron Goldman posed a question in geometric modeling: What is the significance for modeling of such control structures (control points plus edges) <cit.>? It is worth noting that control points plus edges do not necessarily form a control polygon. In 2011, García-Puente, Sottile and Zhu <cit.> explained the geometric meaning of control structures (surfaces) of toric patches when all weights tend to infinity, using the theory of toric varieties, toric ideals and toric degenerations; this is called the toric degeneration of Bézier patches. Furthermore, Zhu <cit.> presented the toric degenerations of toric varieties and toric ideals induced by a regular decomposition. Zhu and Zhao <cit.> also gave geometric conditions on the control polygon and the set of control points which guarantee the injectivity (one-to-one property) of rational Bézier curves and surfaces.

The purpose of this paper is to explain the geometric meaning of the limit of a NURBS curve when all of its weights tend to infinity. We define the regular control curve of a NURBS curve and prove that the limit of the NURBS curve is exactly its regular control curve when all of the weights approach infinity. This generalizes the geometric meaning of a single weight of a NURBS curve to the whole collection of weights. Our result suggests possible applications to the injectivity checking of NURBS curves, which plays an important role in image warping and morphing, 3D deformation and volume morphing. Furthermore, it also provides an approach to shape modification and deformation of NURBS curves by altering many weights.

The paper is organized as follows. In Section <ref>, we recall the definition of rational Bézier curves, toric degenerations of rational Bézier curves, the definition of NURBS curves, and the knot insertion algorithm. In Section <ref>, we define the "regular control curve", a control structure of a NURBS curve induced by a regular decomposition, and prove that it is the limit of the NURBS curve. Since this property is proved via the toric degeneration of Bézier curves, we call it the toric degeneration of NURBS curves. Moreover, we observe that if a curve is the limit of a NURBS curve for some sequence of weights, then this curve must be a regular control curve induced by some regular decomposition. Some representative examples illustrating the degeneration property of NURBS curves and its application to shape deformation are given in Section <ref>. Finally, Section <ref> concludes the paper.

§ PRELIMINARIES

§.§ Rational Bézier curves

For given control points b_i∈ℝ^d (d=2,3) and weights ω_i, i=0,1,⋯,m, a rational Bézier curve of degree m is defined by

F(v)=∑_i=0^m ω_i b_i B_i^m(v)/∑_i=0^m ω_i B_i^m(v), v∈[0,1],

where B_i^m(v)=\binom{m}{i}(1-v)^{m-i}v^i are the Bernstein basis functions, and the control polygon of the curve is the union of the segments b_0b_1,⋯,b_m-1b_m.

In 2002, a new method to construct multi-sided surface patches, toric patches, was presented by Krasauskas <cit.>.
Following Krasauskas' toric patches, the toric Bézier curve can be defined. Given a finite set of lattice points 𝒜={a_0,a_1,⋯,a_m}⊂ℤ, let conv(𝒜) be the convex hull of the lattice points of 𝒜; the interval Δ_𝒜=conv(𝒜) can be described as {x∈ℝ | 0≤ l_0(x), 0≤ l_1(x)}, where l_0(x)=x-a_0, l_1(x)=a_m-x, and we assume that a_i<a_i+1, i=0,1,⋯,m-1. For the following definition we refer to <cit.>.

Given a finite set of lattice points 𝒜={a_0,a_1,⋯,a_m}⊂ℤ, control points ℬ={b_a_i | a_i∈𝒜}⊂ℝ^d, d=2,3, and weights ω={ω_a_i>0 | a_i∈𝒜}, the parametric curve

F_𝒜,ω,ℬ(x)=∑_i=0^m ω_a_i b_a_i β_a_i,𝒜(x)/∑_i=0^m ω_a_i β_a_i,𝒜(x), x∈Δ_𝒜,

is called a toric Bézier curve, where the functions β_a_i,𝒜(x)=c_a_i l_0(x)^l_0(a_i) l_1(x)^l_1(a_i) are called toric Bézier basis functions, the coefficients c_a_i are positive, and the union of the segments b_a_0b_a_1,⋯,b_a_m-1b_a_m is called the control polygon of the curve.

Write F_𝒜,ω,ℬ for the image F_𝒜,ω,ℬ(Δ_𝒜). Note that when we set 𝒜={0,1,⋯,m}, then Δ_𝒜=[0,m], l_0(x)=x and l_1(x)=m-x are the boundary equations of Δ_𝒜, and the toric Bézier basis is β_i,𝒜(x)=c_i l_0(x)^l_0(i) l_1(x)^l_1(i)=c_i x^i(m-x)^m-i for any i∈𝒜. If we substitute x=mv and choose c_i=\binom{m}{i} m^{-m}, then β_i,𝒜(x) becomes the classical Bernstein basis B_i^m(v), and the toric Bézier curve is a reparametrization of the rational Bézier curve. Since a parameter transformation does not affect the shape of the curve, the definition of the toric Bézier curve is equivalent to the definition of the rational Bézier curve. Moreover, because the finite set of lattice points 𝒜 is part of the data of a toric Bézier curve, the weights and control points of the curve can be indexed by the lattice points of 𝒜. For convenience of analysis, we will use Definition <ref> to represent a rational Bézier curve.

§.§ Toric degenerations of rational Bézier curves

In <cit.>, in order to explain the geometric meaning of control surfaces of toric patches, García-Puente, Sottile and Zhu defined the regular decomposition of a finite set of lattice points. Following this work, Zhu and Zhao <cit.> proposed the definition of a regular decomposition of 𝒜 to deal with the self-intersections of rational Bézier curves. In this section, we first recall some notation from <cit.> and <cit.>.

Let 𝒜⊂ℤ be a finite set of lattice points and suppose that λ:𝒜→ℝ is a function. We use the lifting function λ to lift all the lattice points of 𝒜 into ℝ^2. Let P_λ be the convex hull of the lifted points,

P_λ = conv{(a_i,λ(a_i)) | a_i∈𝒜}⊂ℝ^2.

Each face of P_λ has an outward pointing normal vector, and its upper edges are those whose normal vector has positive last coordinate. The union of the upper edges is the upper hull of P_λ. If we project each of these upper edges back to ℝ, then we get a set of closed intervals whose union covers Δ_𝒜. These closed intervals induce a regular domain decomposition 𝒯_λ of Δ_𝒜. Collecting all the lattice points of 𝒜 which belong to the same closed interval and whose lifted points lie on a common upper edge, we get a subset s_j of 𝒜. The union of these subsets of 𝒜 is called a regular decomposition S_λ of 𝒜 induced by λ.

Suppose that a lifting function λ induces a regular decomposition S_λ of 𝒜.
If s_j∈ S_λ is such a subset, then the weights ω|_s_j={ω_a_i | a_i∈ s_j} and the control points ℬ|_s_j={b_a_i | a_i∈ s_j} indexed by the elements of s_j define a rational Bézier curve via Definition <ref>, denoted by F_s_j,ω|_s_j,ℬ|_s_j. The regular control curve of the rational Bézier curve F_𝒜,ω,ℬ induced by the regular decomposition S_λ is the union of those curves <cit.>,

F_𝒜,ω,ℬ(S_λ):=⋃_s_j∈ S_λ F_s_j,ω|_s_j,ℬ|_s_j.

Given a finite set of integers 𝒜={a_0,⋯,a_m}⊂ℤ, control points ℬ={b_a_i | a_i∈𝒜}⊂ℝ^d, d=2,3, and weights ω={ω_a_i>0 | a_i∈𝒜}, the curve

F_𝒜,ω_λ(t),ℬ(x;t):=∑_i=0^m t^λ(a_i) ω_a_i b_a_i β_a_i,𝒜(x)/∑_i=0^m t^λ(a_i) ω_a_i β_a_i,𝒜(x), x∈Δ_𝒜,

is called a rational Bézier curve parameterized by t, where ω_λ(t):={t^λ(a_i)ω_a_i | a_i∈𝒜}. The following result then holds:

lim_t→∞ F_𝒜,ω_λ(t),ℬ = F_𝒜,ω,ℬ(S_λ).

By Theorem <ref>, if the control points ℬ are fixed and the parameter t→∞, the regular control curve induced by the lifting function λ is exactly the limit of the rational Bézier curve F_𝒜,ω_λ(t),ℬ. García-Puente et al. <cit.> proved this property, which is called the toric degeneration of a rational Bézier curve. Furthermore, they also proved the following result, which is converse to Theorem <ref>.

If F⊂ℝ^3 is a set for which there is a sequence ω^(1),ω^(2),⋯ of weights so that

lim_τ→∞ F_𝒜,ω^(τ),ℬ = F,

then there exist a lifting function λ:𝒜→ℝ and weights ω such that F=F_𝒜,ω,ℬ(S_λ) is a regular control curve.

Let 𝒜={0,1,2,3,4} and Δ_𝒜=[0,4]. For given control points ℬ={b_0,b_1,b_2,b_3,b_4} and weights ω={3,4,2,1.5,1}, the quartic rational Bézier curve F_𝒜,ω,ℬ is shown in Fig. <ref>. Suppose a lifting function λ_1 takes the values {2,3,4,3,2} at the lattice points of 𝒜 (see Fig. <ref>); it induces a regular domain decomposition of Δ_𝒜, 𝒯_λ_1={[0,2],[2,4]}, and a regular decomposition of 𝒜, S_λ_1={{0,1,2},{2,3,4}}. Another lifting function λ_2 taking the values {2,3,4,2,3} at the lattice points of 𝒜 (see Fig. <ref>) induces the same regular domain decomposition as λ_1 but a different regular decomposition, namely S_λ_2={{0,1,2},{2,4}}, since the lifted point (3,λ_2(3)) does not lie on any upper edge of P_λ_2 (see Fig. <ref>). For the subset s_1={0,1,2}∈ S_λ_2, a rational quadratic Bézier curve F_s_1,ω|_s_1,ℬ|_s_1 can be constructed from the corresponding control points ℬ|_s_1={b_0,b_1,b_2} and weights ω|_s_1={3,4,2}; the parametric domain of this curve is [0,2]. For the subset s_2={2,4} of S_λ_2, a linear Bézier curve F_s_2,ω|_s_2,ℬ|_s_2 can be constructed from the corresponding control points ℬ|_s_2={b_2,b_4} and weights ω|_s_2={2,1}; the parametric domain of this curve is [2,4]. The union F_s_1,ω|_s_1,ℬ|_s_1 ∪ F_s_2,ω|_s_2,ℬ|_s_2 of those two curves is the regular control curve of F_𝒜,ω,ℬ (as shown in Fig. <ref>). By Theorem <ref>, the limit of the rational Bézier curve F_𝒜,ω_λ(t),ℬ as the parameter t→∞ is the regular control curve shown in Fig. <ref>, where ω_λ_2(t)={3t^2,4t^3,2t^4,1.5t^2,t^3}.
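Since the regular decomposition is purely combinatorial, it is easy to compute. The following Python sketch is our own illustration, not an algorithm from the paper, and the function names are ours: it computes the upper hull of the lifted points and reads off the induced decomposition, reproducing S_λ_1 and S_λ_2 from the example above. Exact (integer or rational) lifting values are assumed, since the edge-membership test uses exact equality.

```python
def upper_hull(points):
    """Vertices of the upper convex hull of points sorted by x-coordinate."""
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def regular_decomposition(A, lam):
    """Subsets s_j of A whose lifted points lie on a common upper edge."""
    pts = sorted(zip(A, lam))
    hull = upper_hull(pts)
    S = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        S.append([a for (a, l) in pts if x1 <= a <= x2
                  and (x2 - x1) * (l - y1) == (y2 - y1) * (a - x1)])
    return S

print(regular_decomposition([0, 1, 2, 3, 4], [2, 3, 4, 3, 2]))  # [[0,1,2], [2,3,4]]
print(regular_decomposition([0, 1, 2, 3, 4], [2, 3, 4, 2, 3]))  # [[0,1,2], [2,4]]
```

Note that interior collinear lifted points are removed while building the hull and recovered by the membership test, so lattice points lying on an upper edge, like 1 in the first example, are correctly assigned to their subset.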
§.§ NURBS curves and knot insertion

A pth degree NURBS curve with control points P={P_0,P_1,⋯,P_n+p-1} and weights ω={ω_0,ω_1,⋯,ω_n+p-1} is defined by

R(u)=∑_i=0^n+p-1 ω_i P_i N_i,p(u)/∑_i=0^n+p-1 ω_i N_i,p(u), u∈[0,1],

where {N_i,p(u)} are the B-spline basis functions of degree p defined on the knot vector

U={0,⋯,0,u_1,u_2,⋯,u_n-1,1,⋯,1},

in which 0 and 1 each appear with multiplicity p+1. The union of the segments P_0P_1,⋯,P_n+p-2P_n+p-1 is called the control polygon of the curve. Obviously, the NURBS curve R(u) defined on the knot vector Eq. (<ref>) satisfies the endpoint interpolation property. The geometric meaning of a single weight of a NURBS curve can be stated as follows: within the influence range (u_α,u_β) of the control point P_i,

lim_ω_i→+∞ R(u)=P_i, ∀ u∈(u_α,u_β),

where u_α,u_β∈U and β-α=p+1. Fig. <ref> shows that the NURBS curve moves toward or away from a control point when the corresponding weight increases or decreases.

In order to study the toric degeneration of NURBS curves, we recall the conversion from NURBS curves to Bézier form by knot insertion <cit.>. Without loss of generality, in the rest of the paper we assume that each interior knot u_i of U for a NURBS curve R(u) of degree p defined as in Eq. (<ref>) has multiplicity 1 and that u_i<u_i+1 (i=1,2,⋯,n-2). For the NURBS curve R(u), we can insert each existing interior knot p-1 times, so that all interior knots u_i (i=1,2,⋯,n-1) attain multiplicity p. Note that the result of knot insertion is independent of the order in which the knots are inserted. Each time an interior knot is inserted, the weights and control points of the NURBS curve are recomputed. When (n-1)(p-1) interior knots have been inserted, the weights

ω^(n-1)(p-1)={ω^(n-1)(p-1)_0,ω^(n-1)(p-1)_1,⋯,ω^(n-1)(p-1)_np}

and control points

P^(n-1)(p-1)={P^(n-1)(p-1)_0,P^(n-1)(p-1)_1,⋯,P^(n-1)(p-1)_np}

of the NURBS curve R(u) are generated, which are defined on the new knot vector

U^(n-1)(p-1)={0,⋯,0,u_1,⋯,u_1,⋯,u_n-1,⋯,u_n-1,1,⋯,1},

where 0 and 1 appear with multiplicity p+1 and each interior knot with multiplicity p. Let u_0=0, u_n=1. By the parameter transformation v=(u-u_i)/(u_i+1-u_i), the NURBS curve R(u) is a pth degree rational Bézier curve F^m(v) on every interval [u_m-1,u_m], m=1,2,⋯,n, with the weights

{ω^(n-1)(p-1)_0,⋯,ω^(n-1)(p-1)_p}, {ω^(n-1)(p-1)_p,⋯,ω^(n-1)(p-1)_2p}, ⋯, {ω^(n-1)(p-1)_(n-1)p,⋯,ω^(n-1)(p-1)_np},

and control points

{P^(n-1)(p-1)_0,⋯,P^(n-1)(p-1)_p}, {P^(n-1)(p-1)_p,⋯,P^(n-1)(p-1)_2p}, ⋯, {P^(n-1)(p-1)_(n-1)p,⋯,P^(n-1)(p-1)_np}.

Then the NURBS curve R(u) is transformed into the union of those n pieces of rational Bézier curves. The representations of the weights in Eq. (<ref>) and the control points in Eq. (<ref>) in terms of the original weights and control points are discussed in Theorem <ref>, which can be proved step by step via knot insertion <cit.>; we omit the proof here.

Let R(u) be the NURBS curve of degree p defined in Eq. (<ref>) with control points ℬ={P_0^0,P_1^0,⋯,P_n+p-1^0} and weights ω={ω_0^0,ω_1^0,⋯,ω_n+p-1^0}. After inserting knots so that all interior knots u_i (i=1,2,⋯,n-1) in the knot vector U have multiplicity p, the generated weights ω^(n-1)(p-1) in Eq. (<ref>), denoted by ω̅, and the generated control points P^(n-1)(p-1) in Eq. (<ref>), denoted by ℬ̅, satisfy a certain relationship with the original weights ω and control points ℬ, as shown in Fig. <ref>, where the coefficients f_a;b^i;j (i=a,a+1,⋯,b) depend on ω^0_i, P^0_i, are computed via knot insertion, and satisfy ∑_i=a^b f_a;b^i;j=1 (a≤b) in every element of ω̅ and ℬ̅.

§ TORIC DEGENERATIONS OF NURBS CURVES

This paper focuses on what happens when all of the weights of a NURBS curve assume extreme values. Since the curve is pulled towards the corresponding control point when a single weight increases, it is natural to ask whether the NURBS curve approaches all of its control points simultaneously when all of the weights approach infinity.
In this section, by defining a kind of control structure of a NURBS curve, we establish the toric degeneration of NURBS curves by means of the toric degeneration of rational Bézier curves, and show that the NURBS curve approaches this control structure when all of the weights approach infinity.

For convenience, in the rest of the paper we will use the following representation of a NURBS curve R(u), which is analogous to Definition <ref> and equivalent to Definition <ref>. Given a finite set of lattice points 𝒜={0,1,⋯,n+p-1}⊂ℤ, Δ_𝒜=conv(𝒜)=[0,n+p-1], control points ℬ={P_i^0 | i∈𝒜}⊂ℝ^d, d=2,3, and weights ω={ω_i^0>0 | i∈𝒜}, the parametric curve

R_𝒜,ω,ℬ(u):=∑_i=0^n+p-1 ω_i^0 P_i^0 N_i,p(u)/∑_i=0^n+p-1 ω_i^0 N_i,p(u), u∈[0,1],

is called a NURBS curve of degree p, where the B-spline basis functions N_i,p(u) are defined on the knot vector

U^0={0,⋯,0,u_1,u_2,⋯,u_n-1,1,⋯,1},

with 0 and 1 of multiplicity p+1. We set 𝒜̅={0,1,⋯,np-1,np} and Δ_𝒜̅=[0,np]. The weights ω̅={ω_i^(n-1)(p-1) | i∈𝒜̅} and control points ℬ̅={P_i^(n-1)(p-1) | i∈𝒜̅} can be computed by Theorem <ref>, Eq. (<ref>) and Eq. (<ref>). Via knot insertion we transform the NURBS curve R_𝒜,ω,ℬ into the union of n pieces of rational Bézier curves, denoted by R_𝒜̅,ω̅,ℬ̅. Let 𝒜^m={(m-1)p,(m-1)p+1,⋯,mp}⊂𝒜̅ and Δ_𝒜^m=[(m-1)p,mp]⊂Δ_𝒜̅. Then ⋃_m=1^n 𝒜^m=𝒜̅. By Definition <ref>, the mth piece of R_𝒜̅,ω̅,ℬ̅ is the rational Bézier curve denoted by F_𝒜^m,ω^m,ℬ^m, where the weights ω^m={ω^(n-1)(p-1)_i | i∈𝒜^m} and control points ℬ^m={P^(n-1)(p-1)_i | i∈𝒜^m} are indexed by the lattice points of 𝒜^m. Then we have

R_𝒜,ω,ℬ = R_𝒜̅,ω̅,ℬ̅ = ⋃_m=1^n F_𝒜^m,ω^m,ℬ^m.

We study the change of a NURBS curve when the weights of all control points approach infinity. Since the speed at which the weight of each control point tends to infinity may differ, we introduce the value λ(i) of a lifted point associated with the lattice point i. Assume the weight ω_i^0 tends to infinity at the speed t^λ(i); then we replace the weight ω_i^0 by t^λ(i)ω_i^0. The larger the value λ(i), the faster the weight ω_i^0 of the control point P_i^0 goes to infinity. As before, the lifting map i→(i,λ(i)) lifts all the lattice points i of 𝒜 into ℝ^2 (as shown in Fig. <ref>). If we set ω_λ(t):={t^λ(i)ω_i^0 | i∈𝒜}, then the curve

R_𝒜,ω_λ(t),ℬ(u;t):=∑_i=0^n+p-1 t^λ(i) ω_i^0 P_i^0 N_i,p(u)/∑_i=0^n+p-1 t^λ(i) ω_i^0 N_i,p(u), u∈[0,1],

is called the NURBS curve R_𝒜,ω,ℬ(u) of degree p defined in Eq. (<ref>) parameterized by t. Suppose that the weights ω={ω^0_0,ω^0_1,⋯,ω^0_n+p-1} are parameterized by t, ω_λ(t):={t^λ(i)ω_i^0 | i∈𝒜}. By Theorem <ref>, taking ω_λ(t) as the new weights while the control points ℬ stay the same, the weights ω̅_λ(t)={ω_i^(n-1)(p-1) | i∈𝒜̅} and control points ℬ̅_λ(t)={P_i^(n-1)(p-1) | i∈𝒜̅} with parameter t can be computed after knot insertion. We transform the NURBS curve R_𝒜,ω_λ(t),ℬ into the union of n pieces of rational Bézier curves, denoted by R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t). By Definition <ref>, the mth piece of R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t) is the rational Bézier curve denoted by F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t), where the weights ω^m_λ^m(t)={ω^(n-1)(p-1)_i | i∈𝒜^m}⊂ω̅_λ(t) and control points ℬ^m_λ^m(t)={P^(n-1)(p-1)_i | i∈𝒜^m}⊂ℬ̅_λ(t) are indexed by the lattice points of 𝒜^m. Then we have

R_𝒜,ω_λ(t),ℬ = R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t) = ⋃_m=1^n F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t).

We now study the n pieces of rational Bézier curves F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t), m=1,2,⋯,n; the conversion itself can be carried out numerically by repeated knot insertion, as sketched below.
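The following minimal Python sketch of one knot-insertion step is our own illustration of the conversion (Boehm's algorithm <cit.>, applied in homogeneous coordinates); the function name and the sample control point coordinates are ours, since the paper does not fix coordinates.

```python
import numpy as np

def insert_knot(U, Pw, p, ubar):
    """One Boehm knot-insertion step for a degree-p rational B-spline curve.

    U  -- knot vector (non-decreasing list); Pw -- homogeneous control points,
    rows (w_i x_i, ..., w_i); ubar -- knot to insert.
    The returned data describe the same curve on the refined knot vector.
    """
    k = max(i for i in range(len(U) - 1) if U[i] <= ubar)   # knot span of ubar
    Q = []
    for i in range(len(Pw) + 1):
        if i <= k - p:
            Q.append(Pw[i])
        elif i <= k:
            a = (ubar - U[i]) / (U[i + p] - U[i])
            Q.append(a * Pw[i] + (1 - a) * Pw[i - 1])
        else:
            Q.append(Pw[i - 1])
    return list(U[:k + 1]) + [ubar] + list(U[k + 1:]), np.array(Q)

# Quadratic curve on U = {0,0,0,1/4,1,1,1}: one insertion of 1/4 raises the
# interior knot to multiplicity p = 2 and splits the curve into two rational
# Bezier pieces (the coordinates of the control points are made up).
w = np.array([3.0, 1.0, 2.0, 2.0])
P = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
Pw = np.column_stack([P * w[:, None], w])
U2, Qw = insert_knot([0, 0, 0, 0.25, 1, 1, 1], Pw, 2, 0.25)
print(U2)            # [0, 0, 0, 0.25, 0.25, 1, 1, 1]
print(Qw[:, -1])     # recomputed weights: [3.0, 1.0, 1.25, 2.0, 2.0]
```

Iterating this step p-1 times per interior knot produces exactly the weights ω̅ and control points ℬ̅ used below.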
For the first piece F_𝒜^1,ω^1_λ^1(t),ℬ^1_λ^1(t), the control points are

ℬ^1_λ^1(t)={P^(n-1)(p-1)_0,P^(n-1)(p-1)_1,⋯,P^(n-1)(p-1)_p} = {P_0^0, P_1^0, ∑_i=1^2 f_1;2^i;2 t^λ(i)ω_i^0 P_i^0/∑_i=1^2 f_1;2^i;2 t^λ(i)ω_i^0, ⋯, ∑_i=1^p f_1;p^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=1^p f_1;p^i;0 t^λ(i)ω_i^0},

and the weights are

ω^1_λ^1(t)={ω^(n-1)(p-1)_0,ω^(n-1)(p-1)_1,⋯,ω^(n-1)(p-1)_p} = {t^λ(0)ω_0^0, t^λ(1)ω_1^0, ∑_i=1^2 f_1;2^i;2 t^λ(i)ω_i^0, ⋯, ∑_i=1^p f_1;p^i;0 t^λ(i)ω_i^0}.

For m=2,3,⋯,n-1, the control points and weights of the mth piece F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) are given by

ℬ^m_λ^m(t)={P^(n-1)(p-1)_(m-1)p,⋯,P^(n-1)(p-1)_mp} = {∑_i=m-1^p+m-2 f_m-1;p+m-2^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=m-1^p+m-2 f_m-1;p+m-2^i;0 t^λ(i)ω_i^0, ∑_i=a^b f_a;b^i;1 t^λ(i)ω_i^0 P_i^0/∑_i=a^b f_a;b^i;1 t^λ(i)ω_i^0, ⋯, ∑_i=a^b f_a;b^i;p-1 t^λ(i)ω_i^0 P_i^0/∑_i=a^b f_a;b^i;p-1 t^λ(i)ω_i^0, ∑_i=m^p+m-1 f_m;p+m-1^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=m^p+m-1 f_m;p+m-1^i;0 t^λ(i)ω_i^0},

ω^m_λ^m(t)={ω^(n-1)(p-1)_(m-1)p,⋯,ω^(n-1)(p-1)_mp} = {∑_i=m-1^p+m-2 f_m-1;p+m-2^i;0 t^λ(i)ω_i^0, ∑_i=a^b f_a;b^i;1 t^λ(i)ω_i^0, ⋯, ∑_i=a^b f_a;b^i;p-1 t^λ(i)ω_i^0, ∑_i=m^p+m-1 f_m;p+m-1^i;0 t^λ(i)ω_i^0},

where a=m, b=p+m-2. For the last piece F_𝒜^n,ω^n_λ^n(t),ℬ^n_λ^n(t), the control points and weights are

ℬ^n_λ^n(t)={P^(n-1)(p-1)_(n-1)p,⋯,P^(n-1)(p-1)_np} = {∑_i=n-1^p+n-2 f_n-1;p+n-2^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=n-1^p+n-2 f_n-1;p+n-2^i;0 t^λ(i)ω_i^0, ∑_i=n^p+n-2 f_n;p+n-2^i;1 t^λ(i)ω_i^0 P_i^0/∑_i=n^p+n-2 f_n;p+n-2^i;1 t^λ(i)ω_i^0, ⋯, P_n+p-2^0, P_n+p-1^0},

ω^n_λ^n(t)={ω^(n-1)(p-1)_(n-1)p,⋯,ω^(n-1)(p-1)_np} = {∑_i=n-1^p+n-2 f_n-1;p+n-2^i;0 t^λ(i)ω_i^0, ∑_i=n^p+n-2 f_n;p+n-2^i;1 t^λ(i)ω_i^0, ⋯, t^λ(n+p-2)ω_n+p-2^0, t^λ(n+p-1)ω_n+p-1^0}.

Consider the mth piece F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t); we discuss the weights ω^m_λ^m(t) and the locations of the control points ℬ^m_λ^m(t) as t→∞. Suppose that the control point P_(m-1)p+j^(n-1)(p-1)∈ℬ^m_λ^m(t), j=0,1,⋯,p, is formed by a convex combination of the original control points P_a^0,P_a+1^0,⋯,P_b^0 of ℬ, where a,b∈𝒜={0,1,⋯,n+p-1} and a≤b, with associated lifted values λ(a),λ(a+1),⋯,λ(b). Then by Theorem <ref> we have

P_(m-1)p+j^(n-1)(p-1)=∑_i=a^b f_a;b^i;j t^λ(i)ω_i^0 P_i^0/∑_i=a^b f_a;b^i;j t^λ(i)ω_i^0, j=0,1,⋯,p.

Let ψ be the set of lattice points in {a,⋯,b} at which the largest of the values λ(a),λ(a+1),⋯,λ(b) is attained; then we have

lim_t→∞ P_(m-1)p+j^(n-1)(p-1)=∑_i∈ψ f_a;b^i;j ω_i^0 P_i^0/∑_i∈ψ f_a;b^i;j ω_i^0,

and its corresponding weight lim_t→∞ ω_(m-1)p+j^(n-1)(p-1)=∑_i∈ψ f_a;b^i;j ω_i^0. In this way we can determine the limiting geometric position of every control point of ℬ^m_λ^m(t) and the value of its corresponding weight. We let ℬ̅^m=lim_t→∞ ℬ^m_λ^m(t) and ω̅^m=lim_t→∞ ω^m_λ^m(t) denote the collections of limiting control points and weights, respectively. Following the regular decomposition of a rational Bézier curve presented in Section <ref>, we can define a regular decomposition S_λ^m for the mth piece F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t), m=1,⋯,n. To obtain these regular decompositions, we determine the values of the lifted points of every piece. The function λ: i→(i,λ(i)), i∈𝒜={0,1,⋯,n+p-1}, induces a regular decomposition of 𝒜^m by lifting 𝒜^m into ℝ^2 as follows, where m=1,⋯,n.
For m=1, the values of the lifted points associated with the lattice points of 𝒜^1 are assigned as

λ^1={λ(0),λ(1),max{λ(1),λ(2)},⋯,max{λ(1),⋯,λ(p)}}.

For m=2,⋯,n-1, the values of the lifted points associated with the lattice points of 𝒜^m are assigned as

λ^m={max{λ(m-1),⋯,λ(p+m-2)},χ,⋯,χ,max{λ(m),⋯,λ(p+m-1)}},

where χ=max{λ(m),⋯,λ(p+m-2)} is repeated p-1 times. For m=n, the values of the lifted points associated with the lattice points of 𝒜^n are assigned as

λ^n={max{λ(n-1),⋯,λ(p+n-2)},max{λ(n),⋯,λ(p+n-2)},⋯,max{λ(n+p-3),λ(p+n-2)},λ(p+n-2),λ(p+n-1)}.

From these values of the lifted points, the regular decomposition S_λ^m of 𝒜^m induced by λ^m can be obtained directly. The union of the S_λ^m for m=1,⋯,n is called the regular decomposition of 𝒜̅, denoted by S̅_λ. Let s_j^m be a subset of S^m_λ, and let ω^m|_s_j^m={ω^(n-1)(p-1)_i | i∈s_j^m} and ℬ^m|_s_j^m={P^(n-1)(p-1)_i | i∈s_j^m} be the weights and control points indexed by the elements of s_j^m. From ω^m|_s_j^m and ℬ^m|_s_j^m we construct a rational Bézier curve, denoted by F_s_j^m,ω^m|_s_j^m,ℬ^m|_s_j^m. By the regular decomposition S^m_λ of 𝒜^m induced by λ^m, the control points ℬ̅^m and the weights ω̅^m, we get the regular control curve

F_𝒜^m,ω^m,ℬ^m(S_λ^m)=⋃_s_j^m∈S_λ^m F_s_j^m,ω^m|_s_j^m,ℬ^m|_s_j^m

of F_𝒜^m,ω^m,ℬ^m induced by S_λ^m.

Given a finite set of lattice points 𝒜={0,1,⋯,n+p-1}⊂ℤ, control points ℬ={P_i^0 | i∈𝒜}⊂ℝ^d, d=2,3, and weights ω={ω_i^0>0 | i∈𝒜}, suppose that we have a regular decomposition S̅_λ of 𝒜̅ induced by a lifting function λ. Then the curve

R_𝒜,ω,ℬ(S̅_λ) = R_𝒜̅,ω̅,ℬ̅(S̅_λ) = ⋃_m=1^n F^m_𝒜^m,ω^m,ℬ^m(S_λ^m) = ⋃_m=1^n ⋃_s_j^m∈S_λ^m F^m_s_j^m,ω^m|_s_j^m,ℬ^m|_s_j^m

is called the regular control curve of the NURBS curve R_𝒜,ω,ℬ induced by the regular decomposition S̅_λ.

Note that the regular control curve R_𝒜,ω,ℬ(S̅_λ) is a C^0 spline curve; we will show that it is the limit of the NURBS curve R_𝒜,ω_λ(t),ℬ as t→∞.

Let

R_𝒜,ω,ℬ(u)=∑_i=0^3 ω_i^0 P_i^0 N_i,2(u)/∑_i=0^3 ω_i^0 N_i,2(u), u∈[0,1],

be a quadratic NURBS curve defined on the knot vector U^0={0,0,0,1/4,1,1,1} with control points ℬ={P^0_0,P^0_1,P^0_2,P^0_3} and weights ω={ω^0_0,ω^0_1,ω^0_2,ω^0_3}={3,1,2,2}, where 𝒜={0,1,2,3} (see Fig. <ref>). The curve R_𝒜,ω,ℬ consists of two pieces of rational Bézier curves after knot insertion (see Fig. <ref>).

(1) Suppose that the lifting function λ_1 has the assignments {1,3,2,1} at the lattice points of 𝒜; then we get the regular decomposition S̅_λ_1={{{0,1},{1,2}},{{2,3,4}}} of 𝒜̅={0,1,2,3,4}. The regular control curve R_𝒜,ω,ℬ(S̅_λ) is the union of two parts. The first part is the union of the linear Bézier curves formed by the control points {P^1_0=P^0_0,P^1_1=P^0_1} and {P^1_1,P^1_2}, and the second part is a rational quadratic Bézier curve formed by the control points {P^1_2,P^1_3=P^0_2,P^1_4=P^0_3} and their corresponding weights. Since the control point P^1_2 tends to P^1_1=P^0_1 as t goes to infinity, R_𝒜,ω,ℬ(S̅_λ) degenerates into the union of the line segment P^0_0P^0_1 and the rational quadratic Bézier curve with control points {P^0_1,P^0_2,P^0_3} and their corresponding weights (shown in Fig. <ref>).

(2) Suppose that the lifting function λ_2 has the assignments {1,3,2,0} at the lattice points of 𝒜; then we get the regular decomposition S̅_λ_2={{{0,1},{1,2}},{{2,3},{3,4}}}. The regular control curve R_𝒜,ω,ℬ(S̅_λ) is the union of two parts. The first part is the union of the linear Bézier curves formed by the control points {P^1_0,P^1_1} and {P^1_1,P^1_2}, whereas the second part consists of the two linear Bézier curves formed by {P^1_2,P^1_3} and {P^1_3,P^1_4}.
Since the control point P^1_2 tends to P^1_1=P^0_1 as t goes to infinity, R_𝒜,ω,ℬ(S̅_λ) degenerates into the union of the three line segments P^0_0P^0_1∪P^0_1P^0_2∪P^0_2P^0_3, which is exactly the control polygon of the NURBS curve R_𝒜,ω,ℬ (see Fig. <ref>). We explain in detail how to obtain the regular control curve of a NURBS curve in Section <ref>.

The following result shows that the regular control curve of the NURBS curve R_𝒜,ω,ℬ is precisely the limit of R_𝒜,ω_λ(t),ℬ as t goes to infinity.

Let R_𝒜,ω,ℬ be the NURBS curve of degree p with control points ℬ={P_i | i∈𝒜}⊂ℝ^d, d=2,3, and weights ω={ω_i>0 | i∈𝒜} defined on the knot vector Eq. (<ref>), where 𝒜={0,1,⋯,n+p-1}⊂ℤ. Suppose that R_𝒜,ω,ℬ(S̅_λ) is the regular control curve of R_𝒜,ω,ℬ induced by the regular decomposition S̅_λ of 𝒜̅ and the lifting function λ. Then

lim_t→∞ R_𝒜,ω_λ(t),ℬ = R_𝒜,ω,ℬ(S̅_λ).

Proof: By knot insertion, the NURBS curve R_𝒜,ω_λ(t),ℬ can be converted into n pieces of rational Bézier curves F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t), m=1,2,⋯,n. Let S_λ^m be the regular decomposition of 𝒜^m induced by λ^m. For the rational Bézier curve F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) we have

‖F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) - F_𝒜^m,ω^m,ℬ^m(S_λ^m)‖ ≤ ‖F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) - F_𝒜^m,ω^m_λ^m(t),ℬ̅^m‖ + ‖F_𝒜^m,ω^m_λ^m(t),ℬ̅^m - F_𝒜^m,ω^m,ℬ^m(S_λ^m)‖,

where ‖·‖ denotes the Hausdorff distance between two subsets of ℝ^3 <cit.>. Since lim_t→∞ ℬ^m_λ^m(t)=ℬ̅^m, we have

lim_t→∞ ‖F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) - F_𝒜^m,ω^m_λ^m(t),ℬ̅^m‖ = 0.

By Theorem <ref> and lim_t→∞ ω^m_λ^m(t)=ω̅^m, when the control points ℬ̅^m are fixed but the parameter t→∞, the regular control curve induced by the regular decomposition S_λ^m of 𝒜^m is exactly the limit of the rational Bézier curve, that is,

lim_t→∞ F_𝒜^m,ω^m_λ^m(t),ℬ̅^m = F_𝒜^m,ω^m,ℬ^m(S_λ^m),

and hence

lim_t→∞ ‖F_𝒜^m,ω^m_λ^m(t),ℬ̅^m - F_𝒜^m,ω^m,ℬ^m(S_λ^m)‖ = 0.

This means

lim_t→∞ F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) = F_𝒜^m,ω^m,ℬ^m(S_λ^m).

Note that the NURBS curve R_𝒜,ω_λ(t),ℬ can be converted into n pieces of rational Bézier curves after knot insertion,

R_𝒜,ω_λ(t),ℬ = ⋃_m=1^n F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t),

so the limit of the NURBS curve R_𝒜,ω_λ(t),ℬ can be written as the limit of the union of those rational Bézier curves:

lim_t→∞ R_𝒜,ω_λ(t),ℬ = lim_t→∞ ⋃_m=1^n F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) = ⋃_m=1^n lim_t→∞ F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t) = ⋃_m=1^n F_𝒜^m,ω^m,ℬ^m(S_λ^m).

By Definition <ref>, we get

lim_t→∞ R_𝒜,ω_λ(t),ℬ = R_𝒜,ω,ℬ(S̅_λ),

and this ends the proof. Following the toric degenerations of Bézier curves and surfaces, this property of NURBS curves is called the toric degeneration of the NURBS curve. The following result is converse to Theorem <ref>.

Let 𝒜={0,1,⋯,n+p-1}⊂ℤ and let ℬ={P_i^0 | i∈𝒜}⊂ℝ^3 be control points. If R⊂ℝ^3 is a set for which there is a sequence ω^(1),ω^(2),⋯ of weights so that

lim_τ→∞ R_𝒜,ω^(τ),ℬ = R,

then there are a regular decomposition S̅_λ of 𝒜̅ induced by a lifting function λ and weights ω={ω_i^0>0 | i∈𝒜} such that R is a regular control curve of the NURBS curve, R=R_𝒜,ω,ℬ(S̅_λ).

Proof: Let 𝒜^m={(m-1)p,(m-1)p+1,⋯,mp}⊂𝒜̅, Δ_𝒜^m=[(m-1)p,mp], and let ℬ^m={P_i^(n-1)(p-1) | i∈𝒜^m} be the control points and ω^m^(τ)={ω_i^(n-1)(p-1) | i∈𝒜^m} the weights of the mth piece F_𝒜^m,ω^m^(τ),ℬ^m (m=1,⋯,n) after knot insertion for the NURBS curve R_𝒜,ω^(τ),ℬ. From the assumption,

lim_τ→∞ F_𝒜^m,ω^m^(τ),ℬ^m = F^m,

where F^m is a subset of ℝ^3.
By Theorem <ref>, there is a regular decomposition of 𝒜^m induced by a lifting function λ^m, together with weights ω_*^m and control points ℬ_*^m, such that F^m=F_𝒜^m,ω_*^m,ℬ_*^m(S_λ^m) is a regular control curve. Note that the NURBS curve R_𝒜,ω^(τ),ℬ coincides with R_𝒜̅,ω̅^(τ),ℬ̅, so lim_τ→∞ R_𝒜,ω^(τ),ℬ = lim_τ→∞ R_𝒜̅,ω̅^(τ),ℬ̅. We set R=⋃_m=1^n F^m; then

lim_τ→∞ R_𝒜,ω^(τ),ℬ = lim_τ→∞ R_𝒜̅,ω̅^(τ),ℬ̅ = lim_τ→∞ ⋃_m=1^n F^m_𝒜^m,ω^m^(τ),ℬ^m = ⋃_m=1^n lim_τ→∞ F^m_𝒜^m,ω^m^(τ),ℬ^m = ⋃_m=1^n F^m = ⋃_m=1^n F^m_𝒜^m,ω_*^m,ℬ_*^m(S_λ^m).

We choose the lifting function λ of 𝒜 taking the same values at the lattice points of 𝒜^m, weights ω={ω_i^0 | i∈𝒜} satisfying ω̅=⋃_m=1^n ω_*^m and control points ℬ={P_i^0 | i∈𝒜} satisfying ℬ̅=⋃_m=1^n ℬ_*^m after knot insertion. Let S̅_λ be the regular decomposition of 𝒜̅. By Definition <ref>, we get that

⋃_m=1^n F_𝒜^m,ω_*^m,ℬ_*^m(S_λ^m) = R_𝒜,ω,ℬ(S̅_λ)

is a regular control curve, and this completes the proof.

§ EXAMPLES

Let

R_𝒜,ω,ℬ(u)=∑_i=0^4 ω_i^0 P_i^0 N_i,2(u)/∑_i=0^4 ω_i^0 N_i,2(u), u∈[0,1],

be a quadratic NURBS curve defined on the knot vector U^0={0,0,0,1/4,3/4,1,1,1} with weights ω={3,2,3,2,5} and control points ℬ={P^0_0,P^0_1,P^0_2,P^0_3,P^0_4} (see Fig. <ref>). Fig. <ref> shows the NURBS curve after inserting the knots 1/4 and 3/4. Suppose that the lifting function λ has the assignments {1,2,3,2,1} at the lattice points of 𝒜={0,1,2,3,4}. We define the NURBS curve R_𝒜,ω_λ(t),ℬ with parameter t by the family of weights ω_λ(t)={3t,2t^2,3t^3,2t^2,5t}. After inserting the knots 1/4 and 3/4, the NURBS curve R_𝒜,ω_λ(t),ℬ is converted to the union of three pieces of rational Bézier curves, R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t)=⋃_m=1^3 F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t). The weights and control points of R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t) can be obtained by Theorem <ref>:

ω̅_λ(t)={ω^2_0,ω^2_1,ω^2_2,ω^2_3,ω^2_4,ω^2_5,ω^2_6} = {t^λ(0)ω^0_0, t^λ(1)ω^0_1, ∑_i=1^2 f_1;2^i;0 t^λ(i)ω_i^0, t^λ(2)ω^0_2, ∑_i=2^3 f_2;3^i;0 t^λ(i)ω_i^0, t^λ(3)ω^0_3, t^λ(4)ω^0_4},

ℬ̅_λ(t)={P^2_0,P^2_1,P^2_2,P^2_3,P^2_4,P^2_5,P^2_6} = {P^0_0, P^0_1, ∑_i=1^2 f_1;2^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=1^2 f_1;2^i;0 t^λ(i)ω_i^0, P^0_2, ∑_i=2^3 f_2;3^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=2^3 f_2;3^i;0 t^λ(i)ω_i^0, P^0_3, P^0_4},

where f_1;2^1;0=2/3, f_1;2^2;0=1/3, f_2;3^2;0=1/3, f_2;3^3;0=2/3. The lifting function λ induces the assignments {{1,2,3},{3,3,3},{3,2,1}} on 𝒜̅ and derives the regular decomposition S̅_λ={{{0,1,2}},{{2,3,4}},{{4,5,6}}} of 𝒜̅.

Consider the regular control curve of the first piece F_𝒜^1,ω^1_λ^1(t),ℬ^1_λ^1(t) with control points {P^2_0,P^2_1,P^2_2}, weights {ω^2_0,ω^2_1,ω^2_2} and lifting function λ^1={1,2,3}. Let 𝒜^1={0,1,2} and Δ_𝒜^1=[0,2]. Since λ(1)=2<λ(2)=3, the weights are ω^2_0=ω^0_0=3, ω^2_1=ω^0_1=2, lim_t→∞ ω^2_2=1/3 ω^0_2=1, and the control points are P^2_0=P^0_0, P^2_1=P^0_1, lim_t→∞ P^2_2=P^0_2. The lifting function λ^1={1,2,3} induces the regular decomposition S_λ^1={{0,1,2}} of 𝒜^1. For the subset {0,1,2}, we can construct a rational quadratic Bézier curve from the control points {P^2_0,P^2_1,P^2_2}={P^0_0,P^0_1,P^0_2} and weights {ω^2_0,ω^2_1,ω^2_2}={3,2,1}. The regular control curve of the first piece is this rational quadratic Bézier curve.

Consider the regular control curve of the second piece F_𝒜^2,ω^2_λ^2(t),ℬ^2_λ^2(t) with control points {P^2_2,P^2_3,P^2_4}, weights {ω^2_2,ω^2_3,ω^2_4} and lifting function λ^2={3,3,3}. Let 𝒜^2={2,3,4} and Δ_𝒜^2=[2,4]. Since λ(1)=2<λ(2)=3 and λ(2)=3>λ(3)=2, the weights are lim_t→∞ ω^2_2=1/3 ω^0_2=1, ω^2_3=ω^0_2=3, lim_t→∞ ω^2_4=1/3 ω^0_2=1, and the control points satisfy lim_t→∞ P^2_2=P^0_2, P^2_3=P^0_2, lim_t→∞ P^2_4=P^0_2.
The lifting function λ^2={3,3,3} induces the regular decomposition S_λ^2={{2,3,4}} of 𝒜^2. For the subset {2,3,4}, we can construct a rational quadratic Bézier curve from the control points {P^2_2,P^2_3,P^2_4} and weights {ω^2_2,ω^2_3,ω^2_4}. The regular control curve of the second piece is this rational quadratic Bézier curve. Since the control points P^2_2 and P^2_4 tend to P^2_3=P^0_2 as t goes to infinity, this regular control curve degenerates into the point P^0_2.

Consider the regular control curve of the third piece F_𝒜^3,ω^3_λ^3(t),ℬ^3_λ^3(t) with control points {P^2_4,P^2_5,P^2_6}, weights {ω^2_4,ω^2_5,ω^2_6} and lifting function λ^3={3,2,1}. Let 𝒜^3={4,5,6} and Δ_𝒜^3=[4,6]. Since λ(2)=3>λ(3)=2, the weights are lim_t→∞ ω^2_4=1/3 ω_2^0=1, ω^2_5=ω_3^0=2, ω^2_6=ω_4^0=5, and the control points are lim_t→∞ P^2_4=P^0_2, P^2_5=P^0_3, P^2_6=P^0_4. The lifting function λ^3={3,2,1} induces the regular decomposition S_λ^3={{4,5,6}} of 𝒜^3. For the subset {4,5,6}, we can construct a rational quadratic Bézier curve from the control points {P^2_4,P^2_5,P^2_6}={P^0_2,P^0_3,P^0_4} and weights {ω^2_4,ω^2_5,ω^2_6}={1,2,5}. The regular control curve of the third piece is this rational quadratic Bézier curve.

Hence the regular control curve of the quadratic NURBS curve R_𝒜,ω,ℬ is the union of two rational quadratic Bézier curves, one with control points {P^0_0,P^0_1,P^0_2} and weights {3,2,1}, the other with control points {P^0_2,P^0_3,P^0_4} and weights {1,2,5} (see Fig. <ref>). Fig. <ref> shows the degeneration process of the curve for t=2,3,5,10.

Consider the quadratic NURBS curve R_𝒜,ω,ℬ of Example <ref>, with control points ℬ={P^0_0,P^0_1,P^0_2,P^0_3,P^0_4} and weights ω={3,2,3,2,5} (see Fig. <ref>). Suppose now that the lifting function λ has the assignments {1,4,4,1,1} at the lattice points of 𝒜={0,1,2,3,4}. After inserting the knots 1/4 and 3/4, R_𝒜,ω_λ(t),ℬ is converted to the union of three pieces of rational quadratic Bézier curves, R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t)=⋃_m=1^3 F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t). The lifting function λ induces the assignments {{1,4,4},{4,4,4},{4,1,1}} on 𝒜̅ and derives the regular decomposition S̅_λ={{{0,1},{1,2}},{{2,3,4}},{{4,6}}} of 𝒜̅.

Consider the regular control curve of F_𝒜^1,ω^1_λ^1(t),ℬ^1_λ^1(t) with control points {P^2_0,P^2_1,P^2_2}, weights {ω^2_0,ω^2_1,ω^2_2} and lifting function λ^1={1,4,4}. Since λ(1)=λ(2)=4, the weights are ω^2_0=ω^0_0=3, ω^2_1=ω^0_1=2, lim_t→∞ ω^2_2=2/3 ω_1^0+1/3 ω_2^0=7/3, and the control points are P^2_0=P^0_0, P^2_1=P^0_1,

lim_t→∞ P^2_2=(2/3 ω_1^0 P_1^0+1/3 ω_2^0 P_2^0)/(2/3 ω_1^0+1/3 ω_2^0) = 4/7 P_1^0+3/7 P_2^0.

The lifting function λ^1={1,4,4} induces the regular decomposition S_λ^1={{0,1},{1,2}} of 𝒜^1. For the subset {0,1}, we construct a linear Bézier curve from the control points {P^2_0,P^2_1}={P^0_0,P^0_1} and weights {ω^2_0,ω^2_1}={3,2}. For the subset {1,2}, another linear Bézier curve is obtained from the control points {P^2_1,P^2_2}={P^0_1, 4/7 P_1^0+3/7 P_2^0} and weights {ω^2_1,ω^2_2}={2,7/3}. Setting P^*=4/7 P_1^0+3/7 P_2^0, the regular control curve of the first piece is the union of the two line segments P^0_0P^0_1∪P^0_1P^*.
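The limiting control points computed above follow the argmax rule derived in Section <ref>: only the indices with maximal lifted value survive. A small Python check (our own illustration; the planar coordinates are stand-ins, as the paper does not fix them):

```python
import numpy as np

P1, P2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])  # stand-ins for P_1^0, P_2^0
f, w = np.array([2/3, 1/3]), np.array([2.0, 3.0])    # f_{1;2}^{i;0}, (w_1^0, w_2^0)
for t in [1e1, 1e3, 1e6]:
    c = f * t ** np.array([4.0, 4.0]) * w            # lam(1) = lam(2) = 4
    print(c @ np.vstack([P1, P2]) / c.sum())         # always 4/7 P_1 + 3/7 P_2 = P^*
```

Equal lifted values make the convex combination t-independent, so P^2_2 stays at P^* for every t; replacing the exponents by distinct values such as (4,1) collapses the same computation onto the control point with the largest lifted value, reproducing, e.g., lim_t→∞ P^2_4=P^0_2.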
Consider the regular control curve of F_𝒜^2,ω^2_λ^2(t),ℬ^2_λ^2(t) with control points {P^2_2,P^2_3,P^2_4}, weights {ω^2_2,ω^2_3,ω^2_4} and lifting function λ^2={4,4,4}. Since λ(1)=λ(2)=4 and λ(2)>λ(3), the weights are lim_t→∞ ω^2_2=2/3 ω_1^0+1/3 ω_2^0=7/3, ω^2_3=ω^0_2=3, lim_t→∞ ω^2_4=1/3 ω_2^0=1, and the control points satisfy lim_t→∞ P^2_2=4/7 P_1^0+3/7 P_2^0, P^2_3=P^0_2, lim_t→∞ P^2_4=P^0_2. The lifting function λ^2={4,4,4} induces the regular decomposition S_λ^2={{2,3,4}} of 𝒜^2. For the subset {2,3,4}, we construct a rational quadratic Bézier curve from the control points {P^2_2,P^2_3,P^2_4}={4/7 P_1^0+3/7 P_2^0, P^0_2, P^0_2} and weights {ω^2_2,ω^2_3,ω^2_4}={7/3,3,1}. Since the control point P^2_4 tends to P^2_3=P^0_2 as t goes to infinity and P^*=4/7 P_1^0+3/7 P_2^0, this regular control curve degenerates into the line segment P^*P^0_2.

Consider the regular control curve of F_𝒜^3,ω^3_λ^3(t),ℬ^3_λ^3(t) with control points {P^2_4,P^2_5,P^2_6}, weights {ω^2_4,ω^2_5,ω^2_6} and lifting function λ^3={4,1,1}. Since λ(2)>λ(3), the weights are lim_t→∞ ω^2_4=1/3 ω_2^0=1, ω^2_5=ω^0_3=2, ω^2_6=ω_4^0=5, and the control points are lim_t→∞ P^2_4=P^0_2, P^2_5=P^0_3, P^2_6=P^0_4. The lifting function λ^3={4,1,1} induces the regular decomposition S_λ^3={{4,6}} of 𝒜^3. For the subset {4,6}, we construct a linear Bézier curve from the control points {P^2_4,P^2_6}={P^0_2,P^0_4} and weights {ω^2_4,ω^2_6}={1,5}. The regular control curve of the third piece is the line segment P^0_2P^0_4.

Since P^* lies on the line segment P^0_1P^0_2, the regular control curve of the quadratic NURBS curve R_𝒜,ω,ℬ is the union of the three line segments P^0_0P^0_1∪P^0_1P^0_2∪P^0_2P^0_4, shown in Fig. <ref>.

Let

R_𝒜,ω,ℬ(u)=∑_i=0^4 ω_i^0 P_i^0 N_i,3(u)/∑_i=0^4 ω_i^0 N_i,3(u), u∈[0,1],

be a cubic NURBS curve defined on the knot vector U^0={0,0,0,0,1/3,1,1,1,1} with weights ω={1,4,1,4,1} and control points ℬ={P^0_0,P^0_1,P^0_2,P^0_3,P^0_4} (see Fig. <ref>). Suppose that the lifting function λ has the assignments {1,4,2,1,1} at the lattice points of 𝒜={0,1,2,3,4}. R_𝒜,ω_λ(t),ℬ is converted to the union of two pieces of rational cubic Bézier curves, R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t)=⋃_m=1^2 F_𝒜^m,ω^m_λ^m(t),ℬ^m_λ^m(t). The weights and control points of R_𝒜̅,ω̅_λ(t),ℬ̅_λ(t) are

ω̅_λ(t)={ω^2_0,ω^2_1,ω^2_2,ω^2_3,ω^2_4,ω^2_5,ω^2_6} = {t^λ(0)ω^0_0, t^λ(1)ω^0_1, ∑_i=1^2 f_1;2^i;2 t^λ(i)ω_i^0, ∑_i=1^3 f_1;3^i;0 t^λ(i)ω_i^0, ∑_i=2^3 f_2;3^i;1 t^λ(i)ω_i^0, t^λ(3)ω^0_3, t^λ(4)ω^0_4},

ℬ̅_λ(t)={P^2_0,P^2_1,P^2_2,P^2_3,P^2_4,P^2_5,P^2_6} = {P^0_0, P^0_1, ∑_i=1^2 f_1;2^i;2 t^λ(i)ω_i^0 P_i^0/∑_i=1^2 f_1;2^i;2 t^λ(i)ω_i^0, ∑_i=1^3 f_1;3^i;0 t^λ(i)ω_i^0 P_i^0/∑_i=1^3 f_1;3^i;0 t^λ(i)ω_i^0, ∑_i=2^3 f_2;3^i;1 t^λ(i)ω_i^0 P_i^0/∑_i=2^3 f_2;3^i;1 t^λ(i)ω_i^0, P^0_3, P^0_4}.

The lifting function λ induces the assignments {{1,4,4,4},{4,2,1,1}} on 𝒜̅ and derives the regular decomposition S̅_λ={{{0,1},{1,2,3}},{{3,6}}} of 𝒜̅.

Consider the regular control curve of F_𝒜^1,ω^1_λ^1(t),ℬ^1_λ^1(t) with control points {P^2_0,P^2_1,P^2_2,P^2_3}, weights {ω^2_0,ω^2_1,ω^2_2,ω^2_3} and lifting function λ^1={1,4,4,4}. The lifting function λ^1 induces the regular decomposition S_λ^1={{0,1},{1,2,3}} of 𝒜^1={0,1,2,3}. The regular control curve is the union of a linear Bézier curve with control points {P^2_0,P^2_1} and a rational quadratic Bézier curve with control points {P^2_1,P^2_2,P^2_3} and their corresponding weights. Since λ(1)>λ(2)>λ(3), the control points satisfy P^2_0=P^0_0 and P^2_1=lim_t→∞ P^2_2=lim_t→∞ P^2_3=P^0_1, so this regular control curve degenerates into the line segment P^0_0P^0_1.
Consider the regular control curve of F_𝒜^2,ω^2_λ^2(t),ℬ^2_λ^2(t) with control points {P^2_3,P^2_4,P^2_5,P^2_6}, weights {ω^2_3,ω^2_4,ω^2_5,ω^2_6} and lifting function λ^2={4,2,1,1}. The lifting function λ^2 induces the regular decomposition S_λ^2={{3,6}} of 𝒜^2={3,4,5,6}. The regular control curve is a linear Bézier curve with control points {P^2_3,P^2_6}. Since λ(1)>λ(2)>λ(3), the control points satisfy lim_t→∞ P^2_3=P^0_1, lim_t→∞ P^2_4=P^0_2, P^2_5=P^0_3, P^2_6=P^0_4, so this regular control curve degenerates into the line segment P^0_1P^0_4.

The regular control curve of the cubic NURBS curve R_𝒜,ω,ℬ is therefore the union of the two line segments P^0_0P^0_1∪P^0_1P^0_4 (see Fig. <ref>). Fig. <ref> shows the degeneration process of the curve for t=2,10,20,30.

Let

R_𝒜,ω,ℬ(u)=∑_i=0^8 ω_i^0 P_i^0 N_i,5(u)/∑_i=0^8 ω_i^0 N_i,5(u), u∈[0,1],

be a quintic NURBS curve defined on the knot vector U^0={0,0,0,0,0,0,1/4,1/3,1/2,1,1,1,1,1,1} with weights ω={1,2,3,2,1,3,2,1,2} and control points ℬ={P^0_0,P^0_1,P^0_2,P^0_3,P^0_4,P^0_5,P^0_6,P^0_7,P^0_8}; the curve is shown in Fig. <ref>. Suppose that the lifting function is λ={2,1,1,3,1,2,3,2,1}. The regular control curve of the quintic NURBS curve after the degeneration is shown in Fig. <ref>; it is the union of a rational quadratic Bézier curve and two line segments. Fig. <ref> shows the degeneration process of the curve for t=2,5,10,20.

In the next three examples, we indicate the application of our results to shape deformation. By the toric degeneration of NURBS curves, once a lifting function is given, the limit of the NURBS curve R_𝒜,ω_λ(t),ℬ is determined. This means that if we choose the lifting functions properly, the original curve (composed of NURBS curves) can be deformed into the target curve (composed of the regular control curves of the NURBS curves). Our results thereby also point out a potential application to computer animation.

Figure <ref> shows the shape deformation of a wooden club into a tie by using toric degenerations of NURBS curves. The wooden club (see Figure <ref>) is composed of two pieces of NURBS curves on the knot vectors {0,0,0,0,1/2,1,1,1,1} and {0,0,0,0,1/3,2/3,1,1,1,1}. The lifting functions corresponding to these two pieces are λ_1={1,2,2,2,2,1} and λ_2={1,3,4,3,1}. The limit of the wooden club is then a tie (see Figure <ref>).

Figure <ref> shows the shape deformation process of a vase. The vase (see Figure <ref>) is composed of two pieces of NURBS curves on the knot vector {0,0,1/2,1,1} and two pieces of NURBS curves on the knot vector {0,0,0,0,0,0,0,0,1/2,1,1,1,1,1,1,1,1}. The lifting functions corresponding to these four pieces are λ_1=λ_2={1,1,1} and λ_3=λ_4={2,1,1,1,3,1,1,1,2}, respectively. In the degeneration process of the NURBS curves, the fat vase transforms into a thin vase (see Figure <ref>).

Figure <ref> shows the shape deformation process of a bear's face (see Figure <ref>). Each ear of the bear is a quadratic NURBS curve defined on the knot vector {0,0,0,1/2,1,1,1} with the lifting function λ_1={1,2,1,1}. Each eye of the bear is a cubic NURBS curve defined on the knot vector {0,0,0,0,1/4,1/2,3/4,1,1,1,1} with the lifting function λ_2={2,1,1,3,1,1,2}. The rest of the bear's face is composed of three pieces of NURBS curves on the knot vector {0,0,0,1/3,2/3,1,1,1} with the lifting function λ_3={1,1,2,1,1}. After the toric degenerations of the NURBS curves, the bear's face degenerates into a fox's face (see Figure <ref>).
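The degeneration processes shown in the figures can be reproduced numerically by sampling the reweighted curve R_𝒜,ω_λ(t),ℬ for increasing t. The sketch below is our own illustration using the Cox-de Boor recursion; the control point coordinates are made up, since the paper only shows figures. For a fixed parameter u, the curve point is pulled toward the control point whose active weight grows fastest, while the set-wise (Hausdorff) limit of the whole curve is the regular control curve.

```python
import numpy as np

def N(i, p, U, u):
    """Cox-de Boor recursion for the B-spline basis N_{i,p} on knot vector U."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = 0.0 if U[i + p] == U[i] else \
        (u - U[i]) / (U[i + p] - U[i]) * N(i, p - 1, U, u)
    right = 0.0 if U[i + p + 1] == U[i + 1] else \
        (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * N(i + 1, p - 1, U, u)
    return left + right

def nurbs_point(U, w, P, p, u):
    """Point R(u) of the rational curve with weights w and control points P."""
    b = np.array([N(i, p, U, u) for i in range(len(P))])
    return (w * b) @ P / (w * b).sum()

# Quadratic example of Section 4: U^0 = {0,0,0,1/4,3/4,1,1,1}, w = {3,2,3,2,5},
# lifting lam = {1,2,3,2,1}; the coordinates of P_0^0,...,P_4^0 are made up.
U = [0, 0, 0, 0.25, 0.75, 1, 1, 1]
P = np.array([[0, 0], [1, 2], [2, 3], [3, 2], [4, 0]], dtype=float)
w0 = np.array([3.0, 2.0, 3.0, 2.0, 5.0])
lam = np.array([1, 2, 3, 2, 1])
for t in [1.0, 10.0, 100.0, 1000.0]:
    print(t, nurbs_point(U, t ** lam * w0, P, 2, 0.5))   # pulled toward P_2^0
```

At u=0.5 the active basis functions are N_1, N_2, N_3, and the largest active lifted value is λ(2)=3, so the printed points converge to P_2^0, consistent with the regular control curve computed in the example.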
§ CONCLUSION

In this paper, we define the regular control curve of a NURBS curve by means of a regular decomposition and establish the geometric meaning of this control curve. The regular control curve is the limit of the NURBS curve when the control points and weights are fixed but the parameter t→∞. Conversely, any curve which is the limit of a NURBS curve with fixed control points but varying weights must be a regular control curve. If the regular decompositions induced by different lifting functions differ, then the limit curves of the NURBS curve (i.e., the regular control curves) differ as well (see Example <ref>). The control polygon of a NURBS curve is the regular control curve obtained from a certain regular decomposition (see Example <ref>(2)). This paper thus refines the geometric meaning of the weights of a NURBS curve: the curve tends to the regular control curve defined here when all of the weights approach infinity. In Example <ref>, the NURBS curve develops a self-intersection in the toric degeneration process (see Fig. <ref>), so our work suggests a possible application to checking the injectivity of NURBS curves. Moreover, the presented results also provide an approach to the shape deformation of NURBS curves. We will study the application of the toric degeneration of NURBS curves to animation in future work.

§ ACKNOWLEDGMENTS

This work is partly supported by the National Natural Science Foundation of China (Nos. 11671068, 11601064, 11271060, 11290143), Fundamental Research of Civil Aircraft (No. MJ-F-2012-04), and the Fundamental Research Funds for the Central Universities (Nos. DUT16LK38, DUT17LK09).

§ REFERENCES

Piegl0 Piegl L, Tiller W. The NURBS Book, 2nd ed. Berlin, Heidelberg and New York: Springer-Verlag, 1997.
Piegl2 Piegl L. Modifying the shape of rational B-splines, Part 1: curves. Computer-Aided Design 1989;21(8):509-518.
Piegl3 Piegl L. Modifying the shape of rational B-splines, Part 2: surfaces. Computer-Aided Design 1989;21(9):538-546.
Farin0 Farin G. Curves and Surfaces for CAGD: A Practical Guide. Morgan Kaufmann Publishers, 2002.
Farin1 Farin G, Hoschek J, Kim MS. Handbook of Computer Aided Geometric Design. Elsevier Science, 2002.
Au Au CK, Yuen MMF. Unified approach to NURBS curve shape modification. Computer-Aided Design 1995;27(2):85-93.
Sanchez-Reyes Sánchez-Reyes J. A simple technique for NURBS shape modification. IEEE Computer Graphics and Applications 1997;17(1):52-59.
Imre Juhász I. Weight-based shape modification of NURBS curves. Computer Aided Geometric Design 1999;16(5):377-383.
Zhang Zhang GH, Yang XQ, Zhang CM. Weight-based shape modification of NURBS curves. Journal of Computer Aided Design & Computer Graphics 2004;16(10):1386-1400.
Sturmfels Sturmfels B. Gröbner Bases and Convex Polytopes. Providence: American Mathematical Society, 1996.
Krasauskas Krasauskas R. Toric surface patches. Advances in Computational Mathematics 2002;17(1-2):89-113.
Garcia García-Puente LD, Sottile F, Zhu CG. Toric degenerations of Bézier patches. ACM Transactions on Graphics (TOG) 2011;30(5):110.
Zhu Zhu CG. Degenerations of toric ideals and toric varieties. Journal of Mathematical Analysis and Applications 2012;386(2):613-618.
Zhao Zhu CG, Zhao XY. Self-intersections of rational Bézier curves. Graphical Models 2014;76(5):312-320.
Zhao2015 Zhao XY, Zhu CG. Injectivity conditions of rational Bézier surfaces. Computers & Graphics 2015;51:17-25.
Boehm1980 Boehm W. Inserting new knots into B-spline curves. Computer-Aided Design 1980;12(4):199-201.
http://arxiv.org/abs/1706.08262v1
{ "authors": [ "Yue Zhang", "Chun-Gang Zhu" ], "categories": [ "cs.GR", "65D17", "I.3.5" ], "primary_category": "cs.GR", "published": "20170626074055", "title": "Degenerations of NURBS curves while all of weights approaching infinity" }
On the construction and convergence of traces of forms Hichem BelHadjAli[Department of Mathematics, I.P.E.I.N. Uni. Carthage, Tunisia. E-mail: [email protected]], Ali BenAmor[Institute of transport and logistics. Uni. Sousse, Tunisia. E-mail: [email protected]], Christian Seifert[TU Hamburg, Institut für Mathematik, Am Schwarzenberg-Campus 3 E, 21073 Hamburg, Germany. E-mail: [email protected]], Amina Thabet[Department of Mathematics, Faculty of Sciences of Gabès. Uni. Gabès, Tunisia. E-Mail: [email protected]]

December 30, 2023

Dedicated to the memory of Johannes F. Brasche.

We elaborate a new method for constructing traces of quadratic forms in the framework of Hilbert and Dirichlet spaces. Our method relies on monotone convergence of quadratic forms and the canonical decomposition into regular and singular part. We give various situations where the trace can be described more explicitly and compute it for some illustrative examples. We then show that Mosco convergence of Dirichlet forms implies Mosco convergence of a subsequence of their approximating traces.

MSC 2010: 47A07, 46C05, 46C07, 47B25, 46E30.

Keywords: trace of forms, Dirichlet forms, Mosco convergence

§ INTRODUCTION

In this paper we study the construction of traces of closed positive quadratic forms in Hilbert spaces with respect to some given linear operator J. By this we mean the following: starting with a closed positive quadratic form ℰ with domain in some Hilbert space ℋ and a linear operator J with domain in the same space ℋ but having values in some other auxiliary Hilbert space 𝒦, we construct a new closed quadratic form in 𝒦. Let us stress that the mentioned problem is not new and there are various methods for constructing such a form in the literature, see <cit.>. The most general construction can be found in <cit.>, where the authors construct an operator in 𝒦 starting from ℰ and J, and then of course the form. The novelty in our method consists in following the converse strategy: starting with the form ℰ in ℋ, we construct the so-called trace form in 𝒦 and its associated operator simultaneously, by means of approximating forms. We will show in Theorem <ref>, however, that both constructions lead in fact to the same object. Besides, we shall also focus on the explicit computation of the obtained form.

Let us explain our method. Instead of using the Kato–Lions method for forms, we make use of monotone convergence of quadratic forms together with their canonical decomposition into a regular part and a singular one, see <cit.>. This method of construction seems not to exist in the literature. The main input at this stage is a Dirichlet principle describing the approximating forms in a variational way. Thanks to this method we are able to compute traces of forms explicitly in many general circumstances. In the special case of Dirichlet spaces we show, with a short and analytic proof, that traces of regular Dirichlet forms are regular Dirichlet forms as well.
We also show that Mosco convergence of Dirichlet forms yields Mosco convergence of a subsequence of the approximating trace forms. We refer to <cit.> for the corresponding notion (which will be recalled in Section <ref> below as well). At this stage we shall make use of the theory of convergence of sequences of Hilbert spaces and its corollaries elaborated in <cit.>.

The concept of traces of forms goes back to Fukushima, Oshima and Takeda <cit.>, where the authors initiate the construction, investigate the trace form and relate it to parts of processes. However, many proofs, especially in the non-transient case, are based on arguments making use of the theory of stochastic processes; we aim for analytic arguments. Recently the subject gained much more interest due to a generalization of the form method by Arendt and ter Elst <cit.>. Since then there have been various studies of properties of traces of sectorial forms in Hilbert spaces. In <cit.> the authors base their construction on a hidden compactness condition yielding ellipticity for the form. In <cit.> the construction of the trace of ℰ_1, the form ℰ shifted by 1, is given. We will make use of the traces of ℰ_λ for all λ>0 given in this way and then take the appropriate limit for λ→0. Ter Elst, Sauter and Vogt <cit.> proved a generation theorem for accretive forms under the assumption that J is bounded with dense range, which extends the results of <cit.>. In <cit.>, Post used so-called boundary pairs (referring to the case that J has a dense kernel) to construct a family of operators related to the operator associated with the trace form. Moreover, there are applications in the context of Dirichlet forms and singular diffusions, see <cit.>.

Traces of quadratic forms have a wide range of applications in a variety of fields. Let us cite, among others, their connection to parts of stochastic processes established in <cit.>, and their relationship to the construction of Dirichlet-to-Neumann operators <cit.> and of fractional powers of the Laplacian <cit.>. Traces of forms also appear in the study of problems related to large coupling convergence and spectral asymptotics <cit.>.

The paper is organized as follows. In Section <ref> we introduce the setup for quadratic forms in Hilbert spaces, prove a Dirichlet principle for the approximating forms and construct the trace via monotone convergence and regular parts. We then focus on special situations where the trace can be described more explicitly. In Section <ref> we apply our method to various examples and calculate the corresponding traces. This includes the square root of the Laplacian as obtained in <cit.>, revisited in the context of forms, but also traces on (maybe small) subsets, which can correspond to singular diffusions; cf. <cit.>. Starting from Section <ref> we focus on Dirichlet forms. First, we show that the trace of a regular Dirichlet form is a regular Dirichlet form again (when interpreted in the right space). We also relate our method of construction to the probabilistic one in <cit.>, and show that these two traces coincide. The final Section <ref> is devoted to properties of sequences of Dirichlet forms. Here we prove that Mosco convergence implies Mosco convergence of a subsequence of the approximating trace forms.

§ TRACES OF QUADRATIC FORMS IN HILBERT SPACES

Let ℋ, 𝒦 be two Hilbert spaces, and let (·,·) and (·,·)_𝒦 denote the scalar products on ℋ and 𝒦, respectively. Let ℰ be a closed positive quadratic form with domain D(ℰ)⊆ℋ. For u∈D(ℰ) we abbreviate ℰ[u]:=ℰ(u,u), and for every λ>0 we set

ℰ_λ[u] := ℰ[u] + λ‖u‖^2.
For u∈ we abbreviate [u]:=(u,u) and for every λ>0 set_λ[u] := [u] + λu^2.Assume we are given a linear operator J J⊆→ with dense range such that J is closed in (,_1^1/2). For λ>0 we define J_λ J⊆ (,_λ^1/2)→ by J_λ u :=Ju. Let ( J_λ)^⊥__λ be the _λ-orthogonal complement of J_λ and let P_λ the _λ-orthogonal projection onto ( J_λ)^⊥__λ.For λ>0 we construct a new family of closed positive densely defined quadratic forms as follows (see <cit.>)_λ :=J,_λ[Ju]:= _λ[P_λ u] for allu∈ J. Let Ȟ_λ be the positive self-adjoint operator associated with _λ. We emphasize that, if moreover J is densely defined then from <cit.> once again we obtainȞ_λ = (J_λ J_λ^*)^-1. We start with a result that is of major importance for our construction of traces of quadratic forms and which expresses the variational aspect of the forms _λ.[Dirichlet principle]Let λ>0, u∈ J. Then_λ[Ju] = inf{_λ[v]:v∈ J, Jv=Ju}.Moreover, _λ≤_μ for λ≤μ. In the proof of the Dirichlet principle we will make use of the following lemma.Let (,q) be a Hilbert space, J J⊆→ linear and closed. Let P be the q-orthogonal projection onto ( J)^_q. Let u∈ J. Then Pu∈ J and Ju = JPu.Note that closed linear operators have closed kernels. Hence, J is closed. Since P is an q-orthogonal projection, we obtainu-Pu ∈ ( P)^_q = (( J)^_q)^_q =J =J⊆ J.Since u∈ J, we obtain Pu∈ J and Ju - JPu = J(u-Pu) = 0.By Lemma <ref> we have P_λ u∈ J, and Ju=JP_λ u. Thus,inf{_λ[v]:v∈ J, Jv=Ju}≤_λ[P_λ u]=_λ[Ju].On the other hand, owing to the fact that P_λ is an orthogonal projection w.r.t. _λ we get_λ[v]≥_λ[P_λ v] for allv∈ J.Now if v∈ J and Jv=Ju then we obtain P_λ v = P_λ u and thereforeinf{_λ[v]:v∈ J, Jv=Ju}≥_λ[P_λ u]= _λ[Ju].Since (_λ)_λ>0 is a monotone increasing family, also (_λ)_λ is monotone increasing.Let Q be a densely defined positive quadratic form on a Hilbert space . Then Q can be uniquely decomposed into Q=Q_ + Q_ sing, such that Q_ is the largest positive densely defined closable quadratic form dominated by Q. In particular, if Q is closable then Q_=Q. The form Q_ is called the regular part of Q. See <cit.> for more details on this decomposition.There exists a positive self-adjoint operator Ȟ insuch thatlim_λ↓ 0(Ȟ_λ +1)^-1=(Ȟ + 1)^-1 strongly.Furthermore, defining _0 inby_0 :=J,_0[Ju]:= lim_λ↓ 0_λ[Ju] for allu∈ J,then Ȟ is the self-adjoint operator associated with the closure of (_0)_. In particular, if _0 is closable then Ȟ is the self-adjoint operator associated with the closure of _0. For λ>0 the form _λ is densely defined, positive and closed. By Theorem <ref> the family (_λ)_λ is monotone increasing. Making useof <cit.> we conclude that there is a positive self-adjoint operator in , which we denote byȞ, such thatlim_λ↓ 0(Ȟ_λ +1)^-1=(Ȟ + 1)^-1 strongly.Moreover from <cit.> we infer that Ȟ is the self-adjoint operator associated with the closure of (_0)_. The last claim of the theorem follows from the definition of the regular part of a quadratic form. From now on we letbe the densely defined positive closed quadratic form associated to Ȟ via the second representation theorem <cit.>::=:=Ȟ^1/2,[ψ]:=(Ȟ^1/2ψ,Ȟ^1/2ψ)_.We shall callthe trace ofwith respect to J. Note that = (_0)_. Let us quote that from the definition of the regular part we have J⊆. Hence the domain ofis the closure of J w.r.t. √([·] +·^2_).(a) Let λ>0. One may ask whether the trace of _λ agrees with _λ from (<ref>). 
In Proposition <ref> we will show that the construction is consistent.(b) Since strong resolvent convergence of the associated operators is equivalent to Mosco convergence of the corresponding positive quadratic forms we can rephrase Theorem <ref> such thatis the Mosco limit of (_λ)_λ>0 as λ decreases to 0.(c) The operator Ȟ is characterized byȞ= {ψ∈: ∃ϕ∈:(ψ,φ)=(ϕ,φ)_for all φ∈},Ȟψ= ϕ.Since J is a core for , the domain of Ȟ is also given byȞ = {ψ∈: ∃ϕ∈: (ψ,Jv)=(ϕ,Jv)_for all v∈ J }. (d) If =, J dense inand J (and hence also J_λ) is the natural embedding J J∋ u↦ u∈, then = |_ J. Indeed, we then obtain P_λ u = u for all u∈ J and λ>0 and _λ = _λ|_ J for all λ>0. Thus, if J is a core forthen =.(e) In caseis a positive form, but not necessarily closed, we can first consider the closure _ of its regular part and then apply Theorem <ref> to obtain its trace. We now show that our construction agrees with the one obtained in <cit.>.Let a be a form defined by a:= J, a[u]:=[u] for u∈ J. Let A be the operator associated with (a,J) according to <cit.>. Then A=.First, note that a is J-sectorial sinceis positive, and J( J) =J is dense inby assumption on J. Further, since a is symmetric and positive, A is self-adjoint and positive by <cit.>.(i) Let λ>0. Define a_λ:=_λ|_ J. Then a_λ is J-sectorial and J( a_λ) =J is dense in . Let A_λ be the operator associated with (a_λ,J) according to <cit.>. By <cit.>, A_λ is self-adjoint. We show that A_λ = _λ. Indeed, let x∈_λ⊆_λ =J. Let u∈ J such that Ju=x. Then for v∈ J we obtain(_λ x,Jv)_ = _λ(x,Jv) = _λ(Ju,Jv) = _λ(P_λ u,P_λ v) = _λ(P_λ u,v).By Lemma <ref>, P_λ u∈ J and x=Ju = JP_λ u. Define u_n:=P_λ u for all n∈. Then Ju_n = x for all n∈,sup_n∈ a_λ[u_n] = sup_n∈_λ [P_λ u] ≤_λ[u],andlim_n→∞ a_λ(u_n,v) = lim_n→∞_λ(P_λ u,v) = (_λ x,J(v))_for all v∈ J =a_λ. Hence, x∈ A_λ and A_λ x = _λ x, i.e. _λ⊆ A_λ. Since both operators are self-adjoint, they are equal.(ii) Note that a_λ =a and a_λ[u]-a[u] = λu^2≥ 0 for all u∈ a. Moreover, lim_λ→ 0 a_λ [u] = a[u] for all u∈ a, and J( a_λ) = J( J) =J is dense in . By <cit.>, we havelim_λ→ 0 (A_λ+1)^-1 = (A+1)^-1strongly. (iii) By Theorem <ref>, we havelim_λ→ 0 (_λ+1)^-1 = (+1)^-1strongly.Since strong limits are unique we obtain A=.(a) Note that the construction in <cit.> is valid for more general situations than we consider here; J just needs to be linear with dense range and a only needs to be a J-sectorial sesquilinear form. However, then the operator associated with (a,J) is described in a somewhat implicit form in <cit.>. (b) In case J = and J_1 is bounded on (,_1^1/2), we can also apply <cit.> to obtain a self-adjoint operator (which is actually Ȟ) associated with (,J), and then obtainas the form associated with this operator. Note that sinceis symmetric and hence J-sectorial, the constructions in <cit.> and <cit.> agree. Next we proceed to show that our construction is consistent. We start by showing the Dirichlet principle for the form _0, analogously to Theorem <ref>.Let u∈ J. Then_0[Ju]=inf{[v]:v∈ J, Jv=Ju}. By definition of _0 and Theorem <ref> we obtain_0[Ju] = lim_λ↓ 0_λ[Ju] = lim_λ↓ 0inf{_λ[v]:v∈ J, Jv=Ju}≥inf{[v]:v∈ J, Jv=Ju}.Conversely, let v∈ J such that Jv=Ju. Theninf{_λ[w]:w∈ J, Jw=Ju}≤_λ[v]for all λ>0. Passing to the limit leads to _0[Ju]≤[v]. Thus,_0[Ju] ≤inf{[v]:v∈ J, Jv=Ju},which finishes the proof. The following proposition is actually a consequence of Theorem <ref> and <cit.>. However, we shall give an independent proof. Let β>0. The trace of_β is the form_β as given by (<ref>). 
In other words,_β=lim_λ↓ 0_β+λ.Let u∈ J. Applying Lemma <ref> to _β (instead of ) and taking into account Theorem <ref> we obtainlim_λ↓ 0_β+λ[Ju]=inf{_β[v]: v∈ J, Jv=Ju}=_β[Ju].Since _β is closed, Theorem <ref> yields lim_λ↓ 0_β+λ =_β. The following result expresses the fact that some properties of the operator Ȟ are strongly related to those of J_1. A similar result can be found in <cit.> (note that the corresponding construction of traces is different).Let J = and J_1 (,_1)→ be bounded.(a) Let _0 be closed and J_1 compact. Then Ȟ has compact resolvent.(b) Let Ȟ have compact resolvent. Then J_1 is compact.(a)Note that = _0 since _0 is closed. In particular, =J. Let us set()_1:= +·_^2.It is well known that the operator Ȟ has compact resolvent if and only if the embedding i ( J, ()_1^1/2)→ is compact. By the boundedness assumption for J_1 and the definition of _1, we obtainJu_^2=JP_1u_^2≤J_1^2 _1[P_1u]= J_1^2 _1[Ju],for all u∈. Thus, ()_1 is _1-bounded. Since both ( J,()_1^1/2) and ( J,_1^1/2 ) are Hilbert spaces, the latter inequality together with the open mapping theorem yield equivalence of the norms ()_1^1/2 and _1^1/2. As by assumption J_1 is compact then according to<cit.>, J_1J_1^*=Ȟ_̌1̌^-1 is compact as well, which in turn yields compactness of the embedding ( J,_1^1/2)→. Accordingly the embedding ( J,()_1^1/2)→ is compact and Ȟ has compact resolvent.(b)Note that 0≤≤_1 and therefore0≤ (Ȟ_1 +1)^-1≤ (Ȟ +1)^-1.Thus, we obtain(Ȟ_1+1)^-1/2ψ_≤(Ȟ+1)^-1/2ψ_for all ψ∈_.Since (Ȟ+1)^-1 is compact, also (Ȟ+1)^-1/2 is compact. Hence, (Ȟ_1+1)^-1/2 is also compact, which in turn implies the compactness of (Ȟ_1 +1)^-1. Therefore, also Ȟ_1^-1=J_1J_1^* is compact. By <cit.>, J_1J_1^* is compact if and only if J_1 is compact.We shall show in Remark <ref> that the form _0 is closed in caseis J-elliptic. Thus, Theorem <ref>(a) is a generalization of <cit.>. § SOME SPECIAL SITUATIONS FOR CONSTRUCTIONS OF TRACESIn this section we provide concrete relevant situations, in which the trace form is computed explicitly. We start with the following situation. In many applications, especially from PDEs, it may happen that the quadratic formdefines a scalar product on . For this particular situation, we shall give an explicit description of the trace form. Let _ be theabstract completion ofw.r.t. (·)^1/2. Then the quadratic formextends in a natural way to a bounded quadratic form on the Hilbert space (_,) which we still denote by . Suppose that J is -closed. Let ( J)^⊥_ be the -orthogonal complement of J in the Hilbert space (_,) and let P the -orthogonal projection onto ( J)^⊥_. In this framework we construct a form Q, as before, byQ :=J, Q[Ju] := [Pu] for allu∈ J,analogously to (<ref>). Obviously, Q is well-defined.Letdefine a scalar product on . Assume that J is -closed and let Q be the quadratic form defined by (<ref>). Then = Q_. Moreover, if J J⊆ (_,)→ is closed then = Q. Since P is the -orthogonal projection onto ( J)^⊥, for u∈ J we have u-Pu∈ J and therefore Pu∈ J and Ju = JPu. (This is essentially Lemma <ref>; there we only used that J is closed.) Consequently, the Dirichlet principle still holds true. Hence, together with Lemma <ref> this yields for u∈ J Q[Ju] = [Pu] = inf{[v]:v∈ J, Jv=Ju} = _0[Ju].Thus, Q=_0. By Theorem <ref>, we achieve Q_=.Now, assume that J J⊆ (_,)→ is closed. Then J is closed (w.r.t. ). Hence mimicking the proof of <cit.> we conclude that the quadratic form Q defined by (<ref>) is closed. Hence Q = Q_ =. 
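In finite dimensions the whole construction can be made completely explicit, which may serve as a sanity check. The following sketch is an illustration added by the editor (it is not part of the original text and uses arbitrary test data): if E[v] = (v, Av) on R^n with A positive definite and J the restriction to the first k coordinates, then the Dirichlet principle identifies the regularized trace form with a Schur complement of A + λ onto those coordinates, and by monotonicity the values decrease to the trace form as λ ↓ 0.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3                         # ambient dimension, coordinates kept by J
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)             # positive definite: E[v] = <v, A v>

def schur(M, k):
    """min over v with fixed first k coordinates psi of <v, M v> = <psi, S psi>,
    where S is the Schur complement of M onto the first k coordinates."""
    return M[:k, :k] - M[:k, k:] @ np.linalg.solve(M[k:, k:], M[k:, :k])

psi = rng.standard_normal(k)
for lam in [1.0, 1e-2, 1e-4, 1e-6]:
    # trace of the regularized form E_lam[v] = <v, (A + lam) v>
    val = psi @ schur(A + lam * np.eye(n), k) @ psi
    print(f"lambda = {lam:.0e}:  E_lam[psi] = {val:.10f}")
print(f"limit (lam -> 0):       E_0[psi]  = {psi @ schur(A, k) @ psi:.10f}")
```

The printed values decrease monotonically as λ ↓ 0, in accordance with the monotonicity of the family of regularized forms, and converge to the Schur complement value, which plays the role of the trace form in this toy setting.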
Towards providing other situations for which an explicit computation ofis still possible we introduce the vector space:={u∈ J: (u,v)=0 for all v∈ J}.Assume that J decomposes intoa direct sumJ=⊕ J.For each u∈ J let E_ u be the unique element insuch thatu= E_ u + (u-E_ u),where the decomposition is unique. Then E_ is the projection from J ontoalong J, and E_ u can be interpreted as a abstract `harmonic extension' of Ju for u∈ J; cf. Example <ref> for a similar construction inspiring the name. Define _ inby_ := J,_[Ju] := [E_ u] for allu∈ J.Clearly, _ is then well-defined (by the direct sum assumption u∈ J implies E_ u = 0).Assume J=⊕ J. Let u∈ J. Then(E_ u,v) = (u,E_ v)for allv∈ J. Let v∈ J. Then (E_ u,v-E_ v) = 0. Thus(E_ u,v) = (E_ u, E_ v) + (E_ u, v-E_ v) = (E_ u, E_ v).By symmetry,(E_ u,v) = (E_ u, E_ v) = (u, E_ v).Mimicking the proof of Proposition <ref> we obtain:Assume J=⊕ J. Then the trace formcoincides with the closure of the regular part of _. Here is a sufficient condition for _ to be closed and hence for = _. Let J = ⊕ J. Assume that (,) is a Hilbert space. Then _ is closed.Note that (,_1)∋ u↦ u ∈(,) is a contractive bijection between Banach spaces, hence has a continuous inverse. Thus, J is -closed.Let (u_n) in J such that (Ju_n) is a Cauchy-sequence for _ and Ju_n→ũ infor some ũ∈. Then[E_ u_n - E_ u_m] = _[Ju_n-Ju_m] → 0 (m,n→∞),so (E_ u_n) is a Cauchy-sequence for . Since (,) is a Hilbert space, there exists u∈ such that E_ u_n→ u in . Note that E_ u = u. Since Ju_n = JE_ u_n for all n∈ and J is -closed, we obtain u∈ J and Ju = ũ. Hence, ũ = Ju∈_ and _[Ju_n-Ju] = [E_ u_n-E_ u] = [E_ u_n-u]→ 0. Thus, _ is closed. For an application of the situation in Lemma <ref> see <cit.>.By means of Proposition <ref> we can now handle the following case. Assume that J is dense inand define the form _ inby_ :=J,_[u] := [u] for allu∈_Then _ is closed. Indeed, let (u_n) in _, u∈, _[u_n-u_m]→ 0 (m,n→∞), u_n→ u in . Then [u_n-u_m]→ 0. Sinceis closed, we obtain u∈ and [u_n-u]→ 0. Since J_1 is closed and Ju_n = 0→ 0 we obtain u∈ J = _. Thus,_[u_n-u] = [u_n-u]→ 0.Let L_ be the positive self-adjoint operator associated with _. Assume that∩ J={0}.Then L_ is injective. Indeed, let u∈ L_⊆ J ⊆ such that L_ u = 0. Then, for v∈ J we obtain(u,v) = (L_ u, v) = 0.Thus, u∈, and therefore u∈∩ J = {0}. Assume that L_ is surjective. For u∈ and λ>0 setv_λ := λ L_^-1P_λ u + P_λ u. Let J be dense in , ∩ J={0} and L_ surjective.(a) Let u∈ J, λ>0. Then v_λ∈ and Jv_λ=Ju. Furthermore, v_λ is λ-independent.(b) J = ⊕ J. Moreover, E_ u = v_1 for u∈ J.Since L_^-1 =L_⊆ J and P_λ u∈ J for all u∈ J by Lemma <ref>, we get v_λ∈ J and Jv_λ=JP_λ u=Ju.(a) Let v∈ J (=_). Then P_λ u __λ v and therefore(v_λ,v) = λ(L_^-1P_λ u,v) + (P_λ u,v) = λ_(L_^-1P_λ u,v) - λ(P_λ u,v)= λ(P_λ u,v)-λ(P_λ u,v) = 0.Thus, v_λ∈. Hence, for λ,λ'>0 we obtain v_λ-v_λ'∈∩ J={0}.(b) Let u∈ J. Then u = P_λ u + u-P_λ u = v_λ - λ L_^-1P_λ u + u-P_λ u, where v_λ∈ and u-v_λ=-λ L_^-1P_λ u + u-P_λ u∈ J. Hence, making use of assertion (a) we obtain J = ⊕ J.Observing that u=v_1 + (u-v_1)∈⊕ J yields E_ u = v_1 by definition of E_. Let J be dense in , ∩ J={0} and L_ surjective. Then =(_)_.By Lemma <ref> we have J = ⊕ J. Now the result follows from Proposition <ref>. Proposition <ref> is inspired from the construction of the trace of the quad­ratic form associated with the Neumann-Laplacian on bounded open subsets of ^n with Lipschitz boundary (see e.g. <cit.>). The corresponding operator is then the Dirichlet-to-Neumann operator. 
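For the Neumann-Laplacian situation just mentioned, the simplest instance is the interval: with E[u] = ∫_0^1 |u'|^2 on H^1(0,1) and μ = δ_0 + δ_1, the abstract harmonic extension of boundary data (a,b) is the affine function, the trace form is (b-a)^2, and the associated operator is the 2×2 Dirichlet-to-Neumann matrix. The following sketch (an editorial illustration, not from the original text) recovers this by discretizing the form with linear finite elements and taking the Schur complement onto the two boundary nodes; since harmonic functions on an interval are affine, the discrete answer is exact up to rounding.

```python
import numpy as np

m = 1000                       # number of elements on [0, 1]
h = 1.0 / m
# P1 finite-element stiffness matrix of E[u] = int_0^1 |u'|^2 (no boundary condition)
K = np.zeros((m + 1, m + 1))
for i in range(m):
    K[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

bnd = [0, m]                   # boundary nodes x = 0 and x = 1
inr = list(range(1, m))        # interior nodes, eliminated by harmonic extension
S = K[np.ix_(bnd, bnd)] - K[np.ix_(bnd, inr)] @ np.linalg.solve(
        K[np.ix_(inr, inr)], K[np.ix_(inr, bnd)])
print(np.round(S, 8))          # -> [[ 1. -1.], [-1.  1.]]: Dirichlet-to-Neumann matrix
a, b = 2.0, -0.5
print(np.array([a, b]) @ S @ np.array([a, b]), (b - a) ** 2)   # trace form = (b-a)^2
```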
As a next step we shall give another general case where J=⊕ J is fulfilled and hence Proposition <ref> can be applied. Consider the form ^J indefined by^J :=J,^J[u] :=[u]+Ju_^2 for allu∈^J.Then={u∈ J: ^J(u,v)=0 for allv∈ J}.Assume that ^J defines a scalar product on J. Let us denote by ^J the ^J-completion of J and by P^J the ^J-orthogonal projection onto the ^J-orthogonal complement of J.Assume that ^J defines a scalar product on J and J is ^J-closed. Then J=⊕ J. Moreover,_[Ju] = [P^Ju] for allu∈ J. Byassumption we obtain ∩ J={0}. Thus, we have to show that every u∈ J admits a decomposition. Let u∈ J. As, by assumption, J is ^J-closed we obtain u-P^J u∈ J⊆ J. Consequently, P^Ju∈ J. Hence, P^Ju∈ and u=P^Ju + u-P^Ju. Therefore, we obtain J=⊕ J.It remains to prove that P^Ju=E_ u for all u∈ J. Let u∈ J. Then u admits a unique decomposition u = E_ u + (u-E_ u) with E_ u∈. Since u = P^Ju + (u-P^Ju) with P^J u∈, we observe E_ u = P^Ju.(a) Assume thatis J-elliptic, i.e. J is everywhere defined and bounded onand there exist β∈, α>0 such that[u] + βJu_^2≥α_1[u] for all u∈.Then ^J yields a scalar product onand J is ^J-closed. Thus, J is ^J-closed. Hence, applying Proposition <ref> and then Proposition <ref> we obtain = (_)_. Moreover, a straightforward computation shows that the form _ is closed, and therefore= _0 = _.(b) Assume that ∩ J={0}. Then ^J defines a scalar product on J. Indeed, for u∈ J with ^J[u] = 0 we obtain u∈ J and [u] = 0. Hence, by the Cauchy-Schwarz inequality|(u,v)| ≤[u]^1/2[v]^1/2 = 0for all v∈ J and therefore u∈. Hence, u=0. § EXAMPLESIn this section we work out some examples to illustrate our method for constructing traces of forms. Let Ω,Ω_0⊆^d be open and bounded with boundaries Γ:=∂Ω and Γ_0:=∂Ω_0 such that Ω_0⊆Ω. Assume that Γ_0, Γ are C^1. Consider the quadratic formin L^2() given by:= H_0^1(Ω),[u]:=∫_ |∇ u|^2,and let J→ L^2(Ω_0), Ju:=u|_Ω_0. Then (,) is a Hilbert space. Thus, we can constructby means of Proposition <ref>. Let P be the -orthogonal projection onto the -orthogonal complement of J. Obviously,( J)^⊥_ = {u∈ H_0^1(Ω): Δ (u|_Ω∖Ω_0) = 0},and for each u∈ H_0^1(Ω) we have that Pu is the unique element in H^1_0(Ω) such that Δ (Pu|_Ω∖Ω_0)=0 and Pu|_Ω_0=u|_Ω_0. The trace formis given by=J = H^1(Ω_0), (Ju,Jv) = (Pu,Pv) = (Pu,v) for allu,v∈ H_0^1(Ω).Applying Green's formula, we derive(Ju,Jv) = ∫_Ω_0∇ Pu·∇ v + ∫_Ω∖Ω_0∇ Pu·∇ v= ∫_Ω_0∇ u ·∇ v + ⟨γ_1^- Pu,γ_0 v⟩_H^-1/2(Γ_0),H^1/2(Γ_0),where γ_1^- Pu ∈ H^-1/2(Γ_0) is the conormal derivative of Pu|_Ω∖Ω_0 on Γ_0 and γ_0v∈ H^1/2(Γ_0) is the trace of v on Γ_0; cf. <cit.>. Note that if Pu|_Ω∖Ω_0∈ H^2(Ω∖Ω_0) the linear functional γ_1^- Pu on H^1/2(Γ_0) coincides with the strong conormal derivative ∂_ν Pu|_Γ_0 = ∇ Pu|_Ω∖Ω_0·ν∈ L^2(Γ_0), where ν is the outward unit normal on Γ_0 (with respect to Ω∖Ω_0).For u∈ H^1(Ω_0) such that Δ u∈ L^2(Ω_0) we set Pu:=Pũ, where ũ is any extension of u in H^1_0(Ω) (for the existence of such an extension see e.g. <cit.>), and let γ_1^+u ∈ H^1/2(Γ_0) be the conormal derivative of u on Γ_0 (with respect to Ω_0). Let Ȟ be the positive self-adjoint operator associated with ℰ̌. ThenȞ = {u∈ H^1(Ω_0): Δ u∈ L^2(Ω_0), γ_1^+ u + γ_1^- Pu=0},Ȟu = -Δ u.Indeed, note that for f∈ L^2(Ω_0) we have u∈Ȟ and Ȟu=f if and only ifℰ̌(u,v) = (f,v)_L^2(Ω_0)for allv∈ H^1(Ω_0).Let u∈Ȟ. By taking v∈ C_c^∞(Ω_0) we obtain Ȟu=-Δ u. 
Green's formula yields∫_Ω_0∇ u ·∇ v = (-Δ u,v)_L^2(Ω_0) + ⟨γ_1^+u,γ_0 v⟩_H^-1/2(Γ_0),H^1/2(Γ_0).Thus, γ_1^+ u+γ_1^- Pu=0 in H^-1/2(Γ_0).Conversely, if u∈ H^1(Ω_0) such that Δ u ∈ L^2(Ω_0) and γ_1^+ u+γ_1^- Pu=0 in H^-1/2(Γ_0), then for all v∈ H^1(Ω_0) we obtain∫_Ω_0∇ u·∇ v =(-Δ u,v)_L^2(Ω_0) + ⟨γ_1^+u,γ_0 v⟩_H^-1/2(Γ_0),H^1/2(Γ_0) = (-Δ u,v)_L^2(Ω_0) - ⟨γ_1^- Pu,γ_0 v⟩_H^-1/2(Γ_0),H^1/2(Γ_0),and therefore(u,v) = ∫_Ω_0∇ u·∇ v + ⟨γ_1^- Pu,γ_0 v⟩_H^-1/2(Γ_0),H^1/2(Γ_0) = (-Δ u,v)_L^2(Ω_0).Thus, u∈Ȟ and Ȟu=-Δ u.Since the boundary ofis of class C^1 and Ω_0 is bounded, by Rellich-Kondrachov Theorem the embedding (,_1^1/2)→ L^2(Ω_0) is compact. By Theorem <ref> we obtain that Ȟ has compact resolvent. Next, we revisit the 1/2-Laplacian, see <cit.>.Let d∈ and ^d+1_+:=^d× (0,∞). Let :=L^2(^d+1_+), :=L^2(^d), and defineinby:=H^1(^d+1_+),[u]:=∫_^d+1_+|∇ u(x,t)|^2 dx dt.Let J→ be defined by Ju:=γ_0 u, where γ_0 is the trace of u on the boundary of ^d+1_+. Then J is bounded on (,_1^1/2) and J= H^1/2(^d) is dense in . Let ψ∈ J, λ>0. Let u∈ such that Ju=ψ. Then P_λ u is the unique element in H^1(^d+1_+) which solves the boundary value problem-Δ P_λ u+λ P_λ u = 0in ^d+1_+, P_λ u = ψ on ^d.Thus, by Fourier transform with respect to the variable x we obtain an ordinary differential equation|ξ|^2 P_λ u(ξ,t) - ∂^2 P_λ u/∂^2 t(ξ,t) +λP_λ u(ξ,t) = 0 for(ξ,t)∈^d× (0,∞),P_λ u(ξ,0) = ψ̂(ξ)for ξ∈^d.The solution is given byP_λ u(ξ,t)= e^-√(|ξ|^2 +λ) tψ̂(ξ).Hence,_λ[ψ] = _λ[P_λ u] = ∫_0^∞∫_^d |∇ P_λ u(x,t)|^2 dx dt + λ∫_0^∞∫_^d |P_λ u(x,t)|^2 dx dt= ∫_0^∞∫_^d |ξ|^2|P_λ u(ξ,t)|^2 dξ dt + ∫_0^∞∫_^d|∂P_λ u/∂ t(ξ,t)|^2 dξ dt+ λ∫_0^∞∫_^d |P_λ u(ξ,t)|^2 dξ dt.Using Fubini's Theorem and an integration by parts for the second integral in the latter identity we thus obtain_λ[ψ] = ∫_^d√(|ξ|^2 +λ) |ψ̂(ξ)|^2 dξ→∫_^d|ξ||ψ̂(ξ)|^2 dξ.One can easily check that the limiting quadratic form is closed. Hence, from Theorem <ref> we observe that = lim_λ↓ 0_λ, which is nothing else but the closed positive form associated with (-Δ)^1/2 on ^d.Letbe the classical Dirichlet form in L^2(), i.e.:= H^1(),(u,v) := ∫ u' v' for all u,v∈ H^1().Let (a_n)_n∈ be a sequence in (0,∞) and μ:=∑_n∈a_nδ_n. By Sobolev's embedding theorem, every u∈ H^1() has a unique continuous representative ũ. We shall assume that every element in H^1() is continuous. We define the operator J fromto L^2(,μ) byJ:={u∈ H^1(): ∑_n∈ a_n |u(n)|^2 < ∞}, Ju:=u|_for all u∈ J.Then J is densely defined in (,_1^1/2) and the range of J is dense in L^2(,μ). Moreover, J is everywhere defined onand bounded on (,_1^1/2) if and only if (a_n) is bounded. We claim that the operator J is closed in (,_1^1/2). Indeed, let (u^k)_k be a sequence in J such that (u^k)_k converges to u in (,_1^1/2) and (Ju^k)_k converges to v in L^2(,μ). Then, by Sobolev's inequality, the sequence (u^k)_k converges locally uniformly (and therefore pointwise) to u. Thus, u=v μ-a.e., yielding u∈ J and Ju=v.For every λ>0 we obtain_λ=J = {ψ∈ L^2(,μ): ∑_n∈|ψ(n)|^2<∞}, _λ[ψ] = √(λ)/sinh√(λ)∑_n∈|ψ(n+1)-ψ(n)|^2 + 2√(λ)cosh√(λ) - 1/sinh√(λ)∑_n∈ |ψ(n)|^2.Indeed, let u∈ H^1(). By Sobolev's inequality, applied on the intervals (n-1/2,n+1/2), we obtain ∑_n∈|u(n)|^2<∞. Conversely, let ψ∈ L^2(,μ) such that ∑_n∈|ψ(n)|^2<∞. Choose φ∈ C_c^∞() such that φ(0) = 1 and φ(x) = 0 if |x|>1/2. Then (ψ(n)φ(·-n))_n∈ is an orthogonal system in H^1() and∑_n∈ψ(n)φ(·-n)^2_H^1() =∑_n∈|ψ(n)|^2φ^2_H^1()<∞.Thus u:=∑_n∈ψ(n)φ(·-n)∈ H^1(). Since u=ψ μ-a.e., we get ψ∈ J = _λ. Thus, _λ = {ψ∈ L^2(,μ): ∑_n∈|ψ(n)|^2<∞}. 
Obviously,( J_λ)^⊥__λ={u∈ H^1():-u”+λ u=0 in ∖}.Hence, for u∈ H^1(), we observe that P_λ u is the unique element in H^1() such that-(P_λ u)”+λ P_λ u = 0in ∖,P_λ u = uon .An elementary computation yieldsP_λ u = 1/sinh√(λ)(u(n+1)sinh(√(λ)(·-n))-u(n) sinh(√(λ)(·-n-1)))in[n,n+1].For every u∈ J we have_λ[Ju] = _λ[P_λ u] = ∑_n∈( ∫_n^n+1|(P_λ u)'(x)|^2 dx + λ∫_n^n+1|P_λ u(x)|^2 dx).Integrating by parts, we obtain_λ(Ju,Ju) = ∑_n∈ P_λ u (P_λ u)' |_n^n+1 = √(λ)/sinh√(λ)∑_n∈(- u(n+1)u(n) - u(n)u(n+1)) + 2√(λ)cosh√(λ)/sinh√(λ)∑_n∈ |u(n)|^2= √(λ)/sinh√(λ)∑_n∈ |u(n+1)-u(n)|^2 + 2√(λ)cosh√(λ) - 1/sinh√(λ)∑_n∈ |u(n)|^2.Letting λ↓ 0 we obtainlim_λ↓ 0_λ[Ju] = ∑_n∈|u(n+1)-u(n)|^2 .The latter form is closable. Let Q be the form defined by Q = {ψ∈ L^2(,μ): ∑_n∈|ψ(n+1) - ψ(n)|^2<∞}, Q[ψ] = ∑_n∈|ψ(n+1)-ψ(n)|^2.Thenis a closed restriction of Q. Moreover if ∑_n a_n=∞ then =Q. In fact, in the latter caseis the quadratic form associated with the (Neumann) graph Laplacian on the graphwith measure determined by the sequence (a_n), see e.g. <cit.>.Note that the sequence (a_n) appears inonly in an implicit way. In fact, it describes the measure of the space = L^2(,μ), where the trace form, the form associated with the graph Laplacian, is defined on. More examples concerning singular diffusion can be found in <cit.>. § TRACES OF DIRICHLET FORMSIn this section let X be a locally compact separable metric space, m a positive Radon measure with full support X and μ a positive Radon measure on X. We set := L^2(X,m) and :=L^2(X,μ) and assume thatis a regular Dirichlet form in L^2(X,m) with domain . Furthermore, let us assume that μ does not charge any sets of zero capacity.It is well-known (see <cit.>) that every element from the domain of a regular Dirichlet form possesses a quasi-continuous representative. Moreover, two quasi-continuous representatives which coincide m-a.e. coincide quasi-everywhere and hence μ-a.e. (see <cit.>). From now on we assume that all elements fromare quasi-continuous. Let J J:=∩ L^2(X,μ)→ L^2(X,μ), Ju:=u. Then J is well-defined.J is densely defined, has dense range, and J_1 is closed.Clearly, C_c(X)∩⊆ J. Sinceis regular, J dense in (,_1) and sinceis densely defined it is also dense in .Sinceis regular, C_c(X)∩ is dense in C_c(X) (with respect to the uniform norm), which itself is dense in L^2(X,μ). Hence, it is also dense in L^2(X,μ). Since it is a subspace of J, J has dense range.Let (u_n) in J, u∈ and v∈ L^2(X,μ) such that lim_n→∞_1[u_n-u]=0 and Ju_n→ v. By <cit.> there exists a subsequence (u_n_k) such that u_n_k→ u q.e. and hence also μ-a.e. Hence, v=u μ-a.e. and therefore u∈ J and Ju = u = v. Thus, we can construct the trace ofw.r.t. to J as in Theorem <ref>, which we still denote by . The trace formis a Dirichlet form.We first show that _λ is a Dirichlet form for every λ>0. We already know that _λ is densely defined and closed. Thus, to prove that it is in fact a Dirichlet form it remains to show that the unit contraction operates on _λ. Let u∈ J. Then (0∨ u)∧ 1∈∩ L^2(X,μ)= J and (0∨ Ju)∧ 1=J((0∨ u)∧ 1)∈ J=_λ. Furthermore, using the Dirichlet principle in Theorem <ref> together with the fact that _λ is a Dirichlet form, we obtain_λ[(0∨ Ju)∧ 1] = inf{_λ[v]:v∈ J,Jv= J((0∨ u)∧ 1)}≤inf{_λ[(0∨ v)∧ 1]:v∈ J,Jv= Ju}≤inf{_λ[v]:v∈ J,Jv= Ju} = _λ[Ju].Thus _λ is a Dirichlet form.Note thatis densely defined. According to <cit.>, proving thatis a Dirichlet form is equivalent to prove that the operator α(Ȟ+α)^-1 is Markovian for every α>0. Let ψ∈ L^2(X,μ) such that 0≤ψ≤ 1 μ-a.e. 
Owing to the fact that _λ is a Dirichlet form for every λ>0, for every α>0 we have0≤α(Ȟ_̌λ̌ + α)^-1ψ≤ 1 μ-a.e.Since α(Ȟ_̌λ̌ + α)^-1→α(Ȟ + α)^-1 strongly, also α(Ȟ + α)^-1 is Markovian. Let F be the topological support of the measure μ. If we consideras a Dirichlet form in L^2(F,μ) we can get more information on it.The Dirichlet formconsidered in L^2(F,μ) is regular.We first show that _λ is regular for every λ>0. Let λ>0, ψ∈ C_c(F). By Tietze's extension theorem, the function ψ has an extension ψ̃∈ C_c(X). Sinceis regular, by <cit.> there is a sequence (u_k) in C_c(X)∩ such that (u_k)⊆(ψ̃) for all k∈ and u_k-ψ̃_∞→ 0. Hence, (Ju_k) in C_c(F)∩ J and Ju_k →ψ uniformly on F. Now let ψ∈ J. Then there exists u∈ L^2(X,μ)∩ such that ψ=Ju. The regularity ofand the fact that μ is a Radon measure yield the regularity of the Dirichlet form ^J on L^2(X,m) defined by^J := L^2(X,μ)∩,^J[u] := [u] + ∫_X u^2 dμfor all u∈^J,see <cit.>. Thus, there exists a sequence (u_k) in C_c(X)∩ such that _λ^J[u_k-u]→ 0. Therefore, (Ju_k) in C_c(F)∩ J and Ju_k→ Ju in L^2(F,μ). By construction of _λ we obtain_λ[Ju_k - ψ] = _λ[Ju_k - Ju] = _λ[P_λ u_k - P_λ u] ≤_λ[u_k - u] ≤^J_λ [u_k-u] → 0.Hence, _λ is regular.Let us now prove the regularity of . As J⊆, by the first part of the proof we get that C_c(F)∩ is uniformly dense in C_c(F).Note that J is a core for . Thus, it suffices to prove that C_c(F) ∩ J is a core for J. Let ψ∈ J. Since _1 is regular, there exists a sequence (ψ_k)_k∈ in C_c(F)∩ J such that (_1)_1[ψ_k-ψ] → 0. Therefore,()_1[ψ_k-ψ] ≤(_1)_1[ψ_k-ψ] → 0.Hence,is regular. Next, we will establish a formula for Ȟ_1^-1 in terms of the 1-potential. Assume that J_1 is bounded. Then for every ψ∈ L^2(X,μ), the signed measure ψμ has finite energy integral. Let U_1^μψ be the 1-potential of the signed measure ψμ. ThenȞ_1^-1ψ=J_1U_1^μψ.Let us first observe that for every fixed ψ∈ L^2(X,μ) the signed measure ψμ has finite energy integral, i.e. there exists c≥ 0 such that∫ |Jv ·ψ| dμ≤ c(_1[v])^1/2for allv∈.Thus, the 1-potential of ψμ is well-defined and is characterized as being the unique element fromsuch that_1(U_1^μψ,v) = ∫ Jv·ψ dμfor allv∈.Hence, making use of the construction of _1 together with the latter identity we obtain_1(J_1U_1^μψ,Jv) = _1(U_1^μψ,P_1v) = ∫ JP_1v·ψ dμ = ∫ Jv·ψ dμ = (ψ,Jv)_L^2(X,μ)for allv∈, ψ∈ L^2(X,μ).Thus J_1U_1^μψ∈ D(Ȟ_1) and Ȟ_1 J_1U_1^μψ = ψ.We end this section by showing that our construction of the trace of a Dirichlet form coincides with the construction in <cit.>. To this end, let_:= {u X→∪{±∞}:u measurable,|u|<∞ m-a.e.,∃ (u_n)in : [u_n-u_m]→ 0,u_n→ u m-a.e.}.Clearly, _ is a vector space containing , and by <cit.> we can extendto _ by[u]:=lim_n→∞[u_n]for u∈_, where (u_n) is a corresponding approximating sequence. By <cit.>, every element in _ admits a quasi-continuous representative, so without loss of generality we may assume that the elements of _ are quasi-continuous. Note thatis a positive quadratic form on _, but (_,) may not be a Hilbert space. However, ifis a scalar product on , then (_,) is a Hilbert space (this is the so-called transient case) and _ can be identified with the abstract completion ofw.r.t. . We can decompose _ into an -orthogonal sum_ = _,X∖F̃⊕ℋ_F̃:= {u∈_:u=0 q.e. on F̃}⊕{P_F̃ u:u∈_},where F̃ is a so-called quasi-support of μ and P_F̃ is given by a probabilistic expectationP_F̃ u = _(·)(u(X_σ_F̃))for u∈_; cf. <cit.> for details. In caseis a scalar product onwe obtain that P_F̃ is an orthogonal projection on _ w.r.t. 
.We define the form q in L^2(F,μ) byq := {φ∈ L^2(F,μ): ∃ u∈_: u=φ μ-a.e.},q[φ] := [P_F̃u],where u∈_ with u=φμ-a.e.By <cit.>, q is well-defined. Note that for φ∈ q we have φ = P_F̃ u μ-a.e. By <cit.>, q is a regular Dirichlet form, so in particular q is closed. Moreover, J(C_c(X)∩)⊆ J is a core for q. We have = q.By <cit.> we observe the -orthogonal decomposition_ = {u∈_:u=0 μ-a.e.}⊕{P_F̃u:u∈_}. Let λ>0. Let φ∈ J, u∈ J such that Ju=φ. Then_λ[φ] = _λ [P_λ u] = [P_λ u] + λ∫ (P_λ u)^2 dm.Since P_λ u = _(·)(e^-λσ_F̃ u(X_σ_F̃)) by <cit.>, we obtain P_λ u → P_F̃ u m-a.e. For u∈∩ C_c(X) we obtain∫ |P_λ u|^2dm ≤∫ P_F̃ u^2 dm ≤u_∞^2 m( u).Since P_λ u = u q.e. on F̃ by <cit.>, we have P_λ u = u = Ju μ-a.e. Hence, q[Ju] = [P_F̃ P_λ u] = [P_λ u], since P_F̃P_λ = P_λ by the tower property for conditional expectations. Therefore, for φ∈ J(∩ C_c(X)) and u∈∩ C_c(X) such that Ju=φ we obtain_λ[φ] = [P_λ u] + λ∫ (P_λ u)^2 dm = q[φ] + λ∫ (P_λ u)^2 dm → q[φ].Hence, [φ] = _0[φ] = q[φ]. Since J(∩ C_c(X)) is a core for q by <cit.> and it is a core forby (the proof of) Proposition <ref>, we obtain = q.§ CONVERGENCE OF TRACES OF DIRICHLET FORMSLet X be a locally compact separable metric space, m a positive Radon measure on X with full support X anda regular Dirichlet form having domain ⊆ L^2(X,m). Let μ be a positive Radon measure on X charging no set of zero capacity. We consider a sequence () of regular Dirichlet forms with = for all n∈, and a Dirichlet form ^∞ with domain ^∞ =.We make the following three assumptions. First, assume there exists a constant c>0 such thatc^-1[u]≤[u]≤ c[u] for allu∈, n∈∪{∞}. A.1Assumption (<ref>) implies in particular thatandinduce equivalent capacities. Hence we shall use deliberately the abbreviations “q.e.” and “q.c.” to mean with respect to any of these capacities. The second assumption that we will adopt isJ_1 (∩ L^2(X,μ),_1)→ L^2(X,μ), u↦ u is continuous. A.2Note that since J_1 is densely defined by Lemma <ref>, we can then extend J_1 to .For n∈∪{∞} we define as before(∩ L^2(X,μ),_1) → L^2(X,μ),u↦ u.By (<ref>) and (<ref>) alsois continuous and can be extended to . For the third assumption, for n∈∪{∞} let H^n be the positive self-adjoint operator associated with ^n and K^n:=(H^n+1)^-1. Then we assume that for all u∈ L^2(X,m) we haveJ_1^n K^n u → J_1^∞ K^∞ uinL^2(X,μ). A.3 For n∈∪{∞} and λ>0 we denote by ^n_λ the trace of the Dirichlet form _λ w.r.t. the measure μ.Let us recall the definition of Mosco convergence, see <cit.> or <cit.>. Let (_n) be a sequence of positive quadratic forms in a Hilbert space , _∞ a quadratic form in . We say that (_n) Mosco-converges to the form _∞ inprovided (M1) for all (u_n) in , u∈ such that u_n→ u weakly inwe have lim inf_n→∞_n[u_n] ≥_∞[u],(M2) for all u∈ there exists (u_n) insuch that u_n→ u inand lim sup_n→∞_n[u_n]≤_∞[u].Note that for this definition we extend the quadratic forms to the whole space by setting them +∞ for elements not in their domain.Assume (<ref>), (<ref>) and (<ref>). Let () be Mosco-convergent to ^∞. Then:(a) The sequence of trace forms (^n_λ) Mosco-converges to the corresponding trace form _λ^∞ for every λ>0. (b) For every sequence (λ_j) in (0,∞) such that λ_j↓ 0 there exists a sequence (n_j) inwith n_j→∞ such that (^n_j_λ_j) Mosco-converges to the trace form ^∞.First, note that J^∞_1 u =u = J_1 u for all u∈ and n∈.(a) We shall prove the statement for λ=1, the proof for general λ>0 is similar. 
For n∈∪{∞} we define the bounded form Q_n byQ_n := L^2(X,μ), Q_n[ψ] := _1[()^*ψ].Since ()^*ψ∈( ())^⊥_^n_1, from the very definition we obtainQ_n[ψ] = ^n_1[ ()^*ψ] = ∫_X ψ·()^*ψ dμ.Hence Q_n is the closed quadratic form associated to the positive self-adjoint bounded operator ()^*=(Ȟ^n_1)^-1, where Ȟ^n_1 is the operator associated with ^n_1. As Mosco-convergence for forms is equivalent to strong resolvent convergence for the associated operators (see <cit.>) and for bounded self-adjoint operators strong convergence and resolvent convergence are equivalent we are led to prove that (Q_n) Mosco-converges to Q_∞.To prove (M1), let (ψ_n) be a L^2(X,μ)-weakly convergent sequence withweak limit ψ∈ L^2(X,μ). W.l.o.g. we may assume that lim inf_n→∞ Q_n[ψ_n] = lim_n→∞ Q_n[ψ_n] (otherwise choose a suitable subsequence). First, note that (ψ_n) is bounded. By (<ref>) and (<ref>) we easily obtain that sup_n∈∪{∞}()^* = sup_n∈∪{∞}<∞. Thus,sup_n∈ Q_n[ψ_n] = sup_n∈_1[()^*ψ_n] ≤sup_n∈()^*^2 ψ_n^2 < ∞.In particular,sup_n∈_1[()^*ψ_n]<∞.For the rest of the proof we shall use Kuwae's method (see <cit.>) as follows: For n,m∈∪{∞} defineE_n:=(𝒟,(_1)^1/2),𝒞_n:=,Φ_m,n𝒞_m → E_n,Φ_m,nu:=(K^m)^-1u.Then Φ_m,mu = u for all u∈𝒞_m. Furthermore, as Mosco-convergence of forms is equivalent to strong resolvent convergence of the associated operators, for v∈𝒞_∞ =K^∞ and u∈ L^2(X,m) such that K^∞ u = v we get_1[Φ_∞,nv] = _1[ u]=∫_X uu dm→∫_X u K^∞ u dm = _1^∞[K^∞ u] = _1^∞[v].Hence, (E_n)_n∈ converges to E_∞ in the sense of Kuwae and assumption <cit.> is fulfilled.Let u∈ L^2(X,m). By (<ref>) we have u → J_1^∞ K^∞ u in L^2(X,μ). For n∈ define w_n:= K^∞ u∈ K^∞. Then clearly w_n→ K^∞ u in E_∞, andlim_k→∞lim sup_n→∞_1[Φ_∞,n w_k -u] = lim_k→∞lim sup_n→∞_1[ u - u] = 0.Hence, u → K^∞ u strongly in the sense of Kuwae (see <cit.>).By (<ref>), an application of <cit.> yields that there exists a subsequence ((J^n_k_1)^*ψ_n_k)_k and u_∞∈ E_∞ such that for all u∈ L^2(X,m) we havelim_k→∞^n_k_1((J^n_k_1)^*ψ_n_k, K^n_k u) = _1^∞(u_∞,K^∞ u)(weak convergence in the sense of Kuwae <cit.>). Since (ψ_n) is weakly convergent to ψ and ( u) is strongly convergent to J_1^∞ K^∞ u we also obtain^n_k_1((J^n_k_1)^*ψ_n_k, K^n_k u) = ∫ψ_n_k J^n_k_1 K^n_k udμ→∫ψ J^∞_1 K^∞ u dμ = ^∞_1((J^∞_1)^* ψ, K^∞ u).Thus, _1^∞(u_∞,K^∞ u) = ^∞_1((J^∞_1)^* ψ, K^∞ u) for all u∈ L^2(X,m). Since K^∞ =H^∞ is a core for ^∞_1, we conclude u_∞ = (J^∞_1)^*ψ.From <cit.> we then getlim inf_n→∞Q_n[ψ_n] = lim_n→∞Q_n[ψ_n] = lim inf_k→∞Q_n_k[ψ_n_k] = lim inf_k→∞^n_k_1[(J^n_k_1)^*ψ_n_k] ≥_1^∞[(J^∞_1)^*ψ] = Q_∞[ψ],and (M1) is proved.To prove (M2), let ψ∈ L^2(X,μ). We will use ψ_n:=ψ for all n∈. Without loss of generality, we may assume that lim sup_n→∞ Q_n[ψ] = lim_n→∞ Q_n[ψ] (otherwise choose a suitable subsequence). By (<ref>) and (<ref>) the sequence ((J^n_1)^*ψ)_n is bounded with respect to ^∞_1. By choosing a suitable subsequence, we may assume that ((J^n_1)^*ψ)_n converges weakly to some u_∞∈ with respect to ^∞_1. Thus,∫_X u (J^n_1)^*ψ dm = ^∞_1(K^∞ u,(J^n_1)^*ψ) →^∞_1(K^∞ u,u_∞) = ∫_X u u_∞ dmfor all u∈ L^2(X,m). By (<ref>) and reasoning as in the proof of (M1) the sequence ((J^n_1)^*ψ)_n has a subsequence ((J^n_k_1)^*ψ)_k which converges weakly in the sense of Kuwae to some u_∞'∈ E_∞. In particular,∫_X u (J^n_k_1)^* ψ dm = ^n_k_1(K^n_k u, (J^n_k_1)^* ψ) →^∞_1(K^∞ u, u_∞') = ∫_X u u_∞' dmfor all u∈ L^2(X,m). Thus, u_∞=u_∞'. 
Since also∫_X u (J^n_k_1)^* ψ dm = ^n_k_1(K^n_k u, (J^n_k_1)^* ψ) →^∞_1(K^∞ u, (J^∞_1)^* ψ) = ∫_X u (J^∞_1)^* ψ dmfor all u∈ L^2(X,m) as in the proof of (M1), we obtain u_∞ = (J^∞_1)^* ψ. Since J^∞_1 is linear and continuous, it is also weakly continuous. Hence, J^n_k_1 (J^n_k_1)^*ψ = J^∞_1 (J^n_k_1)^*ψ→ J^∞_1 (J^∞_1)^*ψ weakly in L^2(X,μ). Thus,lim sup_n→∞ Q_n[ψ] = lim_n→∞ Q_n[ψ] = lim_k→∞ Q_n_k[ψ] = lim_k→∞^n_k_1[(J^n_k_1)^*ψ]= lim_k→∞∫ψ J^n_k_1 (J^n_k_1)^*ψ dμ = ∫ψ J^∞_1 (J^∞_1)^*ψ dμ = ^∞_1[(J^∞_1)^*ψ] = Q_∞[ψ]. (b) According to <cit.>, the topology of Mosco-convergence on the space of closed forms on a Hilbert space is metrizable. Thus (b) is simply a consequence of a diagonal procedure. (a) For (<ref>) it suffices to require that there exists c>0 such thatc^-1[u]≤[u]≤ c[u] for allu∈, n∈.Then Mosco-convergence of () to a Dirichlet form ^∞ yields (<ref>).(b) Note that in Theorem <ref>, compared to <cit.>, we just require Mosco convergence of (^n) to ^∞. The following lemma can be used to obtain (<ref>). Assume (<ref>) and (<ref>). Let () be Mosco-convergent to ^∞ and assume that [v]→^∞[v] for all v∈ K^∞. Then (<ref>) is satisfied.We make use of the notation introduced in the proof of Theorem <ref>. Let u∈ L^2(X,m).Since u → K^∞ u in L^2(X,m) (cf. Remark <ref>(b)), as in (<ref>) we obtain_1[ u - K^∞ u] = _1[ u] - 2_1( u, K^∞ u) + _1[K^∞ u] = _1[ u] - 2 ∫_X u K^∞ u dm + _1[K^∞ u]→^∞_1[K^∞ u] - 2 ∫_X u K^∞ u dm + ^∞_1[K^∞ u] = ^∞_1[K^∞ u] - 2^∞_1[K^∞ u] + ^∞_1[K^∞ u] = 0.By (<ref>) we conclude^∞_1[ u - K^∞ u] → 0.Since J^∞_1 is continuous, we have J_1^nK^n u = J_1^∞ K^n u → J_1^∞ K^∞ u in L^2(X,μ). The following counter-example shows that if (<ref>) fails (whereas (<ref>) still hold true) then the conclusions of Theorem <ref> may fail! Let X:=[0,1], m the Lebesgue measure on X, :=L^2(X,m) = L^2(0,1),the classical Dirichlet form with Neumann boundary conditions, i.e.:= H^1(0,1),[u] := ∫_0^1 u'(x)^2 dx,and μ := δ_0+δ_1. For n∈ defineinby[u] := 1/n∫_0^1 u'(x)^2 dx + u(0)^2 + u(1)^2 for allu∈.We shall identify the space L^2(X,μ) with the Euclidean space ^2. Clearly,is a regular Dirichlet form. By Sobolev embedding, elements fromhave continuous representatives. Moreoverwe see that J is densely defined with dense range, J_1 is bounded and the 's are closed. This is indeed all we need. Furthermore, for every n∈ and u∈ we have[(0∨ u)∧ 1]≤[u].Hence the 's are Dirichlet forms. However, assumption (<ref>) is not fulfilled in this particular case. Indeed, for u∈ H^1_0(0,1)⊆ and n∈ we observe [u] = 1/n[u].By <cit.> the sequence () converges in the sense of Mosco to the closure of the regular part of the quadratic form q defined byq:=,q[u] := u(0)^2 + u(1)^2.However, it is well-known that q = q_ sing (compare <cit.>; but it is also easy to see) and hence → 0 in the sense of Mosco. Obviously, in this situation we have ^∞ =0. Furthermore, for every λ>0, the ^∞_λ-orthogonal complement of J is {0}. Hence, _λ^∞=0 for all λ≥ 0.We shall show that ^n_λ↛0 in the sense of Mosco, for any λ≥ 0. Let us first compute ^n_λ for λ≥ 0. 
To this end, for given a,b∈, we solve the boundary value problem-1/nu” + λ u =0 in(0,1),u(0) =a, u(1) = b.For λ>0, the solution is given byu_a,b(x) = au_1(x) + bu_2(x)for allx∈ [0,1],whereu_1(x) := sinh (√(nλ)(1-x))/sinh√(nλ),u_2(x) := sinh (√(nλ)x)/sinh√(nλ)for allx∈[0,1].From the definition of ^n_λ we get^n_λ[(a,b)] = _λ[u_a,b] = 1/nu_a,b(1)u_a,b'(1)-1/nu_a,b(0)u_a,b'(0) + a^2 + b^2= -√(nλ)/sinh(√(nλ))2ab/n + (a^2+b^2)√(nλ)/ncosh(√(nλ))/sinh(√(nλ)) + a^2 + b^2.Moreover, for λ=0 an elementary computation yields^n[(a,b)] = 1/n (b-a)^2 + a^2 + b^2 for alla,b∈.Therefore, for all λ≥ 0 we obtainlim_n→∞^n_λ[(a,b)]=a^2+b^2.Since the limit form is bounded on ^2, by <cit.> we conclude that for each λ≥ 0 the sequence (^n_λ) converges in the sense of Mosco to the Euclidean scalar product on ^2. § ACKNOWLEDGEMENT We thank the referee for many valuable comments which improved the manuscript. In particular, they led to clarifications yielding Theorem <ref> and a strengthening of Lemma <ref>.
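As a numerical postscript (added here for illustration; not part of the original text), the explicit sinh/cosh formula for the trace forms in the final counter-example can be evaluated directly, confirming both the λ ↓ 0 expression (b-a)^2/n + a^2 + b^2 and the limit a^2 + b^2 as n → ∞, i.e. the Euclidean scalar product appearing as the Mosco limit.

```python
import numpy as np

def E_check(n, lam, a, b):
    """Explicit trace form of the counter-example for lam > 0."""
    s = np.sqrt(n * lam)
    return (-s / np.sinh(s) * 2 * a * b / n
            + (a**2 + b**2) * s / n * np.cosh(s) / np.sinh(s)
            + a**2 + b**2)

a, b = 1.3, -0.7
lam = 1e-8                                  # lam -> 0 recovers (b-a)^2/n + a^2 + b^2
for n in [1, 10, 100, 10000]:
    print(n, E_check(n, lam, a, b), (b - a)**2 / n + a**2 + b**2)
print("n -> infinity limit:", a**2 + b**2)  # Mosco limit: Euclidean scalar product
```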
http://arxiv.org/abs/1706.08314v3
{ "authors": [ "Hichem BelHadjAli", "Ali BenAmor", "Christian Seifert", "Amina Thabet" ], "categories": [ "math.FA" ], "primary_category": "math.FA", "published": "20170626104541", "title": "On the construction and convergence of traces of forms" }
^1Center for Spinelectronic Materials and Devices, Department of Physics, Bielefeld University, Universitätsstraße 25, 33615 Bielefeld, Germany ^2Institut für Physik, Greifswald University, Felix-Hausdorff-Strasse 6, 17489 Greifswald, Germany ^3I. Physikalisches Institut, Justus Liebig University Giessen, Heinrich-Buff-Ring 16, 35392 Giessen, Germany ^4Institute for Metallic Materials, IFW Dresden, Helmholtzstraße 20, 01069 Dresden, Germany ^5Physics of Nanodevices, Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands We investigate the influence of the barrier thickness of Co_40Fe_40B_20 based magnetic tunnel junctions on the laser-induced tunnel magneto-Seebeck effect. Varying the barrier thickness from 1 nm to 3 nm, we find a distinct maximum in the tunnel magneto-Seebeck effect for a barrier thickness of 2.6 nm. This maximum is independently measured for two barrier materials, namely MgAl_2O_4 and MgO. Additionally, samples with an MgAl_2O_4 barrier exhibit a high thermovoltage of more than 350 μV in comparison to 90 μV for the MTJs with MgO barrier when heated with the maximum laser power of 150 mW. Our results allow for the fabrication of improved stacks when dealing with temperature differences across magnetic tunnel junctions for future applications in spin caloritronics, the emerging research field that combines spintronics and thermoelectrics. Enhancement of thermovoltage and tunnel magneto-Seebeck effect in CoFeB based magnetic tunnel junctions by variation of the MgAl_2O_4 and MgO barrier thickness Torsten Huebner,^1 Ulrike Martens,^2 Jakob Walowski,^2 Alexander Boehnke,^1 Jan Krieft,^1 Christian Heiliger,^3 Andy Thomas,^4 Günter Reiss,^1 Timo Kuschel,^1,5 Markus Münzenberg^2 Electronic mail: [email protected] December 30, 2023 =========================================================================== § INTRODUCTION In recent years, the combination of the spintronic magnetic tunnel junction (MTJ) and a temperature gradient has been studied intensively. Since these experiments combine spin, charge and heat driven currents, they are prominent examples of the hot topic of spin caloritronics <cit.>. At first, the tunnel magneto-Seebeck (TMS) effect was predicted <cit.> and measured with two different techniques <cit.>. Later on, the experimentally even more challenging tunnel magneto-Peltier effect, which is reciprocal to the TMS effect, was observed as well <cit.>. Subsequent studies focused on the increase of effect sizes, film quality and the overcoming of experimental challenges. In particular, a giant TMS ratio of -3000 % was found when applying an additional bias voltage across the MTJ <cit.>, a significant improvement of the TMS ratio was obtained with the use of half-metallic electrodes from ferromagnetic Heusler compounds such as Co_2FeAl or Co_2FeSi <cit.>, and parasitic effects originating from semiconducting substrates were clarified <cit.>. Additionally, in a preceding publication <cit.>, we compared the laser-induced TMS with the method of the intrinsic TMS, which uses a symmetry analysis of the tunneling current with respect to the applied voltage. The Brinkman model <cit.> offered an alternative way to explain the symmetric contribution previously associated with the intrinsic TMS. Thus, we concluded that it is not possible to explicitly observe an intrinsic TMS.
Up to now, theoretical works focused only on 6 or 10 atomic layers of barrier thickness, respectively, and on the electrode/barrier interface, which hugely influences not only the TMR but also the TMS effect <cit.>. Fe-Co/MgO is often used as a model system within these studies due to the large computational effort that is necessary, e.g., to model the TMS for materials with a more complex crystal structure. Furthermore, Fe-Co/MgO exhibits coherent tunneling of the electrons via Δ_1 states and, thus, ensures high TMR ratios needed for applications. A combination of an additionally applied temperature gradient and the continuing improvement of Seebeck voltages and TMS ratios may support the development of 'milivolt switches' <cit.> based, for example, on the thermal spin-transfer torque <cit.>. Previous TMS measurements concentrated on the established MTJ system of Co-Fe(CoFeB)/MgO with a standard barrier thickness of around 2 nm. Therefore, we investigate the system of CoFeB and MgAl_2O_4 (MAO) with different barrier thicknesses and junction sizes in order to maximize the TMS effect. Theoretically, MAO exhibits an advantageous lattice mismatch (1 %) with standard ferromagnetic electrodes such as Fe, CoFe or CoFeB when compared to MgO ((3-5) %) <cit.>. As a barrier, MAO also enables coherent tunneling via the Δ_1 symmetry filter effect <cit.>. So far, experimental results of the TMR effect in MAO MTJs fall short in comparison to MgO MTJs <cit.>, but, for example, magnetization switching by spin-transfer torque has been demonstrated <cit.>. Additionally, by growing MAO barriers via molecular beam epitaxy, MgAl_2O_x double-barrier MTJs exhibit almost no lattice mismatch between electrode and barrier showing pronounced resonant tunneling features in quantum well structures <cit.>. As a direct comparison with recent experiments and theoretical predictions, we compare our results for MAO barriers with CoFeB/MgO MTJs.This paper is organized as follows: Sec. <ref> starts with the sample deposition and preparation, followed by Sec. <ref>, which is split into three subsections. Here, Sec. <ref> deals with the results of the TMR and TMS measurements, Sec. <ref> with the results of the I/V curves and Sec. <ref> with the thermovoltage and COMSOL evaluation. Section <ref> concludes this paper.§ SAMPLE DEPOSITION AND PREPARATION We prepared different sample series in order to give a detailed overview concerning reproducibility and comparability. The MAO and MgO MTJs are sputtered in a Leybold Vakuum GmbH CLAB 600 cluster tool at a base pressure of less than 5·10^-7 mbar. This system allows the deposition of several samples without exposing them to ambient conditions in between sputtering processes. The whole stack of all series is composed of a bottom contact of Ta 10/Ru 30/Ta 5/Ru 5, a tunnel junction of MnIr 10/Co_40Fe_40B_20 2.5/barrier/Co_40Fe_40B_20 2.5 and a top contact of Ta 5/Ru 30/Ta 5/Au 60 (numbers are nominal thicknesses in nm). The resulting sample series are summarized in Tab. <ref>.To achieve the exchange biasing of the ferromagnetic electrode by the MnIr, the stacks are post annealed at 350 ^∘C for one hour, followed by cooling in a magnetic field of 0.7 T. Electron beam lithography and ion beam etching is used to pattern elliptical junctions of 0.5π μm^2, 2π μm^2 and 6π μm^2 with the major axis being twice as large as the minor axis. Ta_2O_5 (120 nm) is used as insulating material between individual MTJs and Au bond pads serve as electrical contacts and heat absorbers. 
More details can be found in preceding publications <cit.>.§ RESULTS§.§ TMR and TMS results Figure <ref>(a) shows the resistance area (RA) products of both series in dependence of the nominal barrier thicknesses in the parallel magnetization alignment. Please note that the error bars of the RA product are too small to be seen, indicating an overall homogenous sample quality of all series. As expected, the RA product increases exponentially with increasing nominal barrier thickness. In addition, the RA products of the independently prepared samples within series I fit very well together, indicating that the nominal is close to the real barrier thickness. Since the RA product is mostly governed by the barrier, it is noteworthy that the different barrier materials lead to comparable RA values between the series. Two minor loops of the samples with the highest TMS ratios are shown in Fig. <ref>(b,c). Here, the nominal barrier thickness is 2.6 nm and the junction size is 0.5πμm^2 in both cases. Despite the high resistance resulting from the thick barrier of 2.6 nm, both MTJs show clear parallel and antiparallel states with the same switching behavior for TMS and TMR measurements. The extracted TMS (TMR) ratio amounts to 8 % (18 %) for MAO whereas it is 28 % (130 %) for MgO. The sample with MAO barrier shows a very high thermovoltage of around 375 μV in contrast to around 80 μV in case of an MgO barrier when using a laser power of 150 mW.Figure <ref>(a) summarizes the results of the TMR measurements of series I and II in dependence of the RA product. For each barrier thickness several elements as well as different element areas are measured and averaged. Firstly, both barrier materials show TMR maximum values (MAO: 30 %, MgO: 150 %) around a nominal barrier thickness of 2 nm (RA_MAO≈100 kΩμm^2, RA_MgO≈1000 kΩμm^2). Secondly, the series with MgO barrier exhibits a second peak of the TMR for a barrier thickness of 1.9 nm (RA=10 kΩμm^2). This peak might be directly related to the slightly increased RA product (c.f. Fig. <ref>(a)) in this region.The dependence of the TMS ratio on the barrier thickness of both series is shown in Fig. <ref>(b). Thin barriers of MAO exhibit a gradual increase of TMS ratios from 3 % to 4 %, while a distinct maximum is observed for a nominal barrier thickness of 2.6 nm. Here, the TMS ratio doubles to 8 %. Furthermore, the TMS ratio of the MTJs with MgO barrier shows a similar behavior. It rises from 14 % to 19 % in case of thin barriers and shoots up to almost 28 % for a nominal barrier thickness of 2.6 nm. In between, a local maximum is observable that directly corresponds to the position of the local TMR maximum. Usually, a direct correlation between TMR and TMS is not expected. For both barrier materials, the TMS peak is located around the same value of RA of some 10^3 kΩμm^2, which corresponds to a nominal barrier thickness of 2.6 nm. Also, the TMS ratios of the samples prepared separately correspond well to the results of the rest of series I. In contrast to our experiments, theoretical calculations predict an increasing TMS ratio when going down from 10 monolayers (2 %) to 6 monolayers (10 %) of MgO <cit.>(1 ML ≅ 2.1 Å). A reason for these opposite results might be a different interface structure of the electrode and the barrier, which is assumed to be perfectly ordered in the calculations. §.§ I/V measurements Figures <ref>(a,b) show the dJ/dV (recalculated from I/V measurements) curves that are measured at the same elements as in Figs. <ref>(b,c). 
While the curves look similar in case of an MTJ with MAO barrier, they look very different in case of the MTJ with MgO barrier. This difference is due to the coherent tunneling of MgO based MTJs: The parallel curve is almost linear, while the antiparallel curve exhibits a pronounced kink around a bias voltage of 0 V. Since the MAO MTJs exhibit a rather low TMR, we conclude that no Δ_1 symmetry filter effect and, thus, no coherent tunneling is present in the MTJs with MAO barrier. In order to further analyze the MTJs with MAO barrier, we use the Brinkman model, which allows us to calculate the barrier height φ, the barrier asymmetry Δφ and the barrier thickness d_B from I/V measurements. A theoretical description of this model can be found in Ref. <cit.>, while experimental details are described in Ref. <cit.>. With this model, we are able to quantitatively compare the samples with different MAO barrier thicknesses. One drawback of the Brinkman model is its limitation to MTJ systems that do not show coherent tunneling. In addition, it is not able to explain features resulting from DOS related effects, such as half-metallic ferromagnetism. Thus, it is not possible to extract physically reasonable barrier parameters of the MTJs with MgO barrier, because of the coherent tunneling resulting from the Δ_1 symmetry filter effect. Figure <ref>(c) depicts the relative deviation of the calculated Brinkman barrier thickness (d_B) from the nominal barrier thickness (d_N). With respect to the usual error range of the Brinkman model of 10 % (marked by the gray area), most of the elements are very close to the nominal barrier thickness. Except for the sample with an MAO barrier thickness of 1.4 nm, this deviation does not exceed 15 %. An additional requirement of the Brinkman model to be applicable is d_N > 1.0 nm. Apparently, the nominal barrier thickness of 1.4 nm is too close to this limit, resulting in huge variations of the Brinkman barrier parameters. In general, the calculated Brinkman thicknesses depicted by the green squares in Fig. <ref>(c) are larger in the antiparallel (light) than in the parallel state (dark). For the barrier height and the barrier asymmetry in Figs. <ref>(d,e), we find a reversed behavior. Here, the parallel values (Fig. <ref>(d)) are generally larger than the antiparallel ones (Fig. <ref>(e)). Again, the results of the sample with a barrier thickness of 1.4 nm MAO show a huge variation, while the results of all other samples are very consistent, even between different junction sizes. Excluding the results of the sample with an MAO barrier of 1.4 nm, the barrier height decreases with increasing barrier thickness from 3 eV to 1.7 eV (1.9 eV to 1.5 eV) in the parallel (antiparallel) state. Additionally, the barrier asymmetry increases from -1 eV to 0.5 eV in the parallel state, while it increases from -0.1 eV to 0.4 eV in the antiparallel state. Overall, the calculated values of the samples that have been prepared independently from the rest of the series (1.8 nm and 2.0 nm) show almost no deviation from the general trend in case of the barrier height. However, the values of the barrier asymmetry are different for the independently prepared samples. A possible explanation for this difference might be the deposition process, which plays a vital role for the barrier asymmetry. §.§ Thermovoltages and Seebeck coefficients In order to investigate the high thermovoltages of the sample with MAO barrier, Fig. <ref>(a) depicts the thermovoltage as a function of the MTJ area.
Furthermore, the remaining thermovoltage after a dielectric breakdown of the junction is shown (see inset of Fig. <ref>(a)). Thus, it is possible to deduce the contribution of the intact tunneling barrier. The dielectric breakdown is confirmed via an additional TMR measurement after applying 3 V to the junction. During the breakdown, the resistance changes from the MΩ- to the Ω-range. After the breakdown, both the TMR and the TMS do not show any effect of magnetization switching (see inset of Fig. <ref>(a)). Clearly, around 70 % of the absolute thermovoltage is caused by the intact tunneling barrier in case of small MTJ areas. This contribution decreases to 32 % for larger MTJ areas. Since the laser has a spot size of 11 μm^2 when focused onto the MTJ <cit.>, one possible explanation for the decrease is the occurrence of non-homogeneous heating. Thus, additional lateral heat flows emerge, effectively lowering the temperature difference across the barrier and, ultimately, the measured thermovoltage as well. The regime of homogeneous heating is indicated by the gray shaded area in Fig. <ref>(a), which represents the laser spot size. In future experiments, intermediate MTJ sizes could offer a more detailed insight into the processes of non-homogeneous heating and in-plane temperature differences. With additional in-plane temperature differences, Nernst effects and additional magnetothermopower contributions become possible, which are not taken into account in current TMS experiments. Figure <ref>(b) sums up the absolute thermovoltages as a function of the RA product. Here, a laser power of 150 mW is applied to MTJs with an area of 6π μm^2. In case of MAO, a drop of about 20 μV in the absolute thermovoltage is measured for barrier thicknesses of 1.8 nm and 2.0 nm, which correspond to the samples that were prepared separately from the rest of the series. Since the RA products, the barrier heights and the Brinkman barrier thicknesses of series I are in good agreement with each other, the only difference is the barrier asymmetry. All other MTJs with MAO barriers show a thermovoltage that is consistently larger by a factor of up to 2 in comparison to the MTJs with MgO barrier. In general, an increasing barrier thickness results in an increased temperature difference and, ultimately, in an increased measured thermovoltage. Thus, it is most likely that the contribution of the remaining stack to the absolute thermovoltage is different for the MTJs with MAO barrier thicknesses of 1.8 nm and 2.0 nm, for example via different lead contributions. Excluding the two samples with MAO barriers, the difference between series I and series II is explainable by the different thermal conductivities of thin MAO and MgO films, resulting in different temperature differences across the barrier and, thus, different thermovoltages. Bulk MAO has a thermal conductivity of 23 W/K·m <cit.>, while bulk MgO has a thermal conductivity of 48 W/K·m <cit.>. In Ref. <cit.>, the thermal conductivity of thin MgO films is also experimentally determined to be 4 W/K·m. Taking the same reduction factor for thin MAO films, resulting in a thermal conductivity of 2.3 W/K·m, a COMSOL simulation offers insight into the actual temperature difference across the whole stack. Figure <ref>(a) displays the result of this simulation for the interesting range of thermal conductivity and two barrier thicknesses. Accordingly, the thin film regime is shown in Fig. <ref>(b); a simple one-dimensional estimate given below conveys the magnitude of the resulting temperature differences.
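The following sketch is a rough editorial estimate, not the COMSOL model of the main text: it treats the layers as one-dimensional thermal resistances, ΔT = q·d/κ per layer, with the areal heat flux q given by the absorbed laser power over the spot area. The absorbed power fraction and the Ru thin-film conductivity are assumptions, and lateral heat flow, which matters for the larger junctions, is neglected.

```python
# Rough 1D series-resistance estimate of per-layer temperature drops,
# Delta_T = q * d / kappa; not a substitute for the full COMSOL simulation.
P_laser = 150e-3        # W, maximum laser power (from the text)
f_abs = 0.5             # assumed fraction of the power absorbed in the Au contact
A_spot = 11e-12         # m^2, focused laser spot size (from the text)
q = f_abs * P_laser / A_spot            # areal heat flux in W/m^2

layers = [                              # (label, thickness in m, kappa in W/(K m))
    ("MgO barrier, 2.6 nm (thin-film value)", 2.6e-9, 4.0),
    ("MAO barrier, 2.6 nm (scaled value)",    2.6e-9, 2.3),
    ("Ru lead, 30 nm (assumed kappa)",        30e-9, 40.0),
]
for label, d, kappa in layers:
    print(f"{label:40s} dT ~ {q * d / kappa:5.1f} K")
```

With these assumptions the barrier alone sustains a drop of several kelvin, i.e. the kelvin scale suggested by the small spot size, in contrast to the much smaller gradients obtained with the larger spots of earlier studies.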
Since the area of the MTJs (1.6 μm^2) is smaller than the area of the focused laser beam (11 μm^2), the MTJs are heated homogeneously. The temperature differences become very large in comparison to the values of preceding publications <cit.> (there, the laser spot area was usually around 240 μm^2), since most of the laser beam energy is directly absorbed above the MTJ instead of a larger area of the Au bond pad. A systematic study of the influence of the laser spot size can be found in Ref. <cit.>. Of course, with the lack of actual measurements of the thermal conductivity of thin insulating films, COMSOL simulations offer only a limited insight into the actual thermal distribution inside an MTJ. Hence, there is an ongoing discussion about the actual thermal conductivity of thin insulating films <cit.>. With the simulated temperature differences, the Seebeck coefficients for the MAO and the MgO MTJ with the highest TMS ratios are calculated (via TMS = (S_p-S_ap)/min(|S_p|,|S_ap|)) to be S_p=-51 μV/K and S_ap=-56 μV/K for MAO and S_p=-12 μV/K and S_ap=-15 μV/K for MgO, which is in good agreement with previous results <cit.>. § CONCLUSION We have studied the dependence of the laser-induced TMS effect on the barrier thickness of MAO and MgO MTJs and found a distinct maximum of the TMS ratio in case of thick barriers (nominal barrier thickness of 2.6 nm) for both materials. The TMS ratio increased from (3 to 4) % to 8 % for MTJs with MAO barrier, while the TMS ratio for MTJs with MgO barrier increased from around 15 % to 28 %. We found no experimental evidence of enhanced interface effects, which could explain the predicted increase of the TMS effect in case of thin barriers. The Brinkman model offered detailed insight into the barrier heights and asymmetries of the MTJs with MAO barrier. In addition, the extracted Brinkman barrier thicknesses provided a convenient way to compare samples with different nominal barrier thicknesses. Furthermore, we measured very large thermovoltages of more than 350 μV at the smallest MTJs of 0.5π μm^2 with an MAO barrier, in contrast to 80 μV for MTJs with a barrier of MgO. This difference is also reflected in the dependence of the thermovoltage on the barrier thickness. Here, MAO barriers show a thermovoltage that is larger by a factor of two in comparison to MgO barriers. Additionally, the MTJ with MAO barrier exhibits Seebeck coefficients that are thrice as large as for MTJs with MgO barrier (S_p,MAO=-59 μV/K vs. S_p,MgO=-18 μV/K) taking the reduced thermal conductivity of thin insulating films into account. Thus, we conclude that MAO is generally preferable as a barrier material when generating thermovoltages in MTJs. Still, further effort is needed to determine the real thermal conductivities of thin insulating films.
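As a supplementary illustration of the Brinkman analysis used above (an editorial sketch; the function name is ours and the parameter values are merely representative of the ranges reported in this work), the quadratic conductance expansion of Brinkman, Dynes and Rowell for a trapezoidal barrier can be coded directly. In practice, one would fit a measured dJ/dV parabola, e.g. with np.polyfit, and invert the three coefficients for φ, Δφ and d_B.

```python
import numpy as np

# Quadratic expansion of the normalized differential conductance of a
# trapezoidal tunnel barrier (valid for d > 1 nm and moderate asymmetry):
#   G(V)/G(0) = 1 - (A0*dphi/(16*phi**1.5))*eV + (9/128)*(A0**2/phi)*(eV)**2,
#   with A0 = 4*d*sqrt(2m)/(3*hbar).
SQRT2M_OVER_HBAR = 5.123        # nm^-1 eV^-0.5 for the free electron mass

def brinkman_G(V, phi, dphi, d):
    """Normalized conductance G(V)/G(0); V in volt, phi/dphi in eV, d in nm."""
    A0 = 4.0 * d * SQRT2M_OVER_HBAR / 3.0
    return (1.0 - (A0 * dphi / (16.0 * phi**1.5)) * V
                + (9.0 / 128.0) * (A0**2 / phi) * V**2)

V = np.linspace(-0.3, 0.3, 7)
# illustrative parameters in the range reported for the thick MAO barriers
print(np.round(brinkman_G(V, phi=1.7, dphi=0.5, d=2.6), 3))
```

The asymmetry Δφ shifts the minimum of the parabola away from zero bias, while the curvature is controlled by the combination d^2/φ, which is why the three barrier parameters can be disentangled from a single dJ/dV curve.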
§ ACKNOWLEDGMENTS The authors gratefully acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG) within the priority program Spin Caloric Transport (SPP 1538).
G. E. Bauer, E. Saitoh, and B. J. van Wees, Nature Mater. 11, 391 (2012).
M. Czerner, M. Bachmann, and C. Heiliger, Phys. Rev. B 83, 132405 (2011).
M. Walter, J. Walowski, V. Zbarsky, M. Münzenberg, M. Schäfers, D. Ebke, G. Reiss, A. Thomas, P. Peretzki, M. Seibt, J. S. Moodera, M. Czerner, M. Bachmann, and C. Heiliger, Nature Mater. 10, 742 (2011).
N. Liebing, S. Serrano-Guisan, K. Rott, G. Reiss, J. Langer, B. Ocker, and H. W. Schumacher, Phys. Rev. Lett. 107, 177201 (2011).
J. Shan, F. K. Dejene, J. C. Leutenantsmeyer, J. Flipse, M. Münzenberg, and B. J. van Wees, Phys. Rev. B 92, 020414(R) (2015).
A. Boehnke, M. Milnikel, M. von der Ehe, C. Franz, V. Zbarsky, M. Czerner, K. Rott, A. Thomas, C. Heiliger, G. Reiss, and M. Münzenberg, Sci. Rep. 5, 8945 (2015).
A. Boehnke, U. Martens, C. Sterwerf, A. Niesen, T. Huebner, M. von der Ehe, M. Meinert, T. Kuschel, A. Thomas, C. Heiliger, M. Münzenberg, and G. Reiss, under review (2017).
A. Boehnke, M. Walter, N. Roschwesky, T. Eggebrecht, V. Drewello, K. Rott, M. Münzenberg, A. Thomas, and G. Reiss, Rev. Sci. Instrum. 84, 063905 (2013).
T. Huebner, A. Boehnke, U. Martens, A. Thomas, J.-M. Schmalhorst, G. Reiss, M. Münzenberg, and T. Kuschel, Phys. Rev. B 93, 224433 (2016).
W. F. Brinkman, R. C. Dynes, and J. M. Rowell, J. Appl. Phys. 41, 1915 (1970).
M. Czerner and C. Heiliger, J. Appl. Phys. 111, 07C511 (2012).
C. Heiliger, C. Franz, and M. Czerner, Phys. Rev. B 87, 224412 (2013).
M. M. Waldrop, Nature 530, 7589 (2016).
M. Hatami, G. Bauer, Q. Zhang, and P. J. Kelly, Phys. Rev. Lett. 99, 066603 (2007).
Y. Miura, S. Muramoto, K. Abe, and M. Shirai, Phys. Rev. B 86, 024426 (2012).
J. Zhang, X. G. Zhang, and X. F. Han, Appl. Phys. Lett. 100, 222401 (2012).
H. Sukegawa, H. Xiu, T. Ohkubo, T. Furubayashi, T. Niizeki, W. Wang, S. Kasai, S. Mitani, K. Inomata, and K. Hono, Appl. Phys. Lett. 96, 212505 (2010).
B. Tao, D. Li, H. Liu, H. Wei, J.-F. Feng, S. Wang, and X. Han, IEEE Trans. Magn. 50, 4401004 (2014).
T. Scheike, H. Sukegawa, K. Inomata, T. Ohkubo, K. Hono, and S. Mitani, Appl. Phys. Express 9, 053004 (2016).
H. Sukegawa, S. Mitani, T. Ohkubo, K. Inomata, and K. Hono, Appl. Phys. Lett. 103, 142409 (2013).
B. S. Tao, H. X. Yang, Y. L. Zuo, X. Devaux, G. Lengaigne, M. Hehn, D. Lacour, S. Andrieu, M. Chshiev, T. Hauet, F. Montaigne, S. Mangin, X. F. Han, and Y. Lu, Phys. Rev. Lett. 115, 157204 (2015).
U. Martens, J. Walowski, T. Schumann, M. Mansurova, A. Boehnke, T. Huebner, G. Reiss, A. Thomas, and M. Münzenberg, J. Phys. D: Appl. Phys. 50, 144003 (2017).
St. Burghartz and B. Schulz, J. Nucl. Mater. 212-215, 1065 (1994).
S.-M. Lee, D. G. Cahill, and T. H. Allen, Phys. Rev. B 52, 253 (1995).
J. Zhang, M. Bachman, M. Czerner, and C. Heiliger, Phys. Rev. Lett. 115, 037203 (2015).
T. Böhnert, R. Dutra, R. L. Sommer, E. Paz, S. Serrano-Guisan, R. Ferreira, and P. Freitas, Phys. Rev. B 95, 104441 (2017).
H. F. Yang, X. K. Hu, N. Liebing, T. Böhnert, J. D. Costa, M. Tarequzzaman, R. Ferreira, S. Sievers, M. Bieler, and H. W. Schumacher, Appl. Phys. Lett. 110, 232403 (2017).
http://arxiv.org/abs/1706.08287v1
{ "authors": [ "Torsten Huebner", "Ulrike Martens", "Jakob Walowski", "Alexander Boehnke", "Jan Krieft", "Christian Heiliger", "Andy Thomas", "Günter Reiss", "Timo Kuschel", "Markus Münzenberg" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170626085542", "title": "Enhancement of thermovoltage and tunnel magneto-Seebeck effect in CoFeB based magnetic tunnel junctions by variation of the MgAl$_2$O$_4$ and MgO barrier thickness" }
Abstract: For a general class of large non-Hermitian random block matrices X we prove that there are no eigenvalues away from a deterministic set with very high probability. This set is obtained from the Dyson equation of the Hermitization of X as the self-consistent approximation of the pseudospectrum. We demonstrate that the analysis of the matrix Dyson equation from <cit.> offers a unified treatment of many structured matrix ensembles. Keywords: Outliers, block matrices, local law, non-Hermitian random matrix, self-consistent pseudospectrum AMS Subject Classification: 60B20, 15B52 § INTRODUCTION Large random matrices tend to exhibit deterministic patterns due to the cumulative effects of many independent random degrees of freedom. The Wigner semicircle law <cit.> describes the deterministic limit of the empirical density of eigenvalues of Wigner matrices, i.e., Hermitian random matrices with i.i.d. entries (modulo the Hermitian symmetry). For non-Hermitian matrices with i.i.d. entries, the limiting density is Girko's circular law, i.e., the uniform distribution in a disc centered around zero in the complex plane, see <cit.> for a review. For more complicated ensembles, no simple formula exists for the limiting behavior, but second order perturbation theory predicts that it may be obtained from the solution to a nonlinear equation, called the Dyson equation. While simplified forms of the Dyson equation are present in practically every work on random matrices, its full scope has only recently been analyzed systematically, see <cit.>. In fact, the proper Dyson equation describes not only the density of states but the entire resolvent of the random matrix. Treating it as a genuine matrix equation unifies many previous works that were specific to certain structures imposed on the random matrix. These additional structures often masked a fundamental property of the Dyson equation, its stability against small perturbations, that plays a key role in proving the expected limit theorems, also called global laws. Girko's monograph <cit.> is the most systematic collection of many possible ensembles, yet it analyzes them on a case by case basis. In this paper, using the setup of the matrix Dyson equation (MDE) from <cit.>, we demonstrate a unified treatment for a large class of random matrix ensembles that contain or generalize many of Girko's models. For brevity, we focus only on two basic problems: (i) obtaining the global law and (ii) locating the spectrum. The global law, typically formulated as a weak convergence of linear statistics of the eigenvalues, describes only the overwhelming majority of the eigenvalues. Even local versions of this limit theorem, commonly called local laws (see e.g. <cit.> and references therein), are typically not sensitive to individual eigenvalues, and they do not exclude that a few eigenvalues are located far away from the support of the density of states. Extreme eigenvalues have nevertheless been controlled in some simple cases. In particular, for the i.i.d. cases, it is known that with a very high probability all eigenvalues lie in an ε-neighborhood of the support of the density of states.
These results can be proven with the moment method; see <cit.> for the Hermitian (Wigner) case and <cit.> for the non-Hermitian i.i.d. case; see also <cit.> for the optimal moment condition. More generally, norms of polynomials in large independent random matrices can be computed via free probability; for GUE or GOE Gaussian matrices this was achieved in <cit.> and generalized to polynomials of general Wigner and Wishart type matrices in <cit.>. These results have recently been extended to polynomials that include deterministic matrices with the goal of studying outliers, see <cit.> and references therein. All these works concern Hermitian matrices, either directly or indirectly by considering quantities, such as norms of non-Hermitian polynomials, that can be deduced from related Hermitian problems.

For general Hermitian random matrices, the density of states may be supported on several intervals. In this situation, excluding eigenvalues outside of the convex hull of this support is typically easier than excluding possible eigenvalues lying inside the gaps of the support. This latter problem, however, is especially important for studying the spectrum of a non-Hermitian random matrix X, since the eigenvalues of X around a complex parameter ζ can be understood by studying the spectrum of the Hermitized matrix

H^ζ = [ 0 X-ζ; X^*-ζ̅ 0 ]

around 0. Note that for ζ ∈ ℂ away from the spectrum of X, zero will typically fall inside a gap of the spectrum of H^ζ by its symmetry.

In this paper, we consider a very general class of structured block matrices that we call Kronecker random matrices since their structure is reminiscent of the Kronecker product of matrices. They have L×L large blocks and each block consists of a linear combination of random N×N matrices with centered, independent, not necessarily identically distributed entries; see (<ref>) later for the precise definition. We will keep L fixed and let N tend to infinity. The matrix X has a correlation structure that stems from allowing the same N×N matrix to appear in different blocks. This introduces an arbitrary linear dependence among the blocks, while keeping independence inside the blocks. The dependence is thus described by L×L deterministic structure matrices.

Kronecker random ensembles occur in many real-world applications of random matrix theory, especially in the evolution of ecosystems <cit.> and neural networks <cit.>. These evolutions are described by a large system of ODEs with random coefficients, and the spectral radius of the coefficient matrix determines the long time stability; see <cit.> for the original idea. More recent results are found in <cit.> and references therein. The ensemble we study here is even more general as it allows for linear dependence among the blocks described by arbitrary structure matrices. This level of generality is essential for another application: to study spectral properties of polynomials of random matrices. These are often studied via the "linearization trick" and the linearized matrix is exactly a Kronecker random matrix. This application is presented in <cit.>, where the results of the current paper are directly used.

We present general results that exclude eigenvalues of Kronecker random matrices away from a deterministic set D with very high probability. The set D is determined by solving the self-consistent Dyson equation.
In the Hermitian case, D is the self-consistent spectrum, defined as the support of the self-consistent density of states ρ, which is obtained from the imaginary part of the solution to the Dyson equation when restricted to the real line. We also address the general non-Hermitian setup, where the eigenvalues are not confined to the real line. In this case, the set D = D_ε contains an additional cutoff parameter ε and it is the self-consistent ε-pseudospectrum, given via the Dyson equation for the Hermitized problem H^ζ, see (<ref>) later. The ε → 0 limit of the sets D_ε is expected not only to contain but to coincide with the support of the density of states in the non-Hermitian case as well, but this has been proven only in some special cases. We provide numerical examples to support this conjecture.

We point out that the global law and the location of the spectrum for A+X, where X is an i.i.d. centered random matrix and A is a general deterministic matrix (so-called deformed ensembles), have been extensively studied, see <cit.>. For more references, we refer to the review <cit.>. In contrast to these papers, the main focus of our work is to allow for general (not necessarily identical) distributions of the matrix elements.

In this paper, we first study arbitrary Hermitian Kronecker matrices H; the Hermitization H^ζ of a general Kronecker matrix is itself a Kronecker matrix and therefore just a special case. Our first result is the global law, i.e., we show that the empirical density of states of H is asymptotically given by the self-consistent density of states ρ determined by the Dyson equation. We then also prove an optimal local law for spectral parameters away from the instabilities of the Dyson equation. The Dyson equation for Kronecker matrices is a system of 2N nonlinear equations for L×L matrices, see (<ref>) later. In case of identical distribution of the entries within each N×N matrix, the system reduces to a single equation for a 2L×2L matrix – a computationally feasible problem. This analysis provides not only the limiting density of states but also a full understanding of the resolvent for spectral parameters z very close to the real line, down to scales Im z ≫ 1/N.

Although the optimal local law down to scales Im z ≫ 1/N cannot capture individual eigenvalues inside the support of ρ, the key point is that outside of this support a stronger estimate in the local law may be proven that actually detects individual eigenvalues, or rather the lack thereof. This observation has been used for simpler models before; in particular, <cit.> already contained this stronger form of the local semicircle law for generalized Wigner matrices, see also <cit.> for Wigner-type matrices, <cit.> for Gram matrices and <cit.> for correlated matrices with a uniform lower bound on the variances. In particular, by running the stability analysis twice, this allows for an extension of the local law for any Im z > 0 outside of the support of ρ.

Finally, applying the local law to the Hermitization H^ζ of a non-Hermitian Kronecker matrix X, we translate local spectral information on H^ζ around 0 into information about the location of the spectrum of X. This is possible since ζ ∈ Spec(X) if and only if 0 ∈ Spec(H^ζ). In practice, we give a good approximation to the complement of the ε-pseudospectrum of X by considering the set of those ζ values in ℂ for which 0 is at least ε distance away from the support of the self-consistent density of states for H^ζ.

In the main part of the paper, we give a short, self-contained proof that directly aims at locating the Hermitian spectrum under the weakest conditions for the most general setup.
We split the proof into two well-separated parts: a random and a deterministic one. In Sections <ref> and <ref> as well as Appendix <ref> we give a model-independent probabilistic proof of the main technical result, the local law (Theorem <ref> and Lemma <ref>), assuming only two explicit conditions, boundedness and stability, on the solution of the Dyson equation that can be checked separately for concrete models. In Section <ref> we prove that these two conditions are satisfied for Kronecker matrices away from the self-consistent spectrum. The key inputs behind the stability are (i) a matrix version of the Perron-Frobenius theorem and (ii) a sophisticated symmetrization procedure that is much more transparent in the matrix formulation. In particular, the global law is an immediate consequence of this approach. Moreover, the analysis reveals that outside of the spectrum the stability holds without any lower bound on the variances, in contrast to local laws inside the bulk spectrum that typically require some non-degeneracy condition on the matrix of variances. We stress that only the first part involves randomness; there we follow the Schur complement method and concentration estimates for linear and quadratic functionals of independent random variables. Alternatively, we could have used the cumulant expansion method that is typically better suited for ensembles with correlation <cit.>. We opted for the former path to demonstrate that correlations stemming from the block structure can still be handled with the more direct Schur complement method as long as the non-commutativity of the L×L structure matrices is properly taken into account. Utilizing a powerful tensor matrix structure generated by the correlations between blocks resolves this issue automatically.

Acknowledgement: The authors are grateful to David Renfrew for several discussions and for calling their attention to references on applications of non-Hermitian models.

§.§ Notation

Owing to the tensor product structure of Kronecker random matrices (see Definition <ref> below), we need to introduce different spaces of matrices. In order to make the notation more transparent to the reader, we collect the conventions used on these spaces in this subsection. For K, N ∈ ℕ, we will consider the spaces ℂ^K×K, (ℂ^K×K)^N and ℂ^K×K ⊗ ℂ^N×N, i.e., we consider K×K matrices, N-vectors of K×K matrices and N×N matrices with K×K matrices as entries. For brevity, we denote 𝒦 := ℂ^K×K ⊗ ℂ^N×N. Elements of ℂ^K×K are usually denoted by small roman letters, elements of (ℂ^K×K)^N by small boldface roman letters and elements of 𝒦 by capitalized boldface roman letters. For α ∈ ℂ^K×K, we denote by ‖α‖ the matrix norm of α induced by the Euclidean distance on ℂ^K. Moreover, we define two different norms on the N-vectors of K×K matrices. For any r = (r_1, …, r_N) ∈ (ℂ^K×K)^N we define

‖r‖ := max_i=1^N ‖r_i‖, and ‖r‖_2^2 := 1/NK ∑_i=1^N Tr(r_i^* r_i).

These are the analogues of the maximum norm and the Euclidean norm for vectors in ℂ^N, which corresponds to K=1. Note that ‖r‖_2 ≤ ‖r‖. For any function f : U → ℂ^K×K from U ⊂ ℂ^K×K to ℂ^K×K, we lift f to U^N by defining f(r) ∈ (ℂ^K×K)^N entrywise for any r = (r_1, …, r_N) ∈ U^N ⊂ (ℂ^K×K)^N, i.e., f(r) := (f(r_1), …, f(r_N)). We will in particular apply this definition for f being the matrix inversion map and the imaginary part. Moreover, for x = (x_1, …, x_N), y = (y_1, …, y_N) ∈ (ℂ^K×K)^N we introduce their entrywise product xy ∈ (ℂ^K×K)^N through xy := (x_1y_1, …, x_Ny_N) ∈ (ℂ^K×K)^N. Note that for K ≠ 1, in general, xy ≠ yx.
If a ∈ ℂ^K×K or R ∈ 𝒦 are positive semidefinite matrices, then we write a ≥ 0 or R ≥ 0, respectively. Similarly, for r ∈ (ℂ^K×K)^N, we write r ≥ 0 to indicate that all components of r are positive semidefinite matrices in ℂ^K×K. The identity matrix in ℂ^K×K and in 𝒦 is denoted by 𝕀. We also use two norms on 𝒦. These are the operator norm ‖·‖ induced by the Euclidean distance on ℂ^KN ≅ ℂ^K ⊗ ℂ^N and the norm ‖·‖_hs induced by the scalar product on 𝒦 defined through

⟨R, T⟩ := 1/NK Tr(R^* T), ‖R‖_hs := √(⟨R, R⟩),

for R, T ∈ 𝒦. In particular, all orthogonality statements on 𝒦 are understood with respect to this scalar product. Furthermore, we introduce ⟨R⟩ := ⟨𝕀, R⟩, the normalized trace, for R ∈ 𝒦.

We also consider linear maps on (ℂ^K×K)^N and 𝒦, respectively. We follow the convention that the symbols S, L and 𝒯 label linear maps (ℂ^K×K)^N → (ℂ^K×K)^N and 𝒮 or ℒ denote linear maps 𝒦 → 𝒦. The symbol Id refers to the identity map on 𝒦. For any linear map 𝒯 : (ℂ^K×K)^N → (ℂ^K×K)^N, let ‖𝒯‖ denote the operator norm of 𝒯 induced by ‖·‖ and let ‖𝒯‖_2 denote the operator norm induced by ‖·‖_2. Similarly, for a linear map 𝒮 : 𝒦 → 𝒦, we write ‖𝒮‖ for the operator norm induced by ‖·‖ on 𝒦 and ‖𝒮‖_hs for its operator norm induced by ‖·‖_hs on 𝒦.

We use the notation [n] := {1, …, n} for n ∈ ℕ. For i,j ∈ [N], we introduce the matrix E_ij ∈ ℂ^N×N which has a one at its (i,j) entry and only zeros otherwise, i.e., E_ij := (δ_ik δ_jl)_k,l=1^N. For i,j ∈ [N], the linear map P_ij : 𝒦 → ℂ^K×K is defined through P_ij[R] = r_ij, for any R = ∑_i,j=1^N r_ij ⊗ E_ij ∈ 𝒦 with r_ij ∈ ℂ^K×K.

§ MAIN RESULTS

Our main objects of study are Kronecker random matrices, which we define first. To that end, we recall the definition of E_ij from (<ref>).

[Kronecker random matrix] A random matrix X ∈ ℂ^L×L ⊗ ℂ^N×N is called a Kronecker random matrix if it is of the form

X = ∑_μ=1^ℓ α_μ ⊗ X_μ + ∑_ν=1^ℓ (β_ν ⊗ Y_ν + γ_ν ⊗ Y_ν^*) + ∑_i=1^N a_i ⊗ E_ii, ℓ ∈ ℕ,

where X_μ = X_μ^* ∈ ℂ^N×N are Hermitian random matrices with centered independent entries (up to the Hermitian symmetry) and Y_ν ∈ ℂ^N×N are random matrices with centered independent entries; furthermore X_1, …, X_ℓ, Y_1, …, Y_ℓ are independent. The "coefficient" matrices α_μ, β_ν, γ_ν ∈ ℂ^L×L are deterministic and they are called structure matrices. Finally, a_1, …, a_N ∈ ℂ^L×L are also deterministic.

We remark that the number of X_μ and Y_ν matrices effectively present in X may differ by choosing some structure matrices zero. Furthermore, note that 𝔼X = ∑_i=1^N a_i ⊗ E_ii, i.e., the deterministic matrices a_i encode the expectation of X.

Our main result asserts that all eigenvalues of a Kronecker random matrix X are contained in the self-consistent ε-pseudospectrum for any ε > 0, with very high probability if N is sufficiently large. The self-consistent ε-pseudospectrum, D_ε, is a deterministic subset of the complex plane that can be defined and computed via the self-consistent solution to the Hermitized Dyson equation. Hermitization entails doubling the dimension and studying the matrix H^ζ defined in (<ref>) for any spectral parameter ζ ∈ ℂ associated with X. We introduce an additional spectral parameter z ∈ ℍ := { w ∈ ℂ : Im w > 0 } that will be associated with the Hermitian matrix H^ζ. The Hermitized Dyson equation is used to study the resolvent (H^ζ-z)^-1.

We first introduce some notation necessary to write up the Hermitized Dyson equation. For μ,ν ∈ [ℓ], we define

α̂_μ := [ 0 1; 0 0 ] ⊗ α_μ + [ 0 0; 1 0 ] ⊗ α_μ^*, β̂_ν := [ 0 1; 0 0 ] ⊗ β_ν + [ 0 0; 1 0 ] ⊗ γ_ν^*.

We set

s_ij^μ := 𝔼|x_ij^μ|^2, t_ij^ν := 𝔼|y_ij^ν|^2,

where x_ij^μ and y_ij^ν are the (scalar) entries of the random matrices X_μ and Y_ν, respectively, i.e., X_μ = (x_ij^μ)_i,j=1^N and Y_ν = (y_ij^ν)_i,j=1^N. We define a linear map S on (ℂ^2×2 ⊗ ℂ^L×L)^N, i.e., on N-vectors of (2L)×(2L) matrices, as follows.
For any r = (r_1, …, r_N) ∈ (ℂ^2×2 ⊗ ℂ^L×L)^N we set

S[r] = (S_1[r], S_2[r], …, S_N[r]) ∈ (ℂ^2×2 ⊗ ℂ^L×L)^N,

where the i-th component is given by

S_i[r] := ∑_k=1^N ( ∑_μ=1^ℓ s_ik^μ α̂_μ r_k α̂_μ + ∑_ν=1^ℓ ( t_ik^ν β̂_ν r_k β̂_ν^* + t_ki^ν β̂_ν^* r_k β̂_ν ) ) ∈ ℂ^2×2 ⊗ ℂ^L×L, i ∈ [N].

For j ∈ [N] and ζ ∈ ℂ, we define a_j^ζ ∈ ℂ^2×2 ⊗ ℂ^L×L through

a_j^ζ := [ 0 1; 0 0 ] ⊗ a_j + [ 0 0; 1 0 ] ⊗ a_j^* - [ 0 ζ; ζ̅ 0 ] ⊗ 𝕀.

The Hermitized Dyson equation is the following system of equations

- 1/m_j^ζ(z) = z𝕀 - a_j^ζ + S_j[m^ζ(z)], j = 1, 2, …, N,

for the vector m^ζ(z) = (m_1^ζ(z), …, m_N^ζ(z)) ∈ (ℂ^2×2 ⊗ ℂ^L×L)^N. Here, 𝕀 denotes the identity matrix in ℂ^2×2 ⊗ ℂ^L×L, and ζ ∈ ℂ as well as z ∈ ℍ are spectral parameters associated to X and H^ζ, respectively.

For any z ∈ ℍ and ζ ∈ ℂ there exists a unique solution to (<ref>) with the additional condition that the matrices Im m_j^ζ(z) := (m_j^ζ(z) - m_j^ζ(z)^*)/(2i) are positive definite for all j ∈ [N]. Moreover, for j ∈ [N], there are measures v_j^ζ on ℝ with values in the positive semidefinite matrices in ℂ^2×2 ⊗ ℂ^L×L such that

m_j^ζ(z) = ∫_ℝ v_j^ζ(dτ)/(τ - z)

for all z ∈ ℍ and ζ ∈ ℂ.

Lemma <ref> is proven after Proposition <ref> below. Throughout the paper m^ζ will always denote the unique solution to the Hermitized Dyson equation defined in Lemma <ref>. The self-consistent density of states ρ^ζ of H^ζ is given by ρ^ζ(dτ) := 1/(2LN) ∑_j=1^N Tr v_j^ζ(dτ) (cf. Definition <ref> below). The self-consistent spectrum of H^ζ is the set

supp ρ^ζ = ⋃_j=1^N supp v_j^ζ.

Finally, for any ε > 0 the self-consistent ε-pseudospectrum of X is defined by

D_ε := { ζ ∈ ℂ : dist(0, supp ρ^ζ) ≤ ε }.

The eigenvalues of X will concentrate on the set D_ε for any fixed ε > 0 if N is large. The motivation for this definition (<ref>) is that ζ is in the ε-pseudospectrum of X if and only if 0 is in the ε-vicinity of the spectrum of H^ζ, i.e., dist(0, Spec(H^ζ)) ≤ ε. We recall that the ε-pseudospectrum Spec_ε(X) of X is defined through

Spec_ε(X) := Spec(X) ∪ { ζ ∈ ℂ ∖ Spec(X) : ‖(X-ζ)^-1‖ ≥ ε^-1 }.

In accordance with Subsection <ref>, ‖·‖ denotes the operator norm on ℂ^L×L ⊗ ℂ^N×N induced by the Euclidean norm on ℂ^L ⊗ ℂ^N. The precise statement is given in Theorem <ref> below, whose conditions we collect next.

* (Upper bound on variances) There is κ_1 > 0 such that

s_ij^μ ≤ κ_1/N, t_ij^ν ≤ κ_1/N

for all i,j ∈ [N] and μ,ν ∈ [ℓ].

* (Bounded moments) For each p ∈ ℕ, p ≥ 3, there is φ_p > 0 such that

𝔼|x_ij^μ|^p ≤ φ_p N^-p/2, 𝔼|y_ij^ν|^p ≤ φ_p N^-p/2

for all i,j ∈ [N] and μ, ν ∈ [ℓ].

* (Upper bound on structure matrices) There is κ_2 > 0 such that

max_μ∈[ℓ] ‖α_μ‖ ≤ κ_2, max_ν∈[ℓ] ‖β_ν‖ ≤ κ_2, max_ν∈[ℓ] ‖γ_ν‖ ≤ κ_2,

where ‖α‖ denotes the operator norm induced by the Euclidean norm on ℂ^L.

* (Bounded expectation) Let κ_3 > 0 be such that the matrices a_i ∈ ℂ^L×L satisfy

max_i=1^N ‖a_i‖ ≤ κ_3.

The constants L, ℓ, κ_1, κ_2, κ_3 and (φ_p)_p∈ℕ are called model parameters. Our estimates will be uniform in all models possessing the same model parameters; in particular, the bounds will be uniform in N, the large parameter in our problem. Now we can formulate our main result:

Fix L ∈ ℕ. Let X be a Kronecker random matrix as in (<ref>) such that the bounds (<ref>) – (<ref>) are satisfied. Then for each ε > 0 and D > 0, there is a constant C_ε,D > 0 such that

ℙ( Spec(X) ⊂ D_ε ) ≥ 1 - C_ε,D/N^D.

The constant C_ε,D in (<ref>) only depends on the model parameters in addition to ε and D.

* Theorem <ref> follows from the slightly stronger Lemma <ref> below; we show that not only the spectrum of X but also its ε/2-pseudospectrum lies in the self-consistent ε-pseudospectrum.

* By carefully following the proof of Lemma <ref>, one can see that ε can be replaced by N^-δ with a small universal constant δ > 0. The constant C in (<ref>) will then depend only on D and the model parameters.
* (Only finitely many moments) If (<ref>) holds true only for p ≤ P and some P ∈ ℕ, then there is a D_0(P) ∈ ℕ such that the bound (<ref>) is valid for all D ≤ D_0(P).

* The self-consistent ε-pseudospectrum D_ε from (<ref>) is defined in terms of the support of the self-consistent density of states of the Hermitized Dyson equation (<ref>). In particular, to determine D_ε one needs to solve the Dyson equation for spectral parameters z in a neighborhood of z = 0. There is an alternative definition for a deterministic ε-regularized set that is comparable to D_ε and requires solving the Dyson equation solely on the imaginary axis z = iη, namely

D̃_ε := { ζ ∈ ℂ : lim sup_η↓0 1/η max_j ‖Im m_j^ζ(iη)‖ ≥ 1/ε }.

Hence, (<ref>) is true if D_ε is replaced by D̃_ε. For more details we refer the reader to Appendix <ref>.

* (Hermitian matrices) If X is a Hermitian random matrix, X = X^*, i.e., α_μ = α_μ^* and β_ν^* = γ_ν for all μ,ν ∈ [ℓ] and a_i^* = a_i for all i ∈ [N], then the Hermitization is superfluous and the Dyson equation may be formulated directly for X. One may easily show that the support of the self-consistent density of states ρ is the intersection of all self-consistent ε-pseudospectra: supp ρ = ⋂_ε>0 D_ε.

* Theorem <ref> as well as its stronger version for the Hermitian case, Theorem <ref>, identify a deterministic superset of the spectrum of X. In fact, it is expected that for a large class of Kronecker matrices the set ⋂_ε>0 D_ε is the smallest deterministic set that still contains the entire Spec(X) up to a negligible distance. For L = 1 this has been proven for many Hermitian ensembles and for the circular ensemble. Example <ref> below presents numerics for the L ≥ 2 case.

Fix L ∈ ℕ. Let ζ_1, …, ζ_L ∈ ℂ and let a ∈ ℂ^L×L denote the diagonal matrix with ζ_1, …, ζ_L on its diagonal. We set X := a ⊗ 𝕀 + W, where W has centered i.i.d. entries with variance 1/(NL). Clearly, X is a Kronecker matrix. In this case the Dyson equation can be directly solved and one easily finds that

⋂_ε>0 D_ε = { ζ ∈ ℂ : ∑_i=1^L 1/|ζ_i - ζ|^2 ≥ L }

(to our knowledge, the formula on the right-hand side first appeared in <cit.>). Figure <ref> shows the set (<ref>) and the actual eigenvalues of X for N = 8000 and different matrices a.

The empirical density of states of a Hermitian matrix H ∈ ℂ^L×L ⊗ ℂ^N×N is defined through

μ_H(dτ) := 1/NL ∑_λ∈Spec(H) δ_λ(dτ).

Fix L ∈ ℕ. For N ∈ ℕ, let H_N ∈ ℂ^L×L ⊗ ℂ^N×N be a Hermitian Kronecker random matrix as in (<ref>) such that the bounds (<ref>) – (<ref>) are satisfied. Then there exists a sequence of deterministic probability measures ρ_N on ℝ such that the difference of ρ_N and the empirical spectral measure μ_H_N, defined in (<ref>), of H_N converges to zero weakly in probability, i.e.,

lim_N→∞ ∫_ℝ f(τ) (μ_H_N - ρ_N)(dτ) = 0

for all f ∈ C_0(ℝ) in probability. Here, C_0(ℝ) denotes the continuous functions on ℝ vanishing at infinity. Furthermore, there is a compact subset of ℝ which contains the supports of all ρ_N. This compact set depends only on the model parameters.

Theorem <ref> is proven in Appendix <ref>. The measure ρ_N, the self-consistent density of states, can be obtained by solving the corresponding Dyson equation, see Definition <ref> later. If the function f is sufficiently regular, then our proof combined with the Helffer-Sjöstrand formula yields an effective convergence rate of order N^-δ in (<ref>).
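For the reader's orientation, the comparison in Figure <ref> can be reproduced with a few lines of numerics. The following Python sketch is purely illustrative and not used in any proof: the particular values ζ_1, ζ_2, the grid, the complex Gaussian distribution of the entries of W and the smaller matrix size are all ad hoc choices.

```python
import numpy as np

# Example with L = 2: X = a (x) I + W, where a = diag(zeta_1, zeta_2) and W has
# centered i.i.d. entries of variance 1/(N*L); the zeta_i below are hypothetical.
L, N = 2, 1000
zetas = np.array([1.0 + 0.0j, -1.0 + 0.5j])
a = np.diag(zetas)

rng = np.random.default_rng(0)
n = L * N
W = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
X = np.kron(a, np.eye(N)) + W                      # E|W_ij|^2 = 1/n = 1/(N*L)
eigs = np.linalg.eigvals(X)

# The deterministic set { zeta : sum_i 1/|zeta_i - zeta|^2 >= L } on a grid,
# as used to draw the shaded region of the figure.
xs = np.linspace(-2.5, 2.5, 401)
grid = xs[None, :] + 1j * xs[:, None]
inside = sum(1.0 / np.abs(z0 - grid) ** 2 for z0 in zetas) >= L

# All eigenvalues should satisfy the defining inequality up to o(1) errors.
test = sum(1.0 / np.abs(z0 - eigs) ** 2 for z0 in zetas)
print("fraction of eigenvalues inside the set:", np.mean(test >= L * 0.95))
```

The 5% tolerance in the last line only accounts for the finitely many eigenvalues that fall close to the boundary of the set at finite N.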
§ SOLUTION AND STABILITY OF THE DYSON EQUATIONThe general matrix Dyson equation (MDE) has been extensively studied in<cit.>, but under conditions that exclude general Kronecker random matrices.Here, we relax these conditions and show how to extend some key results of <cit.>to our current setup.Our analysis of the MDE on the space of n× n matrices, = ^n× n, will then be appliedto (<ref>) with n=2LN=KN.On = ^n× n, we use the norms as defined in Subsection <ref> and require the pair (, ) to have the following properties: We call (, ) a data pair if* The imaginary part = 1/2(-^*) of the matrix ∈^n × nis negative semidefinite.* The linear operator : ^n× n→^n× n is self-adjoint with respect to the scalar productRT1/n[R^* T] ,and preserves the cone of positive semidefinite matrices, i.e. it is positivity preserving. For any data pair (, ), the MDE then takes the form- ^-1(z) = z𝕀-+ [(z)],z ∈,for a solution matrix (z) ∈^n × n. It was shown in this generality thatthe MDE, (<ref>), has a unique solution under the constraint that the imaginary part(z)((z)-(z)^*)/(2) is positive definite <cit.>.We remark thatbeing negative semidefinite is the most general condition for which our analysis is applicable.Furthermore, in <cit.>, properties of the solution of (<ref>) and the stability of (<ref>) against small perturbationswere studied in the general setup with Hermitian A and under the so-called flatness assumption,matrix flatnessc/n () ≤[] ≤C/n () , for all positive definite ∈^n × n with some constants C > c>0. Within Section <ref> we will generalize certain results from <cit.> by dropping the flatness assumption (<ref>) and the Hermiticity of A.The results in this section, apart from (<ref>) below,follow by combining and modifying several arguments from <cit.>. We will only explain the main steps and refer to <cit.> for details. At the end of the section we translate these general results back to the setup of Kronecker matrices with the associated Dyson equation (<ref>).§.§ Solution of the Dyson equationAccording to Proposition 2.1 in <cit.> the solutionto (<ref>) has a Stieltjes transform representationmatrix Stieltjes transform rep (z)=∫_V(τ)/τ-z , z ∈ℍ ,where V is a compactly supported measure onwith values in positive semidefinite n × n-matrices such that V()=𝕀, providedis Hermitian. The following lemma strengthens the conclusion about the support properties for this measure compared to Proposition 2.1 in <cit.>.Let (,) be a data pair as in Definition <ref> and : ℍ→^n × n be the unique solution to (<ref>) with positive definite imaginary part. Then (i) There is a unique measure V onwith values in positive semidefinite matrices and V()=𝕀 such that (<ref>) holds true.(ii) Ifis Hermitian, then V⊂ A + [-2 ^1/2, 2 ^1/2],A⊂ V + [-^1/2, ^1/2]. The representation (<ref>) follows exactly as in the proof of Proposition 2.1 in <cit.> even forwith negative semidefinite imaginary part. We now prove (<ref>) motivated by the same proof in <cit.>. For a matrix ∈^n × n, its smallest singular value is denoted by ().Note that (z-) = (z, ) sinceis Hermitian. 
In the following, we fix z ∈ such that (z,) = (z-) > 2^1/2.Under the condition (z)≤(z-)/(2), we obtain from (<ref>) (z)= 1/(z- + [(z)])≤1/(z-) - (z)≤2/(z,).Therefore, using (z-) > 2^1/2, we find a gap in the values (z) can achieve(z)∉(2/(z-), (z-)/2).For large values of η=z, (z) is smaller than the lower bound of this interval.Thus, since (z) is a continuous function of z and the set { w∈(w,) > 2^1/2} is path-connected, we conclude that (<ref>) holds true for all z ∈ satisfying (z,)> 2^1/2. We take the imaginary part of (<ref>) and use =^* to obtain = η^*+ ^* [] . Solving this relation forand estimating its norm yields ≤η^2/1- ^2≤4η/(z,)^2 - 4 .Here, we employed ^2<1 by (<ref>) and (z,)> 2^1/2. Hence,converges to zero locally uniformly on the set { z∈(z, ) > 2 ^1/2} for η↓ 0.Therefore, E ∉V if (E, ) > 2 ^1/2.This concludes the proof of (<ref>).We now prove (<ref>). From (<ref>), we obtain -z𝕀 = ^-1( 𝕀 + [])for z ∈. Since V() = 1, we have≤1/(z,V).Therefore, taking the inverse in (<ref>) and applying (<ref>) yield (-z𝕀)^-1≤1/(z,V)(1-(z, V)^-2)for all z ∈ satisfying (z, V)^2 >. Taking z ↓ 0 in (<ref>), we see that the matrix - E𝕀is invertible for all E ∈ satisfying (E, V)^2 >, showing (<ref>). In accordance with Definition 2.3 in <cit.> we define the self-consistent density of states as the unique measure whose Stieltjes transform is n^-1.[Self-consistent density of states]The measure ρ(τ) 1/n(τ) = (τ)is called the self-consistent density of states. Clearly, =.For the following lemma, we also define the harmonic extension of the self-consistent density of states ρ→_+ throughρ (z) 1/π(z). In the following we will use the short hand notation (z,) .Let (, ) be a data pair as in Definition <ref>.* For z∈, we have the bounds≤1/,( z)^-1^-2𝕀≤≤ z/[2]𝕀,^-1≤z ++.* For z∈, we have the boundρ(z) ≤ z/π[2]. Using (<ref>) immediatelyyields (<ref>) and the upper bound in (<ref>) sinceV() = 𝕀.With η =z and taking the imaginary part of (<ref>), we obtain = η^*- ^*()+ ^* [] ≥η^*as ≤ 0, ≥ 0 andis positivity preserving. Since ^*≥^-1^-2𝕀 for any ∈^n × nthe lower bound in (<ref>) follows.From (<ref>), we obtain (<ref>). Since ρ(z) = π^-1(z) the upper bound in (<ref>) implies (<ref>). §.§ Stability of the Dyson equation The goal of studying the stability of the Dyson equation in matrix form, (<ref>), is to show that if somesatisfies -𝕀 = (z𝕀 -+ []) +for some small , then is close to . It turns out that to a large extent this is a question about the invertibility of the stability operator - [] acting on ^n × n. From (<ref>) and (<ref>), we obtain the following equation[- ] =+ [-](-)relating the difference - with .We will call (<ref>) the stability equation. Under the assumption thatis not too far from , the question whether - is comparable withis determined by the invertibility ofin (<ref>) and the boundedness ofthe inverse.In this subsection, we show that ^-1 is bounded, provided (z,V) is bounded away from zero. In order to prove this bound on ^-1, we follow the symmetrization procedure forintroduced in <cit.>. We introduce the operators _^n × n→^n × n and ^n × n→^n × n through_[] = ,__√()_√()_,for ∈^n× n.Furthermore, the matrix ∈^n× n, the unitary matrix ∈^n × n and the positive definite matrix ∈^n × n are defined through_√()^-1 [ ] - 𝕀, /,^1/2.With these notations, a direct calculation yields =- _ = _√()__^*( _ - ) _^-1_√()^-1,as in (4.39) of <cit.>.We remark that _ for ∈^n × n is invertible if and only ifis invertible and _^-1 = _^-1 in this case. Similarly, _^* = _^*. 
Our goal is to verify ≤ 1- c for some positive constant c which yields (_ - )^-1≤ c^-1 as _ = 1.Then the boundedness of the other factors in (<ref>) implies the bound on the inverse of the stability operator .[Comparison relation]For nonnegative scalars or vectors f and g, we will use the notation f ≲ g if there is a constant c>0, depending only on_hs→ such that f ≤ cg andf ∼ g if f ≲ g and f ≳ g both hold true.If the constant c depends on an additional parameter (e.g. >0), then we will indicate this dependence by a subscript (e.g. ≲_).Let (, ) be a data pair as in Definition <ref>.* Uniformly for any z ∈, we have [4] ^-1^-2𝕀≲^4 ( z)^2≲^2^-1^4𝕀. * There is a positive semidefinite ∈^n × n such that = 1 and []=. Moreover, 1 - = ( z)_[]/^-2 . * Uniformly for z ∈, we have1 -≳[4] ^-1^-4. The proof of this lemma is motivated by the proofs of Lemma 4.6 and Lemma 4.7 (i) in <cit.>. We set η z. We rewrite the definition ofand use the upper bound in (<ref>) to obtain ^4 = ^-1_√()(_+_) [()^-1] ≥η^-1[2] ^-1_√()[^* + ^*]≳^-1^-2η^-2[4]𝕀.Here, we also applied ^* + ^*≥ 2 ^-1^-2𝕀 and the upper bound in (<ref>) again.This proves the lower bound in (<ref>). Similarly, using ^* + ^*≤ 2 ^2 𝕀 and the lower bound in (<ref>) we obtain the upper bound in (<ref>).For the proof of (ii), we remark thatpreserves the cone of positive semidefinite matrices. Thus, by a version of the Perron-Frobenius theoremof cone preserving operators there is a positive semidefinitesuch that = 1 and =.Following the proof of (4.24) in <cit.> and noting that this proof uses neither the uniqueness ofnor its positive definiteness, we obtain (<ref>).The bound in (<ref>) is obtained by plugging the lower bound in (<ref>) and the lower bound in (<ref>) into (<ref>).We start by estimating the numerator in (<ref>). Using ≥ 0, the cyclicity of the trace, (<ref>) and the lower bound in (<ref>), we get_[]≥η√()^2 √()^-1^-2≳^-1^-3[2] .Similarly, we have^-2 = √()^-2√()≲η/[2]^-1.Combining (<ref>) and (<ref>) in (<ref>) yields (<ref>) and concludes the proof of the lemma. Let (, ) be a data pair as in Definition <ref>.*The stability operatoris invertible for all z ∈. For fixed E ∈ and uniformly for η≥max{1,E,}, we have^-1(E+η)≲ 1.* Uniformly for z∈, we have^-1(z)≲(z)^-1(z)^9/[8]. * Uniformly for z∈, we have ^-1(z) + (^-1(z))^*≲ 1 + (z)^2 + (z)^4 ^-1(z).We start with the proof of (<ref>). From the upper and lower bounds in (<ref>) and (<ref>), respectively, we obtain _≲1/η^-1^2,_^-1≲η/[2]^-1, _√()≲η/[2], _√()^-1≲1/η^-1^2. Since _≤_ for Hermitian ∈^n × n we conclude from (<ref>), (<ref>) and (<ref>)^-1≲^-1^5/[4](_ - )^-1≲^-1^9/[8]. For the proof of (<ref>), we remark that _hs→≲ 1 implies _→hs≲ 1.Therefore, exactly as in the proof of (4.53) in <cit.>, we obtain the first bound in (<ref>). We similarly conclude the second bound from (^-1)^* = ^-1. We conclude the proof of Lemma <ref> by remarking that (<ref>) is a consequence of (<ref>), (<ref>), (<ref>) and (<ref>). [Lipschitz-continuity of ]If (, ) is a data pair as in Definition <ref> then there exists c>0 such that for each (possibly N-dependent) ∈(0,1] we have (z_1)-(z_2)≲ (^-c + ^c) z_1-z_2for all z_1, z_2 ∈ such that z_1,z_2 ≥.We differentiate (<ref>) with respect to z and obtain [_z ] = ^2. We invert , use (<ref>), (<ref>) and (<ref>) and follow the proof of (<ref>).This yields (<ref>) and hence concludes the proof of Corollary <ref>.§.§ Translation to results for Kronecker matricesHere we translate the results of Subsections <ref> and <ref> into results about (<ref>). 
In fact, we study (<ref>) in a slightly more general setup. Motivated by the identification ^2× 2⊗^L× L≅^2L × 2L,we consider (<ref>) on ^K× K for some K ∈ instead. The results of Subsections <ref> and <ref> areapplied with n=KN. Moreover, the special a_j^ζ defined in (<ref>) are replaced by general a_j∈^K× K.Therefore, the parameter ζ will not be present throughout this subsection. We thus look at the Dyson equation in vector form - 1/m_j(z) = z𝕀- a_j + _j[(z)],where z∈, m_j(z)∈^K× K for j ∈ [N], (z) (m_1(z), … m_N(z)) and _j is defined as in (<ref>). Recall that the definition of _j involves coefficients s_ij^μ and t_ij^ν as well as matrices α_μ and β_ν. Next, we formulateassumptions onin terms of these data as well as assumptions on a_1, …, a_N. * For all μ, ν∈ [ℓ] and i, j ∈ [N], we have nonnegative scalars s_ij^μ∈ and t_ij^ν∈ satisfying (<ref>).Furthermore, s_ij^μ = s_ji^μ for all i,j ∈ [N] and μ∈ [ℓ]. * For μ,ν∈ [ℓ], we have α_μ, β_ν∈^K× K and α_μ is Hermitian. There is α^* >0 such thatmax_μ∈ [ℓ]α_μ≤α^*, max_ν∈ [ℓ]β_ν≤α^*. * The matrices a_1, …, a_N ∈^K× K have a negative semidefinite imaginary part, a_j ≤ 0.The conditions in (i) of Assumptions <ref> are motivated by the definition of the variances in (<ref>).In particular, since X_μ is Hermitian the variances from (<ref>) satisfy s_ij^μ = s_ji^μ. In order to apply the results of Subsections <ref> and<ref> to (<ref>), wenow relate it to the matrix Dyson equation (MDE) (<ref>). It turns out that (<ref>) is a special case when the MDE on = ^K× K⊗^N× Nis restricted to the block diagonal matrices { a⊗ D a ∈^K× K, D ∈^N× N diagonal}⊂.We recall E_ll, _l and P_ll from (<ref>), (<ref>) and (<ref>),respectively, and define ∈ and → through∑_l=1^N a_l ⊗ E_ll, [] ∑_l=1^N _l[(P_11, …, P_NN)] ⊗ E_ll.With these definitions, the Dyson equation in vector form, (<ref>), can be rewritten in the matrix form(<ref>) for a solution matrix ∈.In the following, we will refer to (<ref>) with these choices of ,andas the Dyson equation in matrix form.In the remainder of the paper, we will consider the Dyson equation in matrix form, (<ref>), exclusively with the choices ofandfrom (<ref>). We have the following connection between (<ref>) and (<ref>).Ifis a solution of (<ref>) then,since the range ofis contained inand ∈, we have ∈, i.e, it can be written as(z) = ∑_j=1^N m_j(z) ⊗ E_jjfor some unique m_1(z), …, m_N(z) ∈^K × K. Moreover, these m_i solve (<ref>). Conversely, if =(m_1, …, m_N) ∈(^K× K)^Nsolves (<ref>) thendefined via (<ref>) is a solution of (<ref>). Furthermore, ifsatisfies (<ref>) then is positive definite if and only if m_j is positive definite for all j ∈[N].This correspondence yields the following translation of Lemma <ref> to the setting for Kronecker random matrices, Proposition <ref> below.For part (ii), we recall =max_i=1^Nr_i for =(r_1, …, r_N) ∈ (^K× K)^N and thatdenotes theoperator norm of (^K× K)^N→ (^K× K)^N induced by . We also used that =, which is easy to seesince = on the block diagonal matrices (^K× K)^N ≅ and =0 on the orthogonal complement ^⊥. The orthogonal complement is defined with respect to the scalar product onintroduced in (<ref>). Furthermore, we remark that the identity (<ref>) implies = . [Existence, uniqueness of ]Under Assumptions <ref> we have* There is a unique function → (^K× K)^N such that the components (z)=(m_1(z), …, m_N(z)) satisfy (<ref>) for z∈ and all j∈[N]and m_j(z) is positive definite for all z∈ and all j ∈ [N]. 
Furthermore, for each j ∈ [N], there is a measure v_j onwith values in the positive semidefinite matrices of ^K× K such that v_j() = 𝕀 and for all z ∈, we have m_j(z) = ∫_v_j(τ)/τ -z . * If a_j is Hermitian, i.e., a_j=a_j^* for all j ∈ [N] then the union of the supports of v_j iscomparable with the union of the spectra of the a_j in the following sense ⋃_j=1^N v_j ⊂ ⋃_j=1^Na_j + [-2 ^1/2, 2 ^1/2],⋃_j=1^Na_j ⊂ ⋃_j=1^Nv_j + [-^1/2, ^1/2].Using the identification ^2× 2⊗^ L×L≅^K× K for K = 2 L and the definitions in (<ref>) and(<ref>), the lemma is an immediate consequence of Proposition <ref> with a_j = a_j^ζ for j ∈ [N] since the proof of the proposition only uses the qualitative conditions in Assumptions <ref>. Proposition <ref> asserts that there is a measure V_ onwith values in the positive semidefinite elements of ⊂such that for z∈, we haveV_(τ) ∑_j=1^N v_j(τ)⊗ E_jj, (z) = ∫_1/τ - zV_(τ).Clearly, we have V_= for the unique measurewith values in positive semidefinite matrices that satisfies (<ref>). And we have V_= with the self-consistent density of states defined in (<ref>). Note that in this setupρ(τ) = 1/NK∑_j=1^Nv_j(τ) ,with the ^K × K-matrix valued measures v_j defined through (<ref>).In the remainder of the paper, =(m_1, …, m_N) andalways denote the unique solutions of (<ref>) and (<ref>), respectively,connected via (<ref>).We now modify the concept of comparison relation introduced in Convection <ref> so that inequalities are understood up to constants depending only on the model parameters from Assumption <ref>. [Comparison relation]From here on we use the comparison relation introduced in Convection <ref> so that the constants implicitly hidden in this notation maydepend only onK, ℓ, κ_1 from (<ref>) and α^* from (<ref>). Assumptions <ref> imply≲1,≲1. Direct estimates of [] for ∈(^K× K)^N starting from the definition of _i, (<ref>), and using the assumptions(<ref>) and (<ref>) yield the bounds in (<ref>). Similarly to , we now introduce the stability operator of the Dyson equation in vector form, (<ref>).In fact, it is defined through(^K× K)^N → (^K× K)^N,(r_1, …,r_N) (r_i - m_i _i[]m_i)_i=1^N.We remark thatand thusleave the set of block diagonal matricesdefinedin (<ref>) invariant.The operatorsandare the restrictions ofandto . In particular, we have ^-1≤^-1, ^-1≤max{1, ^-1}, ^-1≤^-1,sinceacts as the identity map on the orthogonal complement ^⊥ of the block diagonal matrices. Here, the orthogonal complement is defined with respect to the scalar product onintroduced in (<ref>). Moreover,is invertible if and only ifis invertible.Using (<ref>) the bounds onfrom Lemma <ref> can be translated into bounds on § HERMITIAN KRONECKER MATRICESThe analysis of a non-Hermitian random matrix usually starts with Girko's Hermitization procedure.It provides a technique to extract spectral information about anon-Hermitian matrix from a family of Hermitian matrices (^ζ)_ζ∈ defined through^ζ[ 0 1; 0 0 ]⊗ + [ 0 0; 1 0 ]⊗^* - [0ζ; ζ̅0 ]⊗𝕀, ζ∈.Applying Girko's Hermitization procedure to a Kronecker random matrixas in (<ref>) generates a Hermitian Kronecker matrix ^ζ∈^2× 2⊗^L× L⊗^N× N.However, similarly to our analysis in Section <ref>, we study more general Kronecker matrices ∈^K× K⊗^N× N as in (<ref>) below for K, N∈. This is motivated by the identification ^2× 2⊗^L× L≅^2L× 2L. 
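The basic mechanism behind Girko's Hermitization is elementary: the spectrum of H^ζ consists of the signed singular values of X - ζ, so dist(0, Spec(H^ζ)) equals the smallest singular value of X - ζ, and ζ ∈ Spec(X) if and only if 0 ∈ Spec(H^ζ). The following minimal Python sketch illustrates this; the i.i.d. Gaussian sample for X is only a placeholder ensemble.

```python
import numpy as np

def hermitization(X, zeta):
    """Girko's Hermitization H^zeta = [[0, X - zeta], [X^* - conj(zeta), 0]]."""
    n = X.shape[0]
    B = X - zeta * np.eye(n)
    Z = np.zeros((n, n), dtype=complex)
    return np.block([[Z, B], [B.conj().T, Z]])

rng = np.random.default_rng(1)
n = 300
X = rng.standard_normal((n, n)) / np.sqrt(n)       # placeholder ensemble
zeta = 0.3 + 0.2j

H = hermitization(X, zeta)
# Spec(H^zeta) = {+/- singular values of X - zeta}, hence the two printed
# numbers, dist(0, Spec(H^zeta)) and s_min(X - zeta), agree.
print(np.min(np.abs(np.linalg.eigvalsh(H))),
      np.linalg.svd(X - zeta * np.eye(n), compute_uv=False)[-1])
```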
For K, N ∈, let the random matrix ∈^K× K⊗^N× N be defined through∑_μ=1^ℓα_μ⊗ X_μ + ∑_ν=1^ℓ( β_ν⊗Y_ν+ β_ν^* ⊗ Y_ν^* ) + ∑_i=1^N a_i ⊗ E_ii.Furthermore, we make the following assumptions.Let ℓ∈.For μ∈ [ℓ], let α_μ∈^ K×K be a deterministic Hermitian matrix and X_μ = X_μ^* ∈^N × N a Hermitian random matrix with centered and independent entries (up to the Hermitian symmetry constraint).For ν∈ [ℓ], let β_ν∈^ K×K be a deterministic matrix and Y_ν a random matrix with centered and independent entries.We also assume that X_1,…, X_ℓ, Y_1, …, Y_ℓ are independent. Let a_1, …, a_N ∈^K× K be some deterministic matrices with negative semidefinite imaginary part. We recall that E_ii was defined in (<ref>) and introduce the expectation =∑_i=1^N a_i ⊗ E_ii. Ifis a Hermitian matrix thenas in (<ref>) with the above properties is a Hermitian Kronecker random matrix in the sense of Definition <ref>.As in the setup from (<ref>), the matrices α_1, …α_ℓ,β_1, …, β_ℓ are called structure matrices.Since the imaginary parts of a_1, …, a_N are negative semidefinite, the same holds true for the imaginary part ofand . Hence, the matrix -z𝕀 is invertible for all z ∈. For z∈, we therefore introduce the resolvent (z) ofand its “matrix elements” G_ij(z) P_ij∈^K× K for i,j∈[N] defined through(z) ( - z 𝕀 )^-1, (z) = ∑_i,j=1^N G_ij(z) ⊗ E_ij.We recall that P_ij has been defined in (<ref>). Our goal is to show that G_ij is small for i≠ j and G_ii is well approximated by the deterministic matrix m_i(z) ∈^K× K in the regime where K ∈ is fixedand N ∈ is large.Apart from the above listed qualitative assumptions, we will need the following quantitative assumptions.To formulate them we use the same notation as before, i.e., the entries of X_μ and Y_ν are denoted by X_μ=(x_ij^μ)_i,j=1^N and Y_ν=(y_ij^ν)_i,j=1^Nand their variances by s_ij^μx_ij^μ^2 and t_ij^νy_ij^ν^2 (cf. (<ref>)).We assume that all variances s_ij^μ and t_ij^μ satisfy (<ref>) and the entries x_ij^μ and y_ij^ν of the random matrices fulfillthe moment bounds (<ref>).Furthermore, the structure matrices satisfy (<ref>). In this section, the model parameters are defined to be K, ℓ, κ_1 from (<ref>), the sequence (φ_p)_p ∈ from (<ref>)and α^* from (<ref>), so the relation ≲ indicates an inequality up to a multiplicative constant depending on these model parameters.Moreover, for the real and imaginary part of the spectral parameter z we will write E= z and η =z, respectively. §.§ Error term in the perturbed Dyson equationWe introduce the notion of stochastic domination, a high probability bound up to N^ factors. [Stochastic domination]If Φ = (Φ^(N))_N and Ψ = (Ψ^(N))_Nare two sequences of nonnegative random variables, then we say that Φ is stochastically dominated by Ψ, Φ≺Ψ, if for all >0 and D>0there is a constant C(,D) such that( Φ^(N)≥ N^Ψ^(N)) ≤C(,D)/N^Dfor all N ∈ and the function (, D) ↦ C(,D) depends only on the model parameters.If Φ or Ψ depend on some additional parameter δ and the function (,D) ↦ C(,D) additionally depends on δ then we write Φ≺_δΨ.We set h_ij P_ij∈^K × K. Using P_lm = a_l δ_lm,x_ik^μ = 0, y_ik^ν=0, (<ref>), (<ref>) and(<ref>) we trivially obtain P_ik( -) = h_ik-a_iδ_ik≺ N^-1/2.For B ⊂ [N] we set ^B ∑_i, j = 1^N h_ij^B ⊗ E_ij,h_ij^Bh_ij(i,j ∉ B),and denote the resolvent of ^B by ^B(z) ( ^B - z 𝕀)^-1 for z ∈. Since ^B = ^B ≤ 0 for B ⊂ [N], the matrix (^B - z𝕀) is invertiblefor all z ∈ and ^B(z)≤1/ z. In the following, we will use the convention ∑_k ∈ A^B ∑_k ∈ A ∖ Bfor A, B ⊂ [N] and B ⊂ A.If A = [N] then we simply write ∑_k^B. 
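Before turning to the proof, we illustrate the objects G_ij and m_i numerically. The Python sketch below samples a Hermitian Kronecker matrix as in (<ref>) with the flat variance profile s_ij^μ = t_ij^ν = 1/N and a_i = 0 (one admissible choice, under which the vector Dyson equation collapses to a single K×K equation), solves that equation by damped fixed-point iteration, and compares it with the resolvent blocks; the particular structure matrices are hypothetical and the scheme is an illustrative sketch, not part of the arguments.

```python
import numpy as np

def sample_H(alphas, betas, N, rng):
    """H = sum_mu alpha_mu (x) X_mu + sum_nu (beta_nu (x) Y_nu + beta_nu^* (x) Y_nu^*),
    flat variance profile E|x_ij^mu|^2 = E|y_ij^nu|^2 = 1/N and a_i = 0."""
    K = alphas[0].shape[0]
    H = np.zeros((K * N, K * N), dtype=complex)
    for alpha in alphas:                           # Hermitian alpha_mu and X_mu
        W = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        H += np.kron(alpha, (W + W.conj().T) / np.sqrt(2))
    for beta in betas:                             # arbitrary beta_nu, i.i.d. Y_nu
        Y = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        H += np.kron(beta, Y) + np.kron(beta.conj().T, Y.conj().T)
    return H

def solve_dyson(alphas, betas, z, K, tol=1e-12, max_iter=100000):
    """Damped fixed-point iteration for -m^{-1} = z - S[m], where for the flat
    profile S[m] = sum_mu alpha m alpha + sum_nu (beta m beta^* + beta^* m beta)."""
    m = 1j * np.eye(K)
    for _ in range(max_iter):
        Sm = sum(al @ m @ al for al in alphas) \
           + sum(be @ m @ be.conj().T + be.conj().T @ m @ be for be in betas)
        m_new = -np.linalg.inv(z * np.eye(K) + Sm)
        if np.abs(m_new - m).max() < tol:
            break
        m = 0.5 * (m + m_new)                      # damping stabilizes the iteration
    return m

K, N = 2, 1000
rng = np.random.default_rng(2)
alphas = [np.diag([1.0, -1.0]).astype(complex)]
betas = [np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)]
H = sample_H(alphas, betas, N, rng)
z = 0.2 + 0.5j
G = np.linalg.inv(H - z * np.eye(K * N)).reshape(K, N, K, N)   # G[k, i, l, j] = (G_ij)_{kl}
m = solve_dyson(alphas, betas, z, K)
# Diagonal blocks G_ii approximate m and off-diagonal blocks G_ij are small;
# both errors shrink as N grows, in agreement with the local law below.
print(np.abs(G[:, 0, :, 0] - m).max(), np.abs(G[:, 0, :, 1]).max())
```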
For i ∈[N], starting from the Schur complement formula, -1/G_ii = z - h_ii + ∑_k,l^{i} h_ikG_kl^{i}h_li,and using the definition of _i in (<ref>), we obtain the perturbed Dyson equation-1/g_i =z-a_i+ _i[]+d_i.Here, we introduced g_i G_ii, (g_1,…, g_N)∈ (^K × K)^Nand the error term d_i∈^K× K.We remark that (<ref>) is a perturbed version of the Dyson equation in vector form, (<ref>), and recall thatdenotes itsunique solution (cf. Proposition <ref>).To represent the error term d_i in (<ref>), we use h_ik = a_i δ_ik + ∑_μ x_ik^μα_μ + ∑_ν(y_ik^νβ_ν + y_ki^νβ_ν^* )and writed_i d_i^(1) + … + d_i^(8), where d^(1)_i - h_ii+a_i, d^(2)_i∑_k^{i}( ∑_μα_μ G_kk^{i}α_μ( x_ik^μ^2 -s_ik^μ) + ∑_ν( (y_ik^ν^2 - t_ik^ν) β_ν G_kk^{i}β_ν^* + (y_ki^ν^2 - t_ki^ν) β_ν^*G_kk^{i}β_ν)) , d^(3)_i∑_ν∑_k^{i}( y_ik^νβ_ν G_kk^{i}β_ν y_ki^ν+ y_ki^νβ_ν^* G_kk^{i}β_ν^* y_ik^ν) d^(4)_i( ∑_μ = μ'∑_k ≠ l^{i} + ∑_μ≠μ'∑_k,l^{i}) α_μ x_ik^μ G_kl^{i} x_li^μ'α_μ' ,d^(5)_i(∑_ν = ν'∑_k≠ l ^{i} + ∑_ν≠ν'∑_k,l^{i})( y_ik^νβ_ν + y_ki^νβ_ν^* ) G_kl^{i}( y_li^ν'β_ν' + y_il^ν'β_ν'^* ),d^(6)_i∑_k,l^{i}∑_μ∑_ν( α_μ x_ik^μ G_kl^{i}( y_li^νβ_ν + y_il^νβ_ν^* )+ (y_ik^νβ_ν + y_ki^νβ_ν^* ) G_kl^{i} x_li^μα_μ), d^(7)_i∑_k^{i}( ∑_μα_μ s_ik^μ( G_kk^{i} - G_kk)α_μ+∑_ν( t_ik^νβ_ν(G_kk^{i}- G_kk) β_ν^* + t_ki^νβ_ν^*(G_kk^{i} - G_kk)β_ν) ),d^(8)_i -( ∑_μ s_ii^μα_μ G_iiα_μ+∑_ν t_ii^ν(β_ν G_iiβ_ν^* +β_ν^* G_iiβ_ν)).In the remainder of this section, we consider E= z to be fixed and view quantities likeandonlyas a function of η= z.In the following lemma, we will use the following random control parameters to bound the error terms introduced in (<ref>):Λ_hs(η)1/N[ (E+η)^*(E+η) ]^1/2Λ_w(η)1/√(2N)max_i=1^N [P_ii[(E+η)^*(E+η)+(E+η)(E+η)^*]]^1/2, Λ(η) max_i,j=1^N G_ij(E+η)- m_i(E+η)δ_ij . We remark that due to our conventions, we have= max_i=1^N m_i, ^-1 = max_i=1^N m_i^-1. * Uniformly for η≥ 1 and i≠ j, we haved_i≺1,G_ij≺ η^-2.* Uniformly for η >0, we have(d^(1)_i + … + d^(6)_i) χ ≺ 1/√(N)+Λ_hs+ ^-1Λ_w^2,(d^(7)_i + d^(8)_i) χ ≺ ^-1Λ_w^2 + 1/NG_ii, where χ is the characteristic function χ(Λ≤ (4^-1)^-1). Moreover, uniformly for η>0 and i≠ j, we haveG_ij χ ≺ Λ_w. In the proof of Lemma <ref>, we use the following relation between the entries of ^T and ^T∪{k} G_ij^T = G_ij^T∪{k} + G_ik^T 1/G_kk^T G_kj^Tfor T ⊂ [N], k ∉ T and i,j ∉ T∪{k}. This is an identity of K× K matrices and 1/G_kk^T is understood as the inverse matrix of G_kk^T. The proof of (<ref>) follows from the Schur complement formula.We will prove the bounds in (<ref>) in parallel with the estimated^(1)_i +… + d^(8)_i ≺ 1/√(N) + 1/N( ∑_k,l^{i}G_kl^{i}^2)^1/2 + 1/N∑_k^{i}G^{i}_kk + 1/N∑_k G_kkthat we will use to show (<ref>).The trivial estimate (<ref>) implies that d^(1)_i≺ 1/√(N). In the remaining part of the proof, we will often apply the large deviation bounds with scalar valued random variables from Theorem C.1 in <cit.>.In our case, they will be applied to sums or quadratic forms of independent random variables, whose coefficients are K× K matrices;this generalization clearly follows from the scalar case <cit.> if applied to each entry separately.We first show the following estimated^(2)_i +d^(3)_i ≺ 1/√(N)1/N∑_k^{i}G_kk^{i}^2^1/2.From the linear large deviation bound (C.2) in <cit.>, we conclude that the first term in (<ref>) is bounded by ∑_μα_μ∑_k^{i} G_kk^{i} (x_ik^μ^2 -s_ik^μ)α_μ≺1/N(∑_k^{i}G_kk^{i}^2)^1/2. The second and third term in (<ref>) are estimated similarly with the help of (C.2) in <cit.> which yields (<ref>) for d^(2)_i. 
We apply the linear large deviation bound (C.2) in <cit.> and bound the first term in (<ref>) as follows:∑_ν(∑_k^{i} y_ik^ν y_ki^νβ_ν G_kk^{i}β_ν)≺1/N(∑_k^{i}G_kk^{i} ^2 )^1/2. The bound on the second term in (<ref>) is obtained in the same way. Consequently, we have proved (<ref>).Using the quadratic large deviation bounds (C.4) and (C.3) in <cit.>, we obtain d^(4)_i +d^(5)_i+d^(6)_i ≺ 1/N^2∑_k,l^{i}G_kl^{i}^2^1/2.Moreover, (<ref>) and (<ref>) also imply that d^(2)_i+ … +d^(6)_i are bounded by the second term on the right-hand side of (<ref>).Using (<ref>), (<ref>) and(<ref>), we conclude d^(7)_i ≲ min{1/N∑_k^{i}G_ki1/G_iiG_ik, 1/N∑_k^{i}( G^{i}_kk + G_kk)}.The assumptions (<ref>) and (<ref>) imply d_i^(8)≲G_ii/N.This concludes the proof of (<ref>). Applying (<ref>) to (<ref>), we obtain (<ref>).For all k,l ∉{i}, we now show that G_kl^{i}χ≤G_kl + 4/3^-1G_kiG_il.This immediately yields (<ref>) using (<ref>) and (<ref>). For the proof of (<ref>), we conclude from (<ref>) by dividing and multiplying the second term by m_i thatG_kl^{i} = G_kl - G_ki1/G_iim_i 1/m_i G_il.From the definition of χ in Lemma <ref>, we see that1/m_iG_ij-δ_i j χ≤1/4 , 1/G_iim_i χ ≤ 4/3 ,which proves (<ref>) and hence (<ref>).Since (<ref>) is established for d_i^(8) (cf. (<ref>)),it suffices to use the second bound in (<ref>) to finish the proof of (<ref>) by estimating d_i^(7) via the first term in (<ref>).We now show (<ref>) and (<ref>). The identity G_i j =-∑_k^{j}G_ik^{j} h_k jG_jjand the linear large deviation bound (C.2) in <cit.> imply G_ij ≺ 1/N∑_k^{j}G_ik^{j}^2^1/2G_jj.Using (<ref>) to estimate G_ik^{j} and G_jj, we obtain (<ref>).Applying the estimate (<ref>) and the definition of χ in (<ref>) yield G_ijχ≺G_jjχΛ_w. Hence, the second bound in (<ref>) implies (<ref>) and conclude the proof of Lemma <ref>.For the following computations, we recall the definition of the product and the imaginary part on (^K× K)^N from (<ref>)and (<ref>), respectively. The proof of the following Lemma <ref> is based on inverting the stability operator in the difference equationdescribing - in terms of . We derive this equation first.Subtracting (<ref>) from (<ref>) and multiplying the result from the left by m_i andfrom the right by g_i yieldg_i - m_i = m_i _i[-] m_i + m_i d_i g_i + m_i _i[-](g_i - m_i)for i ∈ [N].Introducing =(d_1, …, d_N) ∈ (^K× K)^N as well as recalling [] = (_i[])_i=1^N, the definition of _i from (<ref>)and [] =- [] from (<ref>), we can write(-) =+ [-] (-).Sinceis invertible for z∈ by Lemma <ref> (<ref>) and (<ref>),applying the inverse ofon both sides of (<ref>) and estimating the norm yields- ≤ ^-1(+ -^2) We recall the definition of ρ from (<ref>).* Uniformly for η≥max{1, E, }, we haveΛ≺η^-2. * Uniformly for η> 0, we have- χΛ≤ϑ ≺ ^-1^2(1/√(N)+Λ_hs+^-1Λ_w^2 ),wheredef of vartheta ϑ 1/4( ^-1 + ^-1) . * Let a_1, …, a_N be Hermitian. We defineψ ^-1^2^-11/N η ,φ ^-1^2(1/√(N)+ √(ρ/N η) + ^-1^21/N η+^-1/Nη)+( √(/N η)+ /Nη) .Then for all δ>0 and uniformly for all η>0 such that ψ(η) ≤ N^-δ we have Λ χΛ≤ϑ ≺_δ φ .Note that the proof of (iii) of Lemma <ref> requiresto be Hermitian because of the use of the Ward identity, (η)^*(η) =η^-1(η).The Ward identity implies P_ii^* = P_ii^* =G_ii/η and hence,Λ_hs = √(G/N η),Λ_w = max_i√( G_ii/N η).We start with the proof of (<ref>).We remark that +≤ 2/η by (<ref>) and (<ref>).Therefore, for η≥max{1,E,}, we conclude from (<ref>) that -≲1/η^2 + 1/η^3.Here, we also used (<ref>), (<ref>) and (<ref>).Since ≺ 1 by (<ref>), we get-≺η^-2 in this η-regime. 
Hence, combined with the bound (<ref>) for the offdiagonal terms,we obtain (<ref>). For the proof of (ii), we also start from (<ref>). Since 2^-1ϑ≤1 by definition of ϑ (cf. (<ref>)) and (Λ≤ϑ) ≤^-1(Λ≤ϑ)≤ 4/3by the second bound in (<ref>), we conclude that Lambda stability bound - χΛ≤ϑ ≤ 8 ^-1/3.Applying (<ref>) to the right hand side and using G_ii≤√(N)Λ_hs, we obtain (<ref>).For the proof of (iii), let nowbe Hermitian. Therefore, (<ref>) is applicable and yieldsΛ_hs = √(G/N η)≲√(ρ/N η) + 1/1/N η + - , Λ_w^2 = (max_i=1^N√( G_ii/N η))^2 ≤/N η +-/N η.Here, we used ≤ + -, = πρ and Young's inequality as well as introduced an arbitrary >0 in the first estimate.We plug these estimates into the right-hand side of (<ref>) and choose N^-γ/(^-1^2) for arbitrary γ>0. Thus, we can absorb - in the estimate on Λ_hs into the left-hand side of (<ref>). Similarly, using ψ(η) ≤ N^-δ we absorb - in the estimate on Λ_w into the left-hand side of (<ref>).This yields (<ref>) for the contribution of the diagonal entries to Λ.For the offdiagonal entries, we use the second relation in (<ref>) and get as before Λ_w = max_i=1^N√( G_ii/N η)≤√(/Nη) + 1/1/Nη + Λ.Using this estimate in (<ref>) and choosing N^-γ/ to absorb Λ into the left-hand side,we obtain (<ref>) for diagonal and offdiagonal entries of . This concludes the proof of Lemma <ref>. Suppose for some deterministic control parameter 0<Φ≤ N^- a local law holds in the formΛ ≺ Φ/^-1 .Then for any deterministic c_1, …, c_N∈^K× K with max_ic_i≤ 1 we have1/N∑_i=1^N c_i^*(G_ii -m_i) ≺ (^-1)^*Φ^2/^-1^2 + max1/√(N),ΦΦ+ ^2/N + Λ^2_w^-1.In (<ref>), the adjoint of ^-1 is understood with respect to the scalar product(·), where we defined the dot-product ·for =(x_1, …, x_N), = (y_1, …, y_N) ∈ (^K× K)^Nvia·1/N∑_i=1^N x_i^* y_i∈^K× K.It is easy to see that ·^-1 = ((^-1)^* ) ·.We set (c_1, …, c_N) and recall = (G_11, …, G_NN) ∈ (^K× K)^N. Using (<ref>), we compute 1/N∑_i=1^N c_i^* (G_ii-m_i)= · (-) = (^*(^-1)^*[])· ( + [-](-)).We rewrite the termnext. Indeed, a straightforward computation starting from the Schur complement formula (<ref>)shows that d_i G_ii= (Q_i 1/G_ii)G_ii + (d^(7)_i +d^(8)_i)G_ii=(Q_i 1/G_ii)m_i+ (Q_i 1/G_ii)( G_ii-m_i) + (d^(7)_i +d^(8)_i)G_ii,where we defined Q_i ZZ - _i Z and the conditional expectation _i Z[ Z | ^{i}]=[ Z | { x^μ_kl, y^ν_kl k, l ∈ [N]∖{i}, μ,ν∈ [ℓ]}] for any random variable Z.The advantage of the representation (<ref>) is that we can apply the following proposition to the first term on the right-hand side. It shows that when Q_i(1/G_ii) is averaged in i, there are certain cancellations taking place such that the average has a smaller order than Q_i (1/G_ii) = O(Λ).The first statement of this type was proved for generalized Wigner matrices in <cit.>. The complete proof in our setup will be presented in Section <ref>. [Fluctuation Averaging]Let Φ be a deterministic control parameter such that 0< Φ≤ N^-.IfFA assumption max_i , j1/m_iG_ij-δ_ij≺Φ,then for any deterministic c_1, …, c_N ∈^K× K satisfying max_i c_i≤ 1 we haveFA bound 1/N∑_i=1^N c_iQ_i 1/G_iim_i ≺max1/√(N),ΦΦ . Note that the assumption (<ref>) directly implies (<ref>). Moreover, (<ref>) yields (Q_i 1/G_ii)( G_ii-m_i) ≤ Q_i( 1/G_ii m_i-𝕀)^-1Λ ≺ Φ^2.Thus, we obtain from (<ref>) and (<ref>)the relation·(-)≺(^-1)^*(1/N∑_i=1^N c_i Q_i1/G_ii m_i + Φ^2 +max_i=1^N (d_i^(7) + d_i^(8))G_ii + Λ^2 ),where =(c_1, …, c_N) ∈(^K× K)^N is a multiple of ^*(^-1)^*[] and ≤ 1. 
From this estimate, we now conclude (<ref>).Since (<ref>) is satisfied by (<ref>)the bound (<ref>) implies that the first term on the right-hand side of (<ref>)is controlled by the right-hand side of (<ref>). For the third term, we use(<ref>) and G_ii≤ + Φ/^-1as well as Φ≤ 1 ≤^-1.Hence, (<ref>) concludes the proof of (<ref>) and Lemma <ref>.§.§ No eigenvalues away from self-consistent spectrumWe now state and prove our result for Hermitian Kronecker matrices , Theorem <ref> below. The theorem has two parts.For simplicity, we state the first part under the condition that =∑_i a_i ⊗ E_ii is bounded. We relax this condition in the second part for the purpose of our main result, Theorem <ref>.In this application, =^ζ=∑_i a_i^ζ⊗ E_ii, where a_i^ζ are given in (<ref>), and we need to deal with unbounded ζ as well. We recall that =(m_1, …, m_N) is the unique solution of (<ref>) with positive imaginary part. Moreover, the function ρ→_+ was defined in (<ref>), the set in Definition <ref> and (z,). We denote Ez and η z. For a matrix , we write () to denote its smallest singular value. Fix K∈.Let =∑_i=1^N a_i ⊗ E_ii be a Hermitian matrixandbe a Hermitian Kronecker random matrix as in (<ref>)such that (<ref>),(<ref>) and (<ref>) are satisfied. * Assume thatis bounded, i.e.,≤κ_4. Thenthere is a universal constant δ >0 such thatfor each D>0, there is a constant C_D >0 such that ( ()⊂{τ∈(τ, ) ≤ N^-δ}) ≥ 1 - C_D/N^D. * Assume now only the weaker bound = max_i=1^N a_i≤ N^κ_7Letbe defined through{ w ∈(w,) ≥ 2 ^1/2 +1, -w𝕀_2/(-w𝕀)≤κ_9 }.Then for each D >0, there is a constant C_D >0 such that ( ()∩ =∅) ≥ 1 - C_D/N^D. The constants C_D in (<ref>) and (<ref>)only depend on K, κ_1, (φ_p)_p≥ 3, α_*, κ_4, κ_7and κ_9 in addition to D. We will prove Theorem <ref> as a consequence of the following Lemma <ref>. This lemma is a type of local law.Its general comprehensive version, Lemma <ref> below, is a standard application of Lemma <ref>, Lemma <ref> andProposition <ref>. For the convenience of the reader, we will givean outline of the proof in Appendix <ref>.We also consider κ_7, κ_8, κ_9from (<ref>) and (<ref>) below, respectively, as model parameters. Fix K∈. Let κ_7>0 and=∑_i=1^N a_i ⊗ E_ii be a Hermitian matrix such that (<ref>) holds true. Letbe a Hermitian Kronecker random matrix as in (<ref>)such that (<ref>),(<ref>) and (<ref>) are satisfied. We define {w ∈(w, ) ≤ 2 ^1/2 +1, _2 ≤κ_8 }, { w ∈(w,) ≥ 2 ^1/2 +1, -w𝕀_2/(-w𝕀)≤κ_9 }. Then there are p∈ and P∈ independent of N and the model parameters such that 1/N∑_i=1^N(G_ii(z) - m_i(z))≺max{ 1, 1/[P]}(1/N+ 1/(Nη)^2)for any z=E+η∈∪ such that E≤ N^κ_7+1 and η≥ N^-1+γ(1+[-p]).We remark that sinceis Hermitian, if _2 is bounded, then the second condition in (<ref>) is automatically satisfied (perhaps with a larger κ_9), given the first one. So for _2≤κ_8, alternatively, we could havedefined the sets{w ∈(w, ) ≤ 2 ^1/2 +1 },{ w ∈(w,) ≥ 2 ^1/2 +1 }.If _2 does not have an N-independent bound, then we could havedefined ∅ andas in (<ref>).The estimate (<ref>) holds as stated with these alternative definitions ofand. (Overwhelming probability) We say that an event A^(N) happens asymptotically with overwhelming probability, a.w.o.p., if for each D>0 there is C_D>0 such that for all N ∈, we have(A^(N)) ≥ 1- C_D/N^D.From (<ref>), we conclude the crude bound max_λ∈λ^2 ≤(^2) = ∑_i,j=1^N h_ij^2 ≺ (1 +^2) N.Therefore, there are a.w.o.p. no eigenvalues ofoutside of [-a,a] with a(1+)√(N). 
We introduce the set A_δ{ω∈(ω, ) ≥ N^-δ} for δ>0.The previous argument proves that there are no eigenvalues in A_δ∖[-a,a] for any δ >0. For the opposite regime,i.e. to showthat A_δ∩ [-a,a] does not contain any eigenvalue ofa.w.o.p. with some small δ>0, we use the following standard lemma and will include a prooffor the reader's convenience at the end of this section.Letbe an arbitrary Hermitian random matrix and (z)(-z𝕀)^-1 its resolvent at z∈. Let Φ→_+ be a deterministic (possibly N-dependent) control parameter such that 1/N(τ +η_0)≺Φ(τ +η_0)for some τ∈ and η_0> 0.* If (Nη_0)^-1≥ N^Φ(τ+η_0) for some >0then() ∩ [τ-η_0, τ + η_0] = ∅ a.w.o.p. * Let ℰ{τ∈ [-N^C,N^C](Nη_0)^-1≥ N^Φ(τ +η_0) } for some C>0 and >0. Furthermore,suppose that η_0≥ N^-c for some c>0 and (<ref>) holds uniformly for allτ∈ℰ.Then () ∩ℰ = ∅ a.w.o.p. We now finish the proof of Theorem <ref>. In fact, by (<ref>) we have a ≲ N^κ_7+1/2, thus we work in the regime |E|≤ N^κ_7+1.We choose Φ(z) ρ(z)+ max{1, [-P]}(1/N + 1/(N z)^2) andη_0 N^-2/3.For small enough δ and γ, we can assume that η_0 ≥ N^-1+γ(1+ (τ+η_0,)^-p) for (τ, ) ≥ N^-δ. Consider first the case when _2≤κ_4, thenandare complements of each other,see the remark at (<ref>), and then(<ref>) is satisfied by (<ref>)for any τ with |τ|≤ N^κ_7+1.Moreover, owing to (<ref>), we haveΦ(E +η_0) ≲ N^2δ/N^2/3 + N^Pδ(1/N + 1/N^2/3)for all E ∈ A_δ∩[-a,a]. Therefore, by possibly reducing δ>0 and introducing a sufficiently small >0, we can assume N^Φ(E +η_0) ≤ N^-1/3=(Nη_0)^-1.Thus, from Lemma <ref> we infer thatdoes not have any eigenvalues in A_δ∩ [-a,a] a.w.o.p.Combined with the argument preceding Lemma <ref>, which excludes a.w.o.p. eigenvalues ofin A_δ∖[-a,a], this proves (<ref>)if _2≤κ_4. Under the weaker assumption _2≤ N^κ_7 the same argument works but only for E∈ since (<ref>) was proven only in this regime. For the proof of part (i), we compute 1/N(τ +η) = 1/N∑_i η/(λ_i - τ)^2 +η^2 .Estimating the maximum from above by the sum, we obtain from the previous identity and the assumption that1/Nmax_i η_0/(λ_i - τ)^2 +η_0^2 ≺Φ≤N^-/Nη_0.We conclude that min_i λ_i - τ≥η_0 a.w.o.p.and hence (i) follows. The part (ii) is an immediate consequence of (i) and a union bound argument using the Lipschitz-continuity in τ on ℰ of the left-hand side of (<ref>) with Lipschitz-constant bounded by N^3(C+c) and the boundedness of ℰ, i.e., ℰ⊂ [-N^C, N^C].§ FLUCTUATION AVERAGING: PROOF OF PROPOSITION <REF>In this section, we prove the Fluctuation Averaging which was stated as Proposition <ref> in the previous section. We fix an even p ∈ and use the abbreviationZ_i c_iQ_i 1/G_iim_i .We will estimate the p-th moment of 1/N∑_i Z_i.For a p-tuple i=(i_1, …, i_p) ∈{1, …,N}^p we call a label i_l a lone label if it appears only once in i. We denote by J_L all tuples i∈{1, …,N}^p with exactly L lone labels. Then we have FA start1/N∑_i=1^N Z_i^p ≤ 1/N^p ∑_L=0^p ∑_i ∈J_LZ_i_1…Z_i_p/2 Z_i_p/2+1…Z_i_p .For i∈ J_L we estimate FA estimateZ_i_1…Z_i_p/2 Z_i_p/2+1…Z_i_p ≺Φ^p+L.Before verifying (<ref>) we show this bound is sufficient to finish the proof. Indeed, using J_L≤ C(p)N^(L+p)/2 and (<ref>) in (<ref>) yields1/N∑_i=1^N Z_i^p ≺ ∑_L=0^p N^-(p-L)/2Φ^p+L ≺ max1/√(N),ΦΦ^p.This implies (<ref>). The rest of the proof is dedicated to showing (<ref>). Since the complex conjugates do not play any role in the following arguments, weomit them in our notation.Furthermore, by symmetry we may assume that {i_1, …, i_L} are the lone labels in i.We we fix ℓ∈{0, …,L} and l ∈{1, …,p}. 
For any K ∈_0 we call a pair (t,T) with t = (t_1, …, t_K-1), T = (T_0, T_01, T_1, T_12,…,T_K-1, T_K-1K, T_K ) , an l-factor (at level ℓ) if for all k ∈{1, …,K-1} and all k' ∈{1, …,K-2} the entries of the pair satisfy t_k ∈{i_1, …, i_ℓ}, T_k, T_k' k'+1 ⊆{i_1, …, i_ℓ} , t_k' ≠ t_k'+1 , t_k ∉T_k , t_k', t_k'+1 ∉T_k'k'+1 , t_1 ≠i_l , t_K-1 ≠i_l, i_l ∉T_0 ∪T_K . Then we associate to such a pair the expression Z_t,T ≔ c_i_l Q_i_l 1/G_i_li_l^T_0 G_i_lt_1^T_01 1/G_t_1t_1^T_1 G_t_1t_2^T_12 1/G_t_2t_2^T_2…1/G_t_K-1t_K-1^T_K-1 G_t_K-1i_l^T_K-1 K 1/G_i_li_l^T_K m_i_l . In particular, for K=0 we have Z_∅,(T_0) ≔ c_i_l Q_i_l 1/G_i_li_l^T_0 m_i_l , Z_∅,(∅) = Z_i_l . We also call d(t,T) ≔ K the degree of the l-factor (t,T). By induction on ℓ we now prove the identity Z_i_1…Z_i_p = ∑_(t,T)∈I_ℓ(±)Z_t_1,T_1…Z_t_p,T_p , where the sign (±) indicates that each summand may have a coefficient +1 or -1 and the sum is over a set I_ℓ that contains pairs of p-tuples t=(t_1, …, t_p) and T=(T_1, …, T_p) such that (t_l,T_l) for all l=1, …,p is an l-factor at level ℓ. Furthermore, for all ℓ∈{0, …,L} the size of I_ℓ and the maximal degree of the l-factors (t_l,T_l) are bounded by a constant depending only on p, and ∑_l=1^p max{1,d(t_l,T_l)}≥p+ℓ , (t,T)∈I_ℓ . The bound (<ref>) follows from (<ref>) and (<ref>) for ℓ=L because Z_t,T ≺Φ^max{1,d(t,T)}, for any l-factor (t,T). We postpone the proof of (<ref>) to the very end of the proof of Proposition <ref>. The start of the induction for the proof of (<ref>) is trivial since for ℓ=0 we can choose the set I_ℓ to contain only one element with (t_l,T_l)=(∅, (∅)) for all l=1, …,p. For the induction step, suppose that (<ref>) and (<ref>) have been proven for some ℓ∈{0, …, L-1}. Then we expand all l-factors (t_l,T_l) with l ≠ℓ+1 within each summand on the right-hand side of (<ref>) in the lone index i_ℓ+1 by using the formulas G^T_ij= G_ij^T∪{k}+G_ik^T 1/G_kk^T G_kj^T , i,j∉{k}∪ T , 1/G_ii^T= 1/G^T∪{k}_ii-1/G_ii^T G_ik^T 1/G_kk^T G_ki^T 1/G^T ∪{k}_ii , i∉{k}∪ T , for k=i_ℓ+1. More precisely, for all l ≠ℓ+1 we use (<ref>) on each factor on the right-hand side of (<ref>) with (t,T)=(t_l,T_l); (<ref>) for the off-diagonal and (<ref>) for the inverse diagonal resolvent entries. Multiplying out the resulting factors, we write Z_t_1,T_1… Z_t_p,T_p as a sum of 2^∑_l ≠ℓ+1 (2d(t_l,T_l)+1) summands of the form Z_t_1,T_1…Z_t_p,T_p , where for all l=1, …, p the pair (t_l,T_l) is an l-factor at level ℓ+1. Note that we did not expand the (ℓ+1)-factor Z_t_ℓ+1, T_ℓ+1. In particular, the only nontrivial conditions for (t_l,T_l) to be an l-factor at level ℓ+1 (cf. (<ref>)), namely t_k ≠ t_k+1, t_1 ≠ i_ℓ+1 and t_K-1 ≠ i_ℓ+1, are satisfied because i_ℓ+1 does not appear as a lower index on the right-hand side of (<ref>) when on the left-hand side (t,T)=(t_l,T_l). Moreover all but one of the summands (<ref>) satisfy ∑_l=1^p d(t_l,T_l)≥p+ℓ+1 , because the choice of the second summand in both (<ref>) and (<ref>) increases the number of off-diagonal resolvent elements in the l-factor that is expanded. The only exception is the summand (<ref>) for which in the expansion in all factors always the first summand of (<ref>) and (<ref>) is chosen. However, in this case all Z_t_l,T_l with l≠ℓ+1 are independent of i_ℓ+1 because this lone index has been completely removed from all factors. We conclude that this particular summand vanishes identically. Thus (<ref>) holds with ℓ replaced by ℓ+1 and the induction step is proven. It remains to verify (<ref>).
For d(t,T)=0 we use that Q_i_l 1/G_i_li_l m_i_l ≤ 1/G_i_li_l m_i_l-𝕀≺Φ , 1/G_i_li_l^T m_i_l- 1/G_i_li_l m_i_l ≺Φ^2 . The first bound in (<ref>) simply uses the assumption (<ref>) while the second bound uses the expansion formulas (<ref>) and (<ref>). For K=d(t,T)>0 we realize that K encodes the number of off-diagonal resolvent entries G^T_ij in (<ref>). In the factors of (<ref>) we insert the entries of M so that (<ref>) becomes usable, i.e. we use 1/G_t_kt_k^T_k G_t_kt_k+1^T_k k+1 = 1/G_t_kt_k^T_k m_t_k 1/m_t_k G_t_kt_k+1^T_k k+1 . Then similarly to (<ref>) we use 1/m_t_k G_t_kt_k+1 ≺Φ , 1/m_t_k G_t_kt_k+1^T_k k+1-1/m_t_k G_t_kt_k+1 ≺Φ^2 , where again the first bound follows from (<ref>) and the second bound from (<ref>) and (<ref>).
§ NON-HERMITIAN KRONECKER MATRICES AND PROOF OF THEOREM <REF>
Since () ⊂_() (cf. (<ref>)) for all >0, Theorem <ref> clearly follows from the following lemma. Under the assumptions of Theorem <ref>, we have that for each ∈(0,1], Δ>0 and D>0, there is a constant C_,Δ,D >0 such that ( _() ⊂_+Δ) ≥ 1- C_,Δ,D/N^D. Let ^ζ be defined as in (<ref>). Note that ζ∈_() if and only if (0,(^ζ)) ≤. We set ≔∑_i=1^N a_i ⊗E_ii. We first establish that _() is contained in D(0,N)≔{ w ∈ : |w|≤ N} a.w.o.p. Similarly, as in (<ref>), using an analogue of (<ref>) for  instead of , we get max_ζ∈ζ^2 ≤(^*) = ∑_i,j=1^N ((P_ij)^* (P_ij))≲∑_i,j=1^N P_ij^2 ≺ (1+ ^2)N. Thus, all eigenvalues of  have a.w.o.p. moduli smaller than (1+)√(N)≤ N. The above characterization of _() and ≤ 1 yield _() ⊂ D(0,N) a.w.o.p. We now fix an ∈ (0,1] and for the remainder of the proof the comparison relation ≲ is allowed to depend on  without indicating that in the notation. In order to show that the complement of _() contains _+Δ^c ∩ D(0,N) a.w.o.p. we will apply Theorem <ref> to ^ζ for ζ∈_+Δ^c ∩ D(0,N). In particular, here we have = ^ζ≔∑_i a_i^ζ⊗ E_ii , where a_i^ζ is defined as in (<ref>). Now, we conclude that (^ζ)∩ [--Δ/2,+Δ/2] = ∅ a.w.o.p. for each ζ∈_+Δ^c∩ D(0,N). If ζ is bounded, hence ^ζ is bounded, we can use (<ref>) and we need to show that [--Δ/2,+Δ/2] ⊂{τ∈ : (τ, ^ζ) ≥ N^-δ} but this is straightforward since ζ∈_+Δ^c implies (0, ^ζ)≥+Δ by its definition. For large ζ we use part (ii) of Theorem <ref> and we need to show that [--Δ/2,+Δ/2]+η⊂ for any small η. Take z∈ with |z|≤+Δ/2. If |ζ| ≥ + 2^1/2 +2, then (z,(^ζ))≥ 2^1/2 +1, so the first condition in the definition (<ref>) of  is satisfied. The second condition is straightforward since for large ζ and small z, both ^ζ-z𝕀_2 and (^ζ-z𝕀) are comparable with |ζ|. Hence, Theorem <ref> is applicable and we conclude that (^ζ)∩ [--Δ/2,+Δ/2] = ∅ a.w.o.p. for all ζ∈_+Δ^c. If λ_1(ζ)≤…≤λ_2LN(ζ) denote the ordered eigenvalues of ^ζ then λ_i(ζ) is Lipschitz-continuous in ζ by the Hoffman-Wielandt inequality. Therefore, introducing a grid in ζ and applying a union bound argument yields sup_ζ∈_+Δ^c∩ D(0,N)(0,(^ζ)) ≤ a.w.o.p. Since ζ∈_() if and only if (0,(^ζ))≤ we obtain _()∩_+Δ^c ∩ D(0,N) = ∅ a.w.o.p. As we proved _()∩ D(0,N)^c = ∅ a.w.o.p. before, this concludes the proof of Lemma <ref>.
§ AN ALTERNATIVE DEFINITION OF THE SELF-CONSISTENT -PSEUDOSPECTRUM
Instead of the self-consistent -pseudospectrum _ introduced in (<ref>) one may work with the deterministic set _ from (<ref>) when formulating our main result, Theorem <ref>. The advantage of the set _ is that it only requires solving the Hermitized Dyson equation (<ref>) for spectral parameters z along the imaginary axis. The following lemma shows that _ and _ are comparable in the sense that for any  we have __1⊆_⊆__2 for certain _1,_2.
Let  be the solution to the Hermitized Dyson equation (<ref>) and suppose Assumptions <ref> are satisfied. There is a positive constant c, depending only on model parameters, such that for any ∈ (0,1) we have the inclusions _ ⊆ _√() , _c^27 ⊆ _ , where _ is the self-consistent -pseudospectrum from (<ref>) and _ is defined in (<ref>). The inclusion _⊆_√() is trivial because m_j^ζ is the Stieltjes transform of v_j^ζ. So we concentrate on the inclusion _c^27⊆_. We fix ζ∈∖_ and suppress it from our notation in the following, i.e. =^ζ, v_j=v_j^ζ, etc. Recall that by assumption we have (cf. (<ref>)) ≲ 1. Since any large enough ζ is contained in both sets ∖_ and ∖_ by (<ref>) and the upper bound in (<ref>), we may assume that ζ≲ 1. We use the representation of m_i as the Stieltjes transform of v_i and that v_i has bounded support to see x m_i(z) y ≤ 1/2∫_ ( x v_i(τ) x + y v_i(τ) y )/τ-z ≲ 1/η ( x m_i(z) x + y m_i(z) y ) , for any x,y ∈^K, where K=2L. In particular m_i(z) ≲ m_i(z)/η . Fix an η∈ (0,1) for which the inequality 1/η(η) ≤ 2/ holds true. Since ζ∈∖_ such an η can be chosen arbitrarily small. Then we have (η) ≲ 1/ , (η)^-1 ≲ 1/ , η≲ m_i(η) ≲η/ . The first inequality follows from (<ref>) and (<ref>), the second inequality from (<ref>) and the third from (<ref>) and the bounded support of v_i. In particular, by the formula (<ref>) for the norm of  we have 1-(η) ≳^4 . To see (<ref>) we simply follow the calculation in the proof of Lemma <ref> but instead of using the bounds (<ref>), (<ref>) and (<ref>) on  and ^-1 and m_i we use (<ref>). Similarly we find C_C_^-1 ≲ 1/^3 , C_√()C_√()^-1 ≲ 1/ . By (<ref>) we conclude ^-1_sp ≲ 1/^8 . Using (<ref>) and the bound on  in (<ref>) we improve this bound on the ·_sp-norm to a bound on the ·-norm, ^-1 ≲ 1/^12 . We are therefore in the linear stability regime of the Dyson equation and from the stability equation (cf. (<ref>)) for the difference Δ≔ (z)-(η), i.e. from L[Δ] =(z-η)(η)^2 + 1/2(η)(S[Δ]Δ+ΔS[Δ])(η), we infer (z)-(η) ≲ ^-1^2 |z-η| ≲ |z-η|/^14 , for any z ∈ with |z-η| ≤ C/(^-1^2^3) ≲ ^27 , where C∼1 is a constant depending only on model parameters. Note that in (<ref>) we symmetrized the quadratic term in Δ which can always be done since every other term of the equation is invariant under taking the Hermitian conjugate. In fact, we see that  can be extended analytically to an ^27-neighborhood of η. Since η can be chosen arbitrarily small we find an analytic extension of  to all z ∈ with z≤ c^27 for some constant c ∼ 1. We denote this extension by the same symbol =(m_1, …,m_N) as the solution to the Dyson equation. By definition of _ we have m_i(0)=0 and it is easy to see by the following argument that for any z ∈ the imaginary part still vanishes as long as we are in the linear stability regime. Thus ρ^ζ([-c^27,c^27])=0: The stability equation (<ref>) evaluated at η=0 and z ∈ is an equation on the space {Δ∈ (^K × K)^N: Δ_i^* =Δ_i, i=1, …,N}, i.e. for any Δ in this space both sides of the equation remain inside this space. Thus by the implicit function theorem applied within this subspace of (^K × K)^N we conclude that the solution to (<ref>) satisfies Δ=Δ^*, or equivalently Im Δ=0, for z ∈ inside the linear stability regime. Since ρ^ζ([-c^27,c^27])=0 we thus obtain ζ∈∖_c ^27 which yields the missing inclusion.
§ PROOFS OF THEOREM <REF> AND LEMMA <REF>
For the reader's convenience, we now state and prove the local law for , Lemma <ref> below.
Its first part is designed for all spectral parameters z, where the Dyson equation, (<ref>), is stable and its solution  is bounded; here the local law holds down to the scale η = Im z ≥ N^-1+γ that is optimal near the self-consistent spectrum. The second part is valid away from the self-consistent spectrum; in this regime the Dyson equation is always stable and the local law holds down to the real line, however the dependence of our estimate on the distance from the spectrum is not optimized. For the proof of Lemma <ref>, the second part is sufficient, but we also give the first part for completeness. For simplicity we state the first part under the condition that =∑_i a_i ⊗ E_ii is bounded; in the second part we relax this condition to include the assumptions of Lemma <ref>. From now on, we will also consider κ_4, …, κ_9 from (<ref>), (<ref>), (<ref>) and (<ref>) below, respectively, as model parameters. Fix K∈. Let =∑_i=1^N a_i ⊗ E_ii be a deterministic Hermitian matrix. Let  be a Hermitian random matrix as in (<ref>) satisfying Assumptions <ref>, i.e., (<ref>), (<ref>) and (<ref>) hold true.
* (Stable regime) Let γ, κ_4, κ_5, κ_6>0. Assume that _2≤κ_4 and define ≔{w ∈ : sup_s ≥ 0 (w+ is)≤κ_5, sup_s≥ 0^-1(w+ is) ≤κ_6 and Im w≥ N^-1+γ }. Then, we have max_i,j=1^N G_ij(z) -m_i(z)δ_ij≺ 1/(1 + η)√((z)/Nη) + 1/((1 + η^2) √(N)) + 1/((1+η^2) Nη) uniformly for z ∈. Moreover, if c_1, …, c_N ∈^K× K are deterministic and satisfy max_i=1^N c_i≤ 1 then we have 1/N∑_i=1^N [c_i( G_ii(z) - m_i(z) )] ≺ 1/(1+η)(1/Nη +1/N) uniformly for z ∈.
* (Away from the spectrum) Let κ_7, κ_8, κ_9 >0 be fixed. Assume that (<ref>) holds true and  and  are defined as in (<ref>). Then there are universal constants δ>0 and P ∈ such that max_i,j=1^N G_ij(z) -m_i(z)δ_ij≺ max{1/[2], 1/[P]}1/√(N) uniformly for z ∈(∩{ w ∈ : (w) ≥ N^-δ}) ∪. Moreover, if c_1, …, c_N ∈^K× K are deterministic and satisfy max_i=1^N c_i≤ 1 then we have 1/N∑_i=1^N [c_i( G_ii(z) - m_i(z) )] ≺max{1/[2], 1/[P]}1/N uniformly for z ∈(∩{ w ∈ : (w) ≥ N^-δ})∪.
The local laws (<ref>) and (<ref>) hold as stated with the alternative definitions of the sets  and  given after Lemma <ref>. Let  be the unique solution of (<ref>) with positive imaginary part, where α_μ α_μ, β_ν 2β_ν = β_ν + γ_ν^* and a_j a_j. Defining ρ_N as in (<ref>), it is now a standard exercise to obtain (<ref>) from (<ref>), since z ↦ (NL)^-1((_N-z)^-1) is the Stieltjes transform of μ__N. We start with the proof of part (i). For later use, we will present the proof for all spectral parameters z in a slightly larger set than , namely in the set ' ≔{w ∈ : sup_s ≥ 0 (1+ -w- is_2)(w+ is)≤κ_5, sup_s≥ 0^-1(w+ is) ≤κ_6 and Im w≥ N^-1+γ}. Under the condition ≤κ_4, it is easy to see ⊂' perhaps with somewhat larger κ-parameters. Furthermore, we relax the condition _2≤κ_4 to _2≤ N^κ_7 with some positive constant κ_7. We also restrict our attention to the regime |E|≤ N^κ_7+1 since the complementary regime will be covered by the regime (<ref>) in part (ii). Let φ and ψ be defined as in part (iii) of Lemma <ref> and recall the definition of ϑ from (<ref>). Proof of (<ref>): We first show that Λ(E+iη) ≺φ uniformly for E +iη∈' and |E|≤ N^κ_7+1. We start with some auxiliary estimates. By the definition of ' in (<ref>) and setting ≔(a_1, …, a_N), we have (z) ≲ 1/(1 + -z) ≲ 1, uniformly for z ∈'. We remark that =. We now verify that, uniformly for z ∈', we have (z)^-1(z)≲ 1. Applying  to (<ref>) as well as using (<ref>) and (<ref>), we get that ^-1(z) ≲ -z + 1 ≲ 1 + z + for z ∈'. Thus, combining the first bounds in (<ref>) and in (<ref>) yields (<ref>).
From the definition of ' in (<ref>), using (<ref>), (<ref>) and (<ref>), we obtain ^-1 ≲ 1, (^-1)^* ≲ 1, where the adjoint is introduced above (<ref>). We will now use part (iii) of Lemma <ref> to prove (<ref>). To check the condition ψ(η) ≤ N^-δ in that lemma, we use (<ref>), (<ref>) and (<ref>) to obtain ψ(η) ≲ 1/(Nη). Hence, ψ(η) ≤ N^-γ/2 for η≥ N^-1+γ and we choose δ =γ/2 in (<ref>). We now estimate φ and ϑ in our setting. From (<ref>), (<ref>) and (<ref>), we conclude that φ≲Ψ, where we introduced the control parameter Ψ≔√(/Nη) + /√(N) + /Nη . We note that the factor  is kept in the bound φ≲Ψ and the definition of Ψ to control ^-1 factors via (<ref>) later and to track the correct dependence of the right-hand sides of (<ref>) and (<ref>) on η. For the second purpose, we will use the following estimate. Combined with (<ref>), the bound (<ref>) yields ≲ 1/(1 + ). For ϑ, we claim that ϑ≳ (1+ z+ )^-1, ϑ≳. Indeed, for the first bound, we apply (<ref>), (<ref>), (<ref>) and the second bound in (<ref>) to the definition of ϑ, (<ref>). Using (<ref>) instead of (<ref>) and (<ref>) yields the second bound. Now, to prove (<ref>), we show that (Λ≤ϑ) =1 a.w.o.p. for η≥ N^-1+γ on the left-hand side of (<ref>). The first step is to establish Λ≤ϑ for large η. For η≥max{1,E, }, we have Λ≺η^-2 by (<ref>). By (<ref>), we have ϑ≳η^-1 for η≥max{1,E, }. Therefore, there is κ>κ_7 +1 such that Λ (η) ≤ϑ(η) a.w.o.p. for all η≥ N^κ. Together with (<ref>), this proves (<ref>) for η≥ N^κ. The second step is a stochastic continuity argument to reduce η for the domain of validity of (<ref>). The estimate (<ref>) asserts that Λ cannot take on any value between φ and ϑ with very high probability. Since η↦Λ(η) is continuous, Λ remains bounded by φ for all values of η as long as φ is smaller than ϑ. The precise formulation of this procedure is found e.g. in Lemma A.2 of <cit.> and we leave the straightforward check of its conditions to the reader. The bound (<ref>) yields (<ref>) in the regime |E|≤ N^κ_7+1. Proof of (<ref>): We apply Lemma <ref> with Φ≔^-1φ. The condition (<ref>) is satisfied by the definition of Φ and (<ref>). Since Φ≲Ψ it is easily checked that all terms on the right-hand side of (<ref>) are bounded by max{N^-1/2, Ψ}Ψ. Therefore, using (<ref>) and (<ref>), the averaged local law, (<ref>), yields 1/N∑_i=1^N c_i(G_ii -m_i )≺max{1/√(N),Ψ}Ψ≲ 1/(1 + )((z)/Nη+ 1/N + 1/(1 + [2])1/(Nη)^2) for any c_1, …, c_N ∈^K× K such that max_i c_i≤ 1. Owing to ≲ 1 by (<ref>), the bound (<ref>) follows. We now turn to the proof of (ii) which is divided into two steps. In the first step, we show Lemma <ref>. Therefore, we will follow the proof of (<ref>) with the bounds (<ref>) and (<ref>) replaced by their weaker analogues (<ref>) and (<ref>) below that deteriorate as  becomes small. After having completed Lemma <ref>, we immediately get Theorem <ref> via the proof given in Section <ref>. Finally, in the second step, proceeding similarly as in the proof of (i), the bounds (<ref>) and (<ref>) will be obtained from Theorem <ref>. We first give the replacements for the bounds (<ref>) and (<ref>) that served as inputs for the previous proof of part (i).
The replacement for (<ref>) is a direct consequence of (<ref>): ≤ 1/. The replacement of (<ref>) is the bound ^-1 + (^-1)^*≲ 1 + 1/[26], which is obtained by distinguishing the regimes ^2> 1/2 and ^2 ≤ 1/2. In the first regime, we conclude from (<ref>) and (<ref>) that ^-1 + (^-1)^*≲ 1 + ^2 + ^9^-1^9/(^4[8]) ≲ 1 + 1/[26], where we used the lower bound on  given by the definition of the regime and ≲ 1 as well as the bound ^-1≲ 1/[2] that is proven as (<ref>) below. In the second case, we use the simple bound ^-1 + (^-1)^*≤ 2/(1- ^2 ) ≤ 4. Thus, (<ref>) yields (<ref>). Next, we will check that the following weaker version of (<ref>) holds (z + is)^-1(z + is)≲ 1+ 1/d^2_ρ(z+ is) for all z ∈∪ and s ≥ 0. This is straightforward for z ∈ since in this case |z|, _2 and  all remain bounded (see (<ref>)), so similarly to (<ref>) we have ^-1(z + is)≲ 1+ s +(z + is). For |s|≤ C (<ref>) directly follows from (<ref>), while for large s we have (z + is)≲ s^-1 and ^-1(z + is)≲ s, so (<ref>) also holds. Suppose now that z ∈. In this regime z is far away from the spectrum of , so by (<ref>) we know that ( z +is, A)∼( z+is, )≥ 1. This means that (z+ is)≲ 1/( z+is, ) ∼ 1/( z+is, A) = 1/(-(z+ is)𝕀), and hence from the Dyson equation 1/(z+ is)≤-(z+ is)𝕀_2+ ≲-(z+ is)𝕀_2. Since  is Hermitian, we have the bound -(z+ is)𝕀_2/(-(z+ is)𝕀)≤-z𝕀_2/(-z𝕀)≤κ_9 for any s≥ 0, where the first inequality comes from the spectral theorem and the second bound is from the definition of . Therefore (-(z+ is)𝕀)∼-(z+ is)𝕀_2, and thus (<ref>) follows from (<ref>) and (<ref>). Now we can complete Step 1 by following the proof of part (i) but using (<ref>), (<ref>) and (<ref>) instead of (<ref>), (<ref>) and (<ref>), respectively. It is easy to see that only these three estimates on , ^-1 and ^-1 were used as inputs in this argument. The resulting estimates are weaker by multiplicative factors involving a certain power of 1+ 1/. We thus obtain a version of (<ref>) for η≥ N^-1+γ(1+[-p]) with (1+)^-1 replaced by max{1,[-P]} for some explicit p, P ∈. Thus, applying (<ref>) to estimate  in (<ref>) instead of ≲ 1 and possibly increasing P yields (<ref>). Step 2: Continuing the proof of part (ii) of Lemma <ref>, we draw two consequences from Theorem <ref> and the fact that  is the Stieltjes transform of a positive semidefinite matrix-valued measure V_ supported on  with V_() = 𝕀. Let δ>0 be chosen as in Theorem <ref>. Since the spectrum of  is contained in {ω∈ : (ω,) ≤ N^-δ} a.w.o.p. by Theorem <ref>, we have ≲ 1/, ≲η/[2]𝕀 a.w.o.p. for all z∈ satisfying ≥ N^-δ/2. Therefore, (<ref>) implies for all z∈ satisfying ≥ N^-δ/2 that Λ_hs + Λ_w≺ 1/√(N). Since  is the Stieltjes transform of V_ defined in (<ref>) and V_() = 𝕀 and  is the Stieltjes transform of V_ we conclude that there is κ>0 such that Λ≲-≲ |z|^-2 a.w.o.p. uniformly for all z∈ satisfying |z|≥ N^κ. Here, we used that V_⊂ and hence ( V_) ≲ N^κ_7+1 by (<ref>) and (<ref>) as well as ( V_) ≤() ≲ N^κ_7+1 a.w.o.p. by Theorem <ref>. Hence, owing to (<ref>) and (<ref>), by possibly increasing κ>0, we can assume that Λ≤ϑ a.w.o.p. for all z∈∪ satisfying |z|≥ N^κ. Thus, to estimate - we start from (<ref>) and use (<ref>), (<ref>), (<ref>) and (<ref>) to obtain an explicit P ∈ such that -≺max{[-1], [-P]} N^-1/2 a.w.o.p. For the off-diagonal terms of , we apply (<ref>) to (<ref>). This yields Λ≺max{1/,1/[P]}1/√(N) for z∈∪ satisfying |z|≥ N^κ. Employing the stochastic continuity argument from Lemma A.2 in <cit.> as before, we obtain (<ref>) for all z ∈∪ satisfying ≥ N^-δ/2. We use (<ref>) in (<ref>), replace P by P+1 and δ by δ/2. Thus, we have proven (<ref>) for all z∈∪ satisfying ≥ N^-δ.
Notice that this argument covers the case |E|≥ N^κ_7+1 as well, which was left open in Step 1. For the proof of (<ref>), we set Φ≔ (√(N))^-1 and apply Lemma <ref>. Its assumption Λ≺Φ/^-1 is satisfied by (<ref>) and (<ref>). Using (<ref>), (<ref>), (<ref>) and (<ref>), this proves (<ref>) and hence concludes the proof of Lemma <ref>.

Y. Ahmadian, F. Fumarola, and K. D. Miller, Properties of networks with partially structured and partially random connectivity, Phys. Rev. E 91 (2015), 012820.
O. Ajanki, L. Erdős, and T. Krüger, Universality for general Wigner-type matrices, Prob. Theor. Rel. Fields 169 (2017), no. 3-4, 667–727.
O. Ajanki, L. Erdős, and T. Krüger, Stability of the matrix Dyson equation and random matrices with correlations, Prob. Theor. Rel. Fields (2018), doi:10.1007/s00440-018-0835-z (Online first).
A. Aljadeff, D. Renfrew, and M. Stern, Eigenvalues of block structured asymmetric random matrices, J. Math. Phys. 56 (2015), no. 10, 103502.
J. Aljadeff, M. Stern, and T. Sharpee, Transition to chaos in random networks with cell-type-specific connectivity, Phys. Rev. Lett. 114 (2015), 088101.
J. Alt, L. Erdős, and T. Krüger, Local inhomogeneous circular law, arXiv:1612.07776v3, 2016.
J. Alt, L. Erdős, and T. Krüger, Local law for random Gram matrices, Electron. J. Probab. 22 (2017), no. 25, 41 pp.
G. W. Anderson, Convergence of the largest singular value of a polynomial in independent Wigner matrices, Ann. Probab. 41 (2013), no. 3B, 2103–2181.
G. W. Anderson, A. Guionnet, and O. Zeitouni, An introduction to random matrices, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2010.
Z. Bai and J. W. Silverstein, No eigenvalues outside the support of the limiting spectral distribution of information-plus-noise type matrices, Random Matrices Theory Appl. 1 (2012), no. 1, 1150004, 44.
Z. D. Bai and Y. Q. Yin, Limiting behavior of the norm of products of random matrices and two problems of Geman-Hwang, Prob. Theor. Rel. Fields 73 (1986), no. 4, 555–569.
Z. D. Bai and Y. Q. Yin, Necessary and sufficient conditions for almost sure convergence of the largest eigenvalue of a Wigner matrix, Ann. Probab. 16 (1988), no. 4, 1729–1741.
S. T. Belinschi and M. Capitaine, Spectral properties of polynomials in independent Wigner and deterministic matrices, J. Funct. Anal. 273 (2017), no. 12, 3901–3963.
C. Bordenave and M. Capitaine, Outlier eigenvalues for deformed i.i.d. random matrices, Comm. Pure Appl. Math. 69 (2016), no. 11, 2131–2194.
C. Bordenave, P. Caputo, D. Chafaï, and K. Tikhomirov, On the spectral radius of a random matrix, arXiv:1607.05484, 2016.
C. Bordenave and D. Chafaï, Around the circular law, Probab. Surveys 9 (2012), 1–89.
P. Bourgade, H.-T. Yau, and J. Yin, Local circular law for random matrices, Prob. Theor. Rel. Fields 159 (2014), no. 3-4, 545–595.
M. Capitaine and C. Donati-Martin, Strong asymptotic freeness for Wigner and Wishart matrices, Indiana Univ. Math. J. 56 (2007), 767–804.
L. Erdős, T. Krüger, and Yu. Nemish, Local spectral analysis of polynomials in random matrices, in preparation, 2018.
L. Erdős, T. Krüger, and D. Schröder, Random matrices with slow correlation decay, arXiv:1705.10661, 2017.
L. Erdős, A. Knowles, H.-T. Yau, and J. Yin, The local semicircle law for a general class of random matrices, Elect. J. Probab. 18 (2013), no. 59, 1–58.
L. Erdős, H.-T. Yau, and J. Yin, Universality for generalized Wigner matrices with Bernoulli distribution, J. Comb. 2 (2011), no. 1, 15–82.
S. Geman, The spectral radius of large random matrices, Ann. Probab. 14 (1986), no. 4, 1318–1328.
V. L. Girko, Theory of stochastic canonical equations: Volumes I and II, Mathematics and Its Applications, Springer Netherlands, 2012.
U. Haagerup and S. Thorbjørnsen, A new application of random matrices: Ext(C^*_red(F_2)) is not a group, Ann. of Math. 162 (2005), no. 2, 711–775.
H. M. Hastings, F. Juhasz, and M. A. Schreiber, Stability of structured random matrices, Proceedings: Biological Sciences 249 (1992), no. 1326, 223–225.
J. W. Helton, R. Rashidi Far, and R. Speicher, Operator-valued semicircular elements: Solving a quadratic matrix equation with positivity constraints, Int. Math. Res. Notices 2007 (2007), Art. ID rnm086.
B. Khoruzhenko, Large-N eigenvalue distribution of randomly perturbed asymmetric matrices, J. Phys. A 29 (1996), no. 7, L165–L169.
R. M. May, Will a large complex system be stable?, Nature 238 (1972), 413–414.
K. Rajan and L. F. Abbott, Eigenvalue spectra of random matrices for neural networks, Phys. Rev. Lett. 97 (2006), 188104.
T. Tao, Outliers in the spectrum of iid matrices with bounded rank perturbations, Prob. Theor. Rel. Fields 155 (2013), no. 1, 231–263.
T. Tao, V. Vu, and M. Krishnapur, Random matrices: Universality of ESDs and the circular law, Ann. Probab. 38 (2010), no. 5, 2023–2065.
E. P. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann. of Math. 62 (1955), no. 3, 548–564.
http://arxiv.org/abs/1706.08343v3
{ "authors": [ "Johannes Alt", "Laszlo Erdos", "Torben Krüger", "Yuriy Nemish" ], "categories": [ "math.PR", "math-ph", "math.MP", "60B20, 15B52" ], "primary_category": "math.PR", "published": "20170626124013", "title": "Location of the spectrum of Kronecker random matrices" }
A. C. Rager 1,2, J. C. Dorelli 2, D. J. Gershman 2, V. Uritsky 1,2, L. A. Avanov 2,3, R. B. Torbert 4,5, J. L. Burch 5, R. E. Ergun 6, J. Egedal 7, C. Schiff 2, J. R. Shuster 2,3, B. L. Giles 2, W. R. Paterson 2, C. J. Pollock 8, R. J. Strangeway 9, C. T. Russell 9, B. Lavraud 10, V. N. Coffey 11, Y. Saito 12

1Catholic University of America, Washington DC, USA 2NASA Goddard Space Flight Center, Greenbelt, MD, USA 3University of Maryland, College Park, MD, USA 4University of New Hampshire, Durham, NH, USA 5Southwest Research Institute, San Antonio, TX, USA 6University of Colorado Boulder, Boulder, CO, USA 7University of Wisconsin, Madison, WI, USA 8Denali Scientific, Healy, AK 9University of California, Los Angeles, CA, USA 10Research Institute in Astrophysics and Planetology, Toulouse, France 11NASA Marshall Space Flight Center, Huntsville, AL, USA 12Institute for Space and Astronautical Science, Sagamihara, Japan

A. C. Rager ([email protected])

* Diamagnetic drift explains out-of-plane current in regions of deviation from frozen-in flux
* Perpendicular crescents exist in regions where electrons are diamagnetically drifting
* New technique for extracting 7.5 ms electron moments produces reliable data

We report Magnetospheric Multiscale observations of electron pressure gradient electric fields near a magnetic reconnection diffusion region using a new technique for extracting 7.5 ms electron moments from the Fast Plasma Investigation. We find that the deviation of the perpendicular electron bulk velocity from E × B drift in the interval where the out-of-plane current density is increasing can be explained by the diamagnetic drift. In the interval where the out-of-plane current is transitioning to in-plane current, the electron momentum equation is not satisfied at 7.5 ms resolution.

§ INTRODUCTION
Magnetic reconnection is often invoked to explain the rapid conversion of magnetic energy into plasma energy in astrophysical and laboratory plasmas. In our solar system, magnetic reconnection is the primary mode by which the solar wind couples electrodynamically to magnetized bodies, producing open magnetic topologies and enabling the transport of mass, momentum and energy from the solar wind into planetary magnetospheres. While computer simulations have produced a wealth of predictions about the electron scale properties of reconnection <cit.>, there have been few direct measurements to test these predictions. The Magnetospheric Multiscale (MMS) mission was designed to study the basic physics of magnetic reconnection in Earth's magnetosphere, resolving both the fields and plasma on electron time scales for the first time <cit.>. MMS consists of four spacecraft flying in a close tetrahedral formation (nominal spacecraft separations are ∼ 10 km). The close formation and high quality of the MMS tetrahedron permit the accurate calculation of sub-ion scale spatial gradients, allowing for the first time a direct test of exact plasma fluid equations. The MMS Fast Plasma Investigation (FPI) uses a suite of 64 top-hat spectrometers to sample the three-dimensional velocity space every 30 (150) ms for electrons (ions) <cit.>. The 30 ms resolution electron measurements from the FPI Dual Electron Spectrometer (DES) have resulted in the first electron scale measurements of a dayside magnetopause current sheet associated with magnetic reconnection <cit.>.
<cit.> reported electron crescent shaped velocity distributions consistent with those observed in two-dimensional particle-in-cell (PIC) simulations near the electron diffusion region (EDR) (e.g., <cit.>), suggesting that the EDR was contained within the MMS tetrahedron. Several explanations of the electron crescent distributions have since appeared in the literature. <cit.> modeled electron Speiser orbits <cit.> in a one-dimensional current sheet with a normal electric field, using Liouville's theorem to show how crescents can be produced from an assumed isotropic velocity distribution at the magnetic neutral sheet. <cit.> used a similar model to explain the crescents as a consequence of cusp-like electron orbits resulting from acceleration by the normal electric field. Both <cit.> and <cit.> invoke meandering electron orbits to explain the crescents, suggesting that the observation of crescents can, by comparison with two-dimensional PIC simulations, be used to infer proximity to the EDR. In contrast, <cit.> argue that the electron crescents can be understood by a simple drift-kinetic model in which the non-gyrotropic electron distributions observed by MMS can be expressed in terms of an equivalent guiding center distribution: f( x,v, t) = F_g( X_g,v-v_g, v_∥, v_⊥, t) where x and v are the electron position and velocity, v_g is the guiding center drift (including the E × B, magnetic gradient and curvature drifts), X_g ≡ x - ρ( x,v, t) is the electron guiding center location, ρ≡ v× b/Ω_e is the electron gyroradius vector, b is the unit vector in the direction of the magnetic field B, Ω_e = q B/(m_e c) is the electron gyrofrequency, and v_∥ and v_⊥ are the electron velocity components parallel and perpendicular to the magnetic field. Note that Eqn. (<ref>) makes no assumption about the size of the electron gyroradius relative to the scale over which F_g( X_g,v-v_g, v_∥, v_⊥, t) varies; one only assumes that in the frame of the guiding center drift all of the gyrophase dependence of f( x,v, t) can be explained by spatial structure of the gyrotropic guiding center distribution. In particular, Eqn. (<ref>) allows large deviations from gyrotropy in the electron velocity phase space density measured at a given point and associated perpendicular currents despite the fact that the electrons are strongly magnetized. To understand how strongly magnetized electrons can produce a significant perpendicular current in the E × B frame, we consider the electron momentum equation, neglecting the inertia terms: n eE + n eV_e × B/c + ∇· P_e = 0 where n is the plasma density (quasineutrality assumed), V_e is the electron bulk velocity, and P_e is the electron pressure tensor (defined in the electron bulk flow frame). Separating the electron pressure tensor into its gyrotropic and non-gyrotropic components, P_e = P_eg + Π_e (where P_eg = P_e∥ b b + P_e⊥( I -b b), P_e⊥ = [Tr( P_e)- P_e∥]/2, and Π_e is the non-gyrotropic component), the perpendicular component of (<ref>) can be written as follows: n eE_⊥ = - n eV_e × B/c - ∇_⊥ P_e⊥ - (P_e∥-P_e⊥) κ - (∇·Π_e)_⊥ where κ = b·∇ b is the magnetic curvature. Equation (<ref>) shows that the electron perpendicular bulk velocity can differ significantly from cE× B/B^2 (where B is the magnetic field magnitude) even when the divergence of the non-gyrotropic component of the pressure tensor vanishes.
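To make the perpendicular momentum balance above concrete, the following minimal sketch evaluates the E × B drift and the electron diamagnetic drift, v_dia = -∇_⊥P_e⊥ × B/(q_e n B^2), for a single synthetic measurement. All numerical values are placeholder, magnetopause-like numbers in SI units chosen for illustration; they are not MMS data.

```python
import numpy as np

# Minimal sketch of the perpendicular drift decomposition discussed above.
# All inputs are synthetic placeholder values (SI units), not MMS data.
q_e = -1.602e-19                       # electron charge [C]
n = 20e6                               # electron density [m^-3] (~20 cm^-3)
B = np.array([0.0, 0.0, 20e-9])        # magnetic field [T] (20 nT)
E = np.array([2e-3, 0.0, 0.0])         # electric field [V/m] (2 mV/m)
grad_p = np.array([0.0, 3e-14, 0.0])   # perp. pressure gradient [Pa/m]

B2 = B @ B
v_exb = np.cross(E, B) / B2                     # E x B drift
v_dia = -np.cross(grad_p, B) / (q_e * n * B2)   # electron diamagnetic drift
print("E x B drift       [km/s]:", v_exb / 1e3)
print("diamagnetic drift [km/s]:", v_dia / 1e3)
```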
The essential point is that sub-ion scale electron pressure gradients and associated electron diamagnetic drift, represented by the second term on the right-hand side of (<ref>), may produce significant electron current density in the E × B frame even when the electrons are strongly magnetized <cit.>. We emphasize, however, that such diamagnetic drift should also be present at an asymmetric current sheet in which electrons exhibit meandering orbits from the high density magnetosheath to the low density magnetosphere. <cit.> examined the terms in the generalized Ohm's law for the <cit.> EDR event and found significant deviations of the perpendicular electron bulk velocity from c E× B/B^2 that were associated with the divergence of the full electron pressure tensor; however, they did not further separate the electron pressure tensor into its gyrotropic and non-gyrotropic components, so that an important question remains unanswered: Are the non-gyrotropic electron crescent distributions observed by <cit.> a manifestation of the electron diamagnetic drift of strongly magnetized electrons in a thin sub-ion scale current sheet? In what follows, we address this question using a new method we have developed to extract 7.5ms plasma moments from the MMS FPI data. The details of the technique for generating 7.5ms electron and 37.5ms ion moments from the raw data are outside the scientific scope of this paper and are available in the supplemental materials.
§ DATA AND RESULTS
In Figure <ref> (a)-(d) we show the four 7.5ms separated electron velocity distributions that constitute the 30ms FPI product (e). The complex crescent structure of the 30ms distribution, with alternating sections of high and low energy plasma, has been referred to colloquially as `fingers' of the crescent distribution. We observe that the energy and flux of the plasma increase from distributions (a) to (d), over a time interval less than 30 ms. The composite image shows that a simple image-stacking of the four distributions roughly recovers the 30 ms distribution, validating our intermediate distributions. Comparing Figure <ref> (e) the 30ms FPI product with (f) the intermediate distributions (a)-(d) allows us to conclude that the complex energy structure of the crescent is a result of time-aliasing the four evolving crescent distributions. <cit.> demonstrated that MMS encountered an electron diffusion region (EDR) on 2015-10-16. They identified the EDR based on several criteria: 1) a bipolar exhaust signature in the L component (in boundary normal coordinates) of the ion bulk velocity, 2) a strong perpendicular current in which the electron perpendicular bulk velocity differs significantly from E × B drift, 3) a strong depression in the magnetic field magnitude (suggesting a magnetic null in the reconnection plane), 4) strong parallel electron heating, 5) a strong electric field pointing outward along the current sheet normal, 6) crescent shaped electron velocity distributions, and 7) a strong J_⊥· E_⊥ signature in the electron rest frame. The presence of the electron crescent velocity distributions supporting J_⊥· E_⊥ > 0 is the most compelling evidence for proximity of MMS to the EDR, having been predicted by PIC simulations (e.g., <cit.>) as a feature of the flow stagnation region.
However, it is interesting to note that the onset of perpendicular crescents observed by MMS2 (at about 13:07:02.16 UT) is during the rising edge of J_M and earlier than the onset of large amplitude electric field fluctuations (at about 13:07:02.2 UT). Recently, <cit.> interpreted the large amplitude electric field fluctuations as nonlinear electrostatic whistler structures that can produce intense localized magnetic dissipation that drives magnetic reconnection at the boundary between closed and open magnetic field lines, where the perpendicular crescents are just beginning to transition to parallel crescents. Figure <ref> shows the electric field and electron pressure at 30ms and 7.5ms in an interval prior to entering the EDR where we expect the electrons to be E × B drifting (frozen-in). The perpendicular components of the electric field are shown in red (FPI at 30ms resolution), green (FPI at 7.5ms resolution) and black (EDP 8kHz data averaged to 7.5ms). It is clear that the FPI 7.5ms perpendicular bulk velocity recovers additional structure that can be explained by E × B drift. This improved agreement between E_⊥ and - V_e × B/c serves as validation of our 7.5ms moments algorithm. On the other hand, the electron parallel and perpendicular pressures (Figure <ref>, (d)-(e)) appear to have converged already at 30ms resolution – not much additional structure is recovered at 7.5ms. Figure <ref> shows that the deviation of the electron perpendicular bulk velocity from E × B drift coincides with the onset of electron diamagnetic drift. That is, in the region of non-gyrotropic perpendicular electron crescents prior to the region of large electric field fluctuations, the perpendicular gradient of the perpendicular electron pressure is making the dominant contribution to E_⊥ +V_e × B/c, as shown in Figure <ref> panels (b)-(d) and (f)-(h). This result suggests that the strong non-gyrotropy of the electron velocity distributions in this interval is a manifestation of the energy-dependent magnetic gradient drifts that – when integrated over velocity space – produce an electron pressure gradient contribution in equation (<ref>). Prior to the onset of J_M, there is an electron pressure gradient signal but no corresponding current density. This displacement in time between the pressure gradient signal and the J_M signal is a result of averaging the J_M measurements from the four observatories as they cross into the current sheet. The finite difference pressure gradient signal does not produce such a delay. During the interval of large electric field fluctuations beginning at 13:07:02.2 UT, the electron momentum equation does not appear to be satisfied at FPI 7.5ms resolution. Panel (j) shows that this discrepancy cannot be explained by the non-gyrotropic component of the electron pressure tensor. Possible explanations for the discrepancy include time variability on the scale of the FPI energy sweep and smoothing of spatial structures by the four spacecraft gradient operator.
§ DISCUSSION AND CONCLUSIONS
In summary, we have shown that the deviation of the electron bulk velocity from E × B drift observed between 13:07:01.199 UT and 13:07:02.180 UT can be explained by electron diamagnetic drift in an electron scale current sheet. In the region where the out-of-plane current is transitioning to in-plane current, the electron momentum equation is not satisfied at 7.5ms FPI resolution.
Since electron diamagnetic drift does not itself produce any magnetic energy dissipation, our result is consistent with the observation of J · E' ≈ 0 between 13:07:01.990 UT and 13:07:02.180 UT by <cit.>. The observation of perpendicular crescents preceding the region of J · E' > 0 suggests that the presence of perpendicular crescents alone does not imply magnetic energy dissipation. However, we emphasize that our results do not rule out the existence of meandering electron orbits and associated magnetic energy dissipation (as argued by <cit.>) or anomalous resistivity (as suggested by <cit.>) since the associated features in the electron velocity distribution may be very difficult to measure. Our results raise important questions about the nature of magnetic energy dissipation at the magnetopause. In steady laminar reconnection, with a reconnection rate of about 0.1 V_A B/c (where V_A is the Alfvén speed), the corresponding reconnection electric field is on the order of 0.1–1 mV/m. The observed electric field fluctuations in the interval where the electron momentum equation is not satisfied, however, are much larger than that, approaching 50–100 mV/m and varying over the 7.5 ms time scale of the FPI energy sweep. What role do these fluctuations play in changing the magnetic field topology and dissipating magnetic energy? What is their contribution to the global integrated reconnection rate? <cit.> has recently suggested that large amplitude electric field fluctuations over a region much more localized than that of the out-of-plane current density directly drive reconnection by producing localized J· E' at the boundary between open and closed magnetic field lines. Our results, demonstrating that the electrons are diamagnetically drifting in the region of perpendicular crescents prior to the onset of large amplitude electric field fluctuations, are consistent with this suggestion. However, it is also possible that the global reconnection rate is supported by electron meandering orbits interacting with a much smaller electric field on the order of 0.1 V_A B/c (the global integral of which gives the reconnection rate), as shown in two-dimensional PIC simulations (e.g., <cit.>). Although the observation of crescents by itself does not imply meandering orbits, crescent distributions observed in such close proximity to an electron-scale magnetic field reversal supports the idea of meandering orbits <cit.>. A third possibility is that turbulent fluctuations facilitate anomalous transport at the magnetopause. For example, there is evidence from three-dimensional PIC simulations of the <cit.> event that lower hybrid turbulence (driven by the diamagnetic drift) can lead to anomalous heating and transport of plasma from the sheath onto closed magnetic field lines <cit.>. <cit.> has suggested that violation of the Generalized Ohm's Law at 30 ms resolution might be evidence of such anomalous resistivity, and our 7.5 ms results have not eliminated this possibility. Further progress will require the development of new techniques that move beyond the calculation of velocity moments and extract information about phase space density and its velocity space gradients on time scales shorter than the FPI 7.5 ms energy sweep. This research was supported by the NASA Magnetospheric Multiscale Mission in association with NASA contract NNG04EB99C. IRAP contributions to MMS FPI were supported by CNES and CNRS. We thank the entire MMS team and instrument leads for data access and support.
The L2 data of MMS can be accessed from the MMS Science Data Center (https://lasp.colorado.edu/mms/sdc/public/).

[Bessho et al.(2016)] Bessho, N., et al. (2016), Electron distribution functions in the diffusion region of asymmetric magnetic reconnection, Geophysical Research Letters, 43(5), 1828–1836, 10.1002/2016GL067886.
[Burch et al.(2016a)] Burch, J. L., et al. (2016a), Magnetospheric multiscale overview and science objectives, Space Science Reviews, 199(1), 5–21, 10.1007/s11214-015-0164-9.
[Burch et al.(2016b)] Burch, J. L., et al. (2016b), Electron-scale measurements of magnetic reconnection in space, Science, 352(6290), 10.1126/science.aaf2939.
[Burch et al.(2017)] Burch, J. L., et al. (2017), Whistler driven magnetic reconnection at the dayside magnetopause, presented at 2017 Chapman Meeting, AGU, Chengdu, Sichuan Province, China.
[Chen et al.(2016)] Chen, L.-J., et al. (2016), Electron energization and mixing observed by MMS in the vicinity of an electron diffusion region during magnetopause reconnection, Geophysical Research Letters, 43(12), 6036–6043, 10.1002/2016GL069215.
[Egedal et al.(2016)] Egedal, J., et al. (2016), Spacecraft observations and analytic theory of crescent-shaped electron distributions in asymmetric magnetic reconnection, Phys. Rev. Lett., 117, 185101, 10.1103/PhysRevLett.117.185101.
[Hesse et al.(2011)] Hesse, M., et al. (2011), The diffusion region in collisionless magnetic reconnection, Space Sci. Rev., 160, 3–23.
[Hesse et al.(2014)] Hesse, M., et al. (2014), On the electron diffusion region in planar, asymmetric, systems, Geophysical Research Letters, 41(24), 8673–8680, 10.1002/2014GL061586.
[Hoffman and Bracken(1965)] Hoffman, R. A., and P. A. Bracken (1965), Magnetic effects of the quiet-time proton belt, Journal of Geophysical Research, 70(15), 3541–3556, 10.1029/JZ070i015p03541.
[Jain and Sharma(2009)] Jain, N., and A. S. Sharma (2009), Electron scale structures in collisionless magnetic reconnection, Physics of Plasmas, 16(5), 050704, 10.1063/1.3134045.
[Le et al.(2017)] Le, A., W. Daughton, L.-J. Chen, and J. Egedal (2017), Enhanced electron mixing and heating in 3-D asymmetric reconnection at the Earth's magnetopause, Geophysical Research Letters, 44(5), 2096–2104, 10.1002/2017GL072522.
[Pollock et al.(2016)] Pollock, C., et al. (2016), Fast plasma investigation for Magnetospheric Multiscale, Space Science Reviews, 199(1), 331–406, 10.1007/s11214-016-0245-4.
[Shay et al.(2007)] Shay, M. A., et al. (2007), Two-scale structure of the electron dissipation region during collisionless magnetic reconnection, Phys. Rev. Lett., 99, 155002, 10.1103/PhysRevLett.99.155002.
[Shay et al.(2016)] Shay, M. A., et al. (2016), Kinetic signatures of the region surrounding the X line in asymmetric (magnetopause) reconnection, Geophysical Research Letters, 43(9), 4145–4154, 10.1002/2016GL069034.
[Speiser(1965)] Speiser, T. W. (1965), Particle trajectories in model current sheets: 1. Analytical solutions, Journal of Geophysical Research, 70(17), 4219–4226, 10.1029/JZ070i017p04219.
[Torbert et al.(2016)] Torbert, R. B., et al. (2016), Estimates of terms in Ohm's law during an encounter with an electron diffusion region, Geophysical Research Letters, 43(12), 5918–5925, 10.1002/2016GL069553.
http://arxiv.org/abs/1706.08435v3
{ "authors": [ "A. C. Rager", "J. C. Dorelli", "D. J. Gershman", "V. Uritsky", "L. A. Avanov", "R. B. Torbert", "J. L. Burch", "R. E. Ergun", "J. Egedal", "C. Schiff", "J. R. Shuster", "B. L. Giles", "W. R. Paterson", "C. J. Pollock", "R. J. Strangeway", "C. T. Russell", "B. Lavraud", "V. N Coffey", "Y. Saito" ], "categories": [ "physics.space-ph" ], "primary_category": "physics.space-ph", "published": "20170626152038", "title": "Electron crescent distributions as a manifestation of diamagnetic drift in an electron scale current sheet" }
http://arxiv.org/abs/1707.02858v1
{ "authors": [ "Ralf Hofmann" ], "categories": [ "physics.gen-ph" ], "primary_category": "physics.gen-ph", "published": "20170627084147", "title": "The isolated, uniformly moving electron: Selfintersecting SU(2) Yang-Mills center vortex loop and Louis de Broglie's hidden thermodynamics" }
Institute of Physics, Academia Sinica, Taipei 11529, Taiwan

Supernovae and Weinberg's Higgs Portal Dark Radiation and Dark Matter

Kin-Wang Ng ([email protected])

December 30, 2023

The observed burst duration and energies of the neutrinos from Supernova 1987A strongly limit the possibility of any weakly-interacting light particle species being produced in the proto-neutron star (PNS) core and leading to efficient energy loss. We reexamine this constraint on Weinberg's Higgs portal model, in which the dark radiation particles (the Goldstone bosons) and the dark matter candidate (a Majorana fermion) interact with Standard Model (SM) fields solely through the mixing of the SM Higgs boson and a light Higgs boson. In order for the Goldstone bosons to freely stream out of the PNS core region, the Higgs portal coupling has to be about a factor of 4–9 smaller than the current collider bound inferred from the SM Higgs invisible decay width. We find that in the energy loss rate calculations, results obtained by using the one-pion exchange (OPE) approximation and the SP07 global fits for the nucleon-nucleon total elastic cross section differ only by a factor ≲ 3. The SN 1987A constraints surpass those set by laboratory experiments or by the energy loss arguments in other astrophysical objects such as the gamma-ray bursts, even with other nuclear uncertainties taken into account. Furthermore, the SN 1987A constraints are comparable to bounds from the latest dark matter direct search for low-mass WIMPs (≲ 10.)

§ INTRODUCTION
SN 1987A was a type II supernova discovered on February 24, 1987 by Shelton, Duhalde and Jones. The progenitor star was Sanduleak -69^∘ 202, a blue supergiant in the Large Magellanic Cloud. Thanks to its proximity of about 51 kpc to the Earth, neutrino burst events from the core collapse of the progenitor star could be recorded at the underground laboratories Irvine-Michigan-Brookhaven (IMB), Kamiokande II, and Baksan separately <cit.>. The observed burst duration of about 12 seconds, individual energies up to 40, as well as the integrated total energy of 𝒪 (10^53  erg), confirmed the standard picture of neutrino cooling of the proto-neutron star (PNS) <cit.>. A proto-neutron star is formed when the collapsing stellar core of the progenitor star reaches nuclear saturation density. Being initially hot and lepton rich, the PNS keeps contracting as it cools and deleptonises, to become a neutron star as the final supernova remnant. See Refs. <cit.> for the PNS structure and the evolution, and Ref. <cit.> for the most recent review on neutrino emission from supernovae. Emission of light exotic particles in nuclear interactions in the PNS core has been considered exhaustively in the literature, notably the axions <cit.>, right-handed neutrinos <cit.>, Kaluza-Klein gravitons <cit.>, Kaluza-Klein dilatons <cit.>, unparticles <cit.>, dark photons <cit.>, dark matter <cit.>, dilaton <cit.>, saxion <cit.>, etc. Simulations of PNS in the neutrino-emitting phase were done in Refs. <cit.> for the axion, and in Ref. <cit.> for the KK-gravitons. By comparing the predicted neutrino burst signals with the SN 1987A observations, very stringent constraints were obtained on the properties of the exotic particles.
For a quick comparison without invoking simulations, Raffelt has derived a bound on the emissivity of light exotic particles based on the argument that they should not affect the total cooling time significantly <cit.>. In this work we shall reexamine the SN 1987A constraints on Weinberg's Higgs portal model <cit.>, which was proposed to account for the dark radiation in the early universe. The effect of the dark radiation on the cosmic microwave background (CMB) data is parametrised as the contribution to the effective number of light neutrino species N_ eff. The conflict between the value of the Hubble constant H_0 from the Planck CMB data and local determination may be remedied by assuming an addition of Δ N_ eff = 0.4–1 to the standard value of N_ν = 3.046 by the dark radiation component <cit.> (see, however, also Ref. <cit.>.) In this model, Weinberg considered a global U (1) continuous symmetry associated with the conservation of some quantum number, and introduced a complex scalar field to break it spontaneously. The radial field of the complex scalar field acquires a vacuum expectation value (vev), and mixes with the Standard Model (SM) Higgs field. The Goldstone bosons arising from the symmetry breaking would be massless, and their characteristic derivative coupling would make them very weakly-interacting at sufficiently low temperatures. The latter property is crucial, since the Goldstone bosons must decouple from the early universe thermal bath at the right moment so that their temperature is a fraction of that of the neutrinos (see e.g. Ref. <cit.>.) Collider phenomenology of Weinberg's Higgs portal model has been investigated in Refs. <cit.>. Weinberg has also extended this minimal set-up to include a Majorana fermion as a Weakly-Interacting Massive Particle (WIMP) dark matter candidate. Ref. <cit.> has shown that results of the dark matter direct search experiments LUX <cit.> provide very strong constraints, which are slightly strengthened by the XENON1T experiment <cit.> very recently. Previously we have examined energy losses due to the emission of Weinberg's Goldstone bosons in a post-collapse supernova core <cit.> in the limit of large radial field mass. Subsequently we scrutinised the production and propagation of Weinberg's Goldstone bosons in the initial fireballs of gamma-ray bursts for more general cases <cit.>. In this work we extend our previous analysis and consider in greater detail Goldstone boson production by nuclear bremsstrahlung processes in the proto-neutron star core of SN 1987A. In Sec. <ref> we briefly review Weinberg's Higgs portal model for dark radiation and dark matter. In Sec. <ref> we calculate the energy loss rate due to Goldstone boson emission by two methods, i.e. using the one-pion exchange approximation and using experimental data of low-energy nucleon collisions. In Sec. <ref> we estimate the mean free path of the Goldstone bosons as a function of their emission energies, and determine the free-streaming requirements. Our results in these two sections are then used in Sec. <ref> to derive supernova constraints on Weinberg's Higgs portal model by invoking Raffelt's criterion. We then confront our SN 1987A constraints with those from accelerator experiments, gamma-ray burst observations, and dark matter direct search experiments. In Sec. <ref> we summarise our work.
§ WEINBERG'S HIGGS PORTAL MODEL
In this section we briefly summarise Weinberg's model <cit.> following the convention of Refs. <cit.>.
Consider the simplest possible broken continuous symmetry, a global U (1) symmetry associated with the conservation of some quantum number W. A single complex scalar field S (x) is introduced for breaking this symmetry spontaneously. With this field added to the Standard Model (SM), the Lagrangian is ℒ = (∂_μ S^†) (∂^μ S ) + μ^2 S^† S - λ (S^† S)^2 - g (S^† S) (Φ^†Φ) + ℒ_ SM, where Φ is the SM Higgs doublet, μ^2, g, and λ are real constants, and ℒ_ SM is the usual SM Lagrangian. One separates a massless Goldstone boson field α (x) and a massive radial field r (x) in S (x) by defining S (x) = 1/√(2) ( + r (x) ) e^2 iα(x), where the fields α (x) and r (x) are real. In the unitary gauge, one sets Φ^ T = (0, + φ (x) )/√(2) where φ (x) is the physical Higgs field. The Lagrangian in Eq. (<ref>) thus becomes ℒ = 1/2(∂_μ r ) (∂^μ r ) + 1/2 ( + r )^2/^2 (∂_μα) (∂^μα) + μ^2/2( + r )^2 - λ/4( + r )^4 - g/4( + r )^2 ( + φ)^2 + ℒ_ SM, where the replacement α(x) →α(x) / ( 2 ) was made in order to achieve a canonical kinetic term for the α(x) field. The two fields φ and r mix due to the g (S^† S) (Φ^†Φ) term, with their mixing angle given by tan 2 θ = 2 g /(m^2_H - m^2_h), where m_H and m_h are the masses of the two resulting physical Higgs bosons H and h, respectively. The heavier one is identified with the SM Higgs boson with m_H = 125 GeV, while the lighter one is assumed to have a mass in the range of MeV to hundreds of MeV. In this model, the interaction of the Goldstone bosons with the SM fields arises entirely through the SM Higgs boson in the mixing of the φ and r fields. The light Higgs boson h decays dominantly to a pair of Goldstone bosons, with the decay width given by Γ_h = 1/32 π m^3_h/^2. When kinematically allowed, there is also a probability for h decaying into a pair of SM fermions as well as a pair of pions <cit.>. The Higgs effective coupling to nucleons, f_N m_N /≡ g_N N H, has been calculated for the purpose of investigating the sensitivities of the dark matter direct detection experiments <cit.>. Ref. <cit.> found g_N N H = 0.0011, which corresponds to f_N ≃ 0.288. It was pointed out in Ref. <cit.> that the effective Higgs-nucleon coupling has a wide range of values, 0.0011 ≤ g_N N H≤ 0.0032, due to uncertainties in the pion-nucleon sigma term. The authors of Ref. <cit.> have done a statistical analysis to infer the value of f_N from more up-to-date lattice evaluations of the nucleon matrix elements. By exploiting two possible statistical distributions for the strangeness matrix element, they found f_N = 0.3 ± 0.03 and f_N = 0.3 ± 0.01 at the 68% confidence level, respectively. This model is also extended to include a dark matter candidate by adding one Dirac field ℒ_ψ = i ψ̅γ·∂ψ - m_ψψ̅ψ - f_χ/√(2)ψ̅^c ψ S^† - f^∗_χ/√(2)ψ̅ψ^c S, and assigning a charge U (1)_W = 1 for it. One expresses the field as ψ (x) = ψ^' (x) e^i α (x), and expands the Lagrangian after the radial field achieves a vev (for details see Ref. <cit.>.) Diagonalising the ψ^' mass matrix generates the mass eigenvalues m_± = m_ψ± f_χ, for the two mass eigenstates ψ_±, which are Majorana fermions. The Lagrangian is now ℒ_ψ = i/2ψ̅_±γ·∂ψ_± - 1/2 m_±ψ̅_±ψ_± - i/4 (ψ̅_+ γψ_- - ψ̅_- γψ_+ ) ·∂α - f_χ/2 r (ψ̅_+ ψ_+ - ψ̅_- ψ_- ), and one needs to use the massive representation r = cosθ h + sinθ H for the interactions of ψ_±. The heavier fermion decays into the lighter fermion by emitting a Goldstone boson, while the lighter one is stable due to unbroken reflection symmetry. The latter can thus play the role of the WIMP dark matter, with mass m_- ≡ m_χ in the range of GeV to TeV.
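For orientation, the tree-level relations collected above can be evaluated at an illustrative parameter point. In the sketch below the light Higgs mass, the radial-field vev and the portal coupling are assumed values (not fits); the mixing-angle expression uses tan 2θ = 2 g ⟨r⟩⟨φ⟩/(m_H^2 - m_h^2) with ⟨φ⟩ = 246 GeV, i.e. with the vacuum expectation values filling the symbols elided in the formulas above, and the quartic self-coupling of S is taken from the perturbativity relation quoted in the next paragraph.

```python
import numpy as np

# Illustrative parameter point for Weinberg's Higgs portal model (assumed
# values, not fits).  Formulas follow the tree-level relations quoted in the
# text, with <r> and <phi> filling the elided vev symbols.
m_H, vev_phi = 125.0, 246.0   # SM Higgs mass and vev [GeV]
m_h = 0.1                     # light Higgs mass [GeV] (assumed: 100 MeV)
vev_r = 10.0                  # radial-field vev <r> [GeV] (assumed)
g = 0.005                     # Higgs portal coupling (below the 0.011 bound)

gamma_h = m_h**3 / (32.0 * np.pi * vev_r**2)   # h -> two Goldstone bosons [GeV]
theta = 0.5 * np.arctan(2.0 * g * vev_r * vev_phi / (m_H**2 - m_h**2))
lam = m_h**2 / vev_r**2                        # quartic coupling, must be <= 4*pi
hbar = 6.582e-25                               # GeV s
print(f"Gamma_h = {gamma_h:.2e} GeV  (lifetime ~ {hbar / gamma_h:.1e} s)")
print(f"theta   = {theta:.2e} rad,  lambda = {lam:.1e}  (4*pi = {4 * np.pi:.1f})")
```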
Its relic density has been calculated in Ref. <cit.>. The model parameters in the minimal set-up are m_h, g, and ⟨r⟩; the extended version includes m_χ and f_χ in addition. From the SM Higgs invisible decay width, a collider bound on the Higgs portal coupling, g < 0.011, has been derived in Ref. <cit.>. In the future, the International Linear Collider (ILC) may, in the best scenarios, reach the sensitivity of constraining the branching ratio of SM Higgs invisible decays to < 0.4–0.9% <cit.>. If this can be realised, the collider bound on the Goldstone boson coupling will be improved by a factor of 5 ∼ 7, since the invisible branching ratio scales as g^2, so the bound on g improves as the square root of the branching-ratio limit. Experimental limits on meson invisible decay widths have also been turned into constraints on the φ-r mixing angle in Ref. <cit.>, which we list in Sec. <ref>. There is also the perturbativity condition, which requires for the quartic self-coupling of the S field λ = m^2_h/⟨r⟩^2 ≤ 4π. In Weinberg's Higgs portal model including the dark matter candidate, exclusion limits on the WIMP-nucleon elastic cross section set by the null results of the direct search experiments have been found to put very strong bounds on the mixing angle in Ref. <cit.>. § GOLDSTONE BOSON PRODUCTION IN THE PROTO-NEUTRON STAR CORE In the PNS core, the dominant Goldstone boson production channel is the nuclear bremsstrahlung process NN → NNαα. Low-energy nuclear interactions have been studied quite thoroughly by various experiments, while their theoretical calculation remains a difficult task. Taketani, Nakamura and Sasaki <cit.> suggested dividing the nuclear forces into three regions: a classical (long-range), a dynamical (intermediate-range), and a phenomenological or core (short-range) region. In the classical region, the one-pion exchange (OPE) dominates the longest-range part of the potential. In the intermediate range the two-pion exchange (TPE) is most important, where heavier mesons may also become relevant. In the short-range region, multi-pion exchanges, heavy mesons, and quark-gluon exchanges are expected to be responsible. At present, NN potentials calculated using chiral effective field theory to the fifth order (N^4LO) <cit.> and the sixth order (N^5LO) <cit.> are available, which reproduce the experimental data to outstanding precision. See e.g. Refs. <cit.> for reviews on nucleon-nucleon interactions. As for nuclear bremsstrahlung processes, in Refs. <cit.> neutrino pair production in core-collapse supernovae was studied using chiral effective field theory to the fourth order (N^3LO). It was found that shorter-range noncentral forces significantly reduce the neutrino rates compared to the one-pion exchange (OPE) approximation <cit.>, which was typically used in supernova simulations or in deriving supernova bounds on exotic particles. More recently, Ref. <cit.> went beyond the OPE approach and used the T-matrix-based formalism of Ref. <cit.> in supernova simulations. The approach of using phase shift data to fix the on-shell NN scattering amplitudes and making the soft-radiation approximation was already taken in Ref. <cit.> much earlier. It was found therein that the resultant rates are roughly a factor of four below earlier estimates based on an OPE NN amplitude. In this section we make the same comparison in Weinberg's Higgs portal model. §.§ Energy loss rate using the one-pion exchange approximation The OPE contribution to the nuclear forces takes care of the long-range interactions and the tensor force.
From the Lagrangian describing the neutral pion coupling to nucleons, ℒ_{π^0 NN} = -g_{π^0} ψ̄ iγ^5 τ_3 ψ φ_{π^0}, where N = n, p, the potential is V_OPE(k⃗) = -(f_π/m_π)^2 (σ⃗_1·k⃗)(σ⃗_2·k⃗)/(|k⃗|^2 + m^2_π) (τ⃗_1·τ⃗_2), with k⃗ the momentum exchange, and σ⃗_i and τ⃗_i the spin and isospin operators of the incoming nucleons, respectively. The neutral pion-nucleon coupling constant is g^2_{π^0}/4π = (2m_N f_π/m_π)^2/(4π) ≈ 14 <cit.>, with f_π ≈ 1. In the one-pion exchange (OPE) approximation (see e.g. Ref. <cit.>), there are four direct and four exchange diagrams, corresponding to the Goldstone boson pair being emitted from any one of the nucleon legs. Summing all diagrams and expanding in powers of (T/m_N), the amplitude for the nuclear bremsstrahlung process N(p_1)N(p_2) → N(p_3)N(p_4)α(q_1)α(q_2) is <cit.> ∑_spins |ℳ^OPE_{NN→NNαα}|^2 ≈ 64 (f_N g m_N/m^2_H)^2 (2m_N f_π/m_π)^4 (q_1·q_2)^2/[(q^2 - m^2_h)^2 + m^2_h Γ^2_h] · (-2q^2)^2 m^2_N/(2p·q)^4 {|k⃗|^4/(|k⃗|^2 + m^2_π)^2 + |l⃗|^4/(|l⃗|^2 + m^2_π)^2 + [|k⃗|^2|l⃗|^2 - 2|k⃗·l⃗|^2]/[(|k⃗|^2 + m^2_π)(|l⃗|^2 + m^2_π)] + ...}, where q ≡ q_1 + q_2, and k ≡ p_2 - p_4 and l ≡ p_2 - p_3 are the 4-momenta of the exchanged pion in the direct and the exchange diagrams, respectively. In addition, Goldstone boson pairs can be emitted from the exchanged pion due to an effective Higgs-pion coupling. The amplitude for this process is ∑_spins |ℳ^OPE(pion)_{NN→NNαα}|^2 ≈ 4 (g/m^2_H)^2 (2m_N f_π/m_π)^4 (q_1·q_2)^2/[(q^2 - m^2_h)^2 + m^2_h Γ^2_h] (2/9)^2 · (q^2 + (11/2) m^2_π)^2 {k^2_1 k^2_2/[(k^2_1 - m^2_π)^2 (k^2_2 - m^2_π)^2] + l^2_1 l^2_2/[(l^2_1 - m^2_π)^2 (l^2_2 - m^2_π)^2] + [(k_1·k_2)(l_1·l_2) + ...]/[(k^2_1 - m^2_π)(k^2_2 - m^2_π)(l^2_1 - m^2_π)(l^2_2 - m^2_π)]}, where k_1 ≡ p_1 - p_3, k_2 ≡ p_2 - p_4, l_1 ≡ p_1 - p_4, and l_2 ≡ p_2 - p_3, with k_1 + k_2 = l_1 + l_2 = q. However, since q^2 ≈ m^2_h while k^2_1 ≃ -|k⃗_1|^2, and similarly for k^2_2, l^2_1, and l^2_2, this contribution is subdominant. The volume energy loss rate is Q_{NN→NNαα} = (𝒮/2!) ∫ d^3q⃗_1/[2ω_1(2π)^3] d^3q⃗_2/[2ω_2(2π)^3] ∫ ∏^4_{i=1} d^3p⃗_i/[2E_i(2π)^3] f_1 f_2 (1-f_3)(1-f_4) × ∑_spins |ℳ_{NN→NNαα}|^2 (2π)^4 δ^4(p_1 + p_2 - p_3 - p_4 - q_1 - q_2)(ω_1 + ω_2), where ω_1, ω_2 are the energies of the Goldstone bosons in the final state. The symmetry factor 𝒮 is 1/4 for nn and pp interactions, whereas for np interactions it is 1. The nucleon occupation numbers are f_i = 1/(e^{(E_i - μ_N)/T} + 1), where in the non-relativistic limit the nucleon energies are E_i ≃ m_N + |p⃗_i|^2/2m_N + U_N. Here μ_N is the chemical potential of the nucleon, and U_N is the mean-field single-particle potential in which the nucleons move. In Ref. <cit.> it was pointed out that, due to the extreme neutron-rich conditions in the PNS core, the mean-field potentials for neutrons and protons can differ significantly, with the difference directly related to the nuclear symmetry energy (see e.g. Refs. <cit.> for recent reviews). A non-zero U_n - U_p was found therein to have a strong impact on the spectra and luminosities of the supernova-emitted neutrinos. In any case the nucleon occupation numbers are normalised to the nucleon number density, n_N = X_N n_B = ∫^∞_0 2 d^3p⃗_i/(2π)^3 f_i(p⃗_i), where X_N, with N = n, p, is the neutron or the proton fraction, respectively.
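To get a feeling for the conditions entering these occupation numbers, the following Python sketch (ours) evaluates the baryon density, the neutron Fermi momentum p_F(n) = (3π^2 n_n)^{1/3}, and the ratio of Fermi energy to temperature that controls the degeneracy, at the typical PNS core conditions quoted later in the text:

```python
import math

# Typical PNS core conditions used in the text (Raffelt's criterion point):
rho_cgs = 3.0e14          # baryon mass density in g/cm^3
T_mev   = 30.0            # core temperature in MeV
X_n     = 1.0             # neutron fraction
m_N     = 939.0           # nucleon mass in MeV
hbarc   = 197.327         # MeV*fm
m_N_g   = 1.6749e-24      # nucleon mass in grams

n_B = rho_cgs / m_N_g * 1.0e-39          # cm^-3 -> fm^-3
n_n = X_n * n_B
p_F = hbarc * (3.0 * math.pi**2 * n_n) ** (1.0 / 3.0)   # MeV
E_F = p_F**2 / (2.0 * m_N)                               # MeV, non-relativistic

print(f"n_B   = {n_B:.3f} fm^-3")        # ~0.18 fm^-3, nuclear saturation
print(f"p_F   = {p_F:.0f} MeV")          # ~340 MeV
print(f"E_F/T = {E_F / T_mev:.1f}")      # O(2): only partially degenerate
```

The O(1) value of E_F/T explains why both the non-degenerate and degenerate limits are computed and compared below.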
The relative abundances of the neutrons, protons, electrons, and neutrinos in the PNS core are determined by the conditions of kinetic and chemical equilibrium, as well as charge neutrality. Therefore the neutron fraction X_n parametrises the underlying nuclear equation of state and indicates the level of neutron degeneracy. We perform the integral over the Goldstone boson momenta first, ∫ d^3q⃗_1/ω_1 d^3q⃗_2/ω_2 (q_1·q_2)^2/[(q^2 - m^2_h)^2 + m^2_h Γ^2_h] (2q^2)^2/(2p·q)^4 ω = [2(2π)^2/m^4_N] ∫^∞_0 dω ω^4 I_1(ω, m_h, ⟨r⟩), where ω = ω_1 + ω_2. The dimensionless integral is defined by I_1(ω, m_h, ⟨r⟩) ≡ ∫^1_0 dω̃ ∫^{+1}_{-1} dcosθ ω̃^5 (1-ω̃)^5 (1-cosθ)^4/{[2ω̃(1-ω̃)(1-cosθ) - m^2_h/ω^2]^2 + m^2_h Γ^2_h/ω^4}, with ω̃ ≡ ω_1/ω, and θ the angle between the two emitted Goldstone bosons. As the integral over the nucleon momenta in Eq. (<ref>) is not easy to evaluate, we follow the conventional approach of taking the non-degenerate and the degenerate limit in the following. As we will show, the energy loss rates due to Goldstone boson emission calculated in these two limits have distinct dependences on the PNS core temperature T and the neutron fraction X_n therein. §.§.§ Non-degenerate limit The initial-state nucleon occupation numbers are given by the non-relativistic Maxwell-Boltzmann distribution f_i(p⃗_i) = (n_N/2)(2π/m_N T)^{3/2} e^{-|p⃗_i|^2/2m_N T}. The integration is simplified by introducing the centre-of-mass momenta, so that p⃗_{1,2} = P⃗ ± p⃗_i and p⃗_{3,4} = P⃗ ± p⃗_f. The d^3P⃗ integral can be performed separately. The energy loss rate in the non-degenerate limit is then Q^OPE(ND)_{NN→NNαα} = [𝒮√π/(2π)^6] (3 - 2β/3) I_0 n^2_N (f_N g m_N/m^2_H)^2 (2m_N f_π/m_π)^4 · T^{5.5}/m^{4.5}_N. Here we have defined the integral I_0 by I_0(T, m_h, ⟨r⟩) ≡ ∫ du dv dx x^4 I_1(x, T, m_h, ⟨r⟩) √(uv) e^{-u} δ(u - v - x), with u ≡ |p⃗_i|^2/m_N T, v ≡ |p⃗_f|^2/m_N T, and x ≡ ω/T. The β term is β ≡ (3/I_0) ∫ du dv dx x^4 I_1(x, m_h, ⟨r⟩) √(uv) e^{-u} δ(u - v - x) ∫^{+1}_{-1} (dz/2) |k⃗·l⃗|^2/(|k⃗|^2 |l⃗|^2), where z ≡ (p⃗_i·p⃗_f)/(|p⃗_i||p⃗_f|) is the cosine of the angle between p⃗_i and p⃗_f. In the resonance region, one can make use of the limit of the Poisson kernel, lim_{ϵ→0} (1/π) ϵ/(a^2 + ϵ^2) = δ(a), and obtain I^Pk_1(ω, m_h, ⟨r⟩) ≈ (π/32) m^7_h/(Γ_h ω^6). Since this approximation is valid when m^2_h/ω^2 ≈ 2ω̃(1-ω̃)(1-cosθ), where the latter is ≤ 1, it is only applicable for ω ≥ m_h and Γ_h ≪ ω. We have checked that, for m_h = 500 MeV and ⟨r⟩ = 10 GeV, this approximation still works well. This is equivalent to considering the production of a real light Higgs boson h, for which Q^OPE(ND)_{NN→NNh} = [𝒮√π/(4π^4)] (3 - 2β/3) n^2_N (f_N g m_N ⟨r⟩/m^2_H)^2 (2m_N f_π/m_π)^4 m^4_h/(m^{9/2}_N T^{1/2}) × ∫^∞_{m_h/T} dx [√(x^2 - m^2_h/T^2)/x^3] ∫^∞_0 du dv √(uv) e^{-u} δ(u - v - x). And indeed we find that for m_h ≲ 500 MeV, Q^Pk_{NN→NNαα} ≈ Q_{NN→NNh} × ℬ(h→αα), with ℬ(h→αα) = Γ_{h→αα}/Γ_h the branching ratio of the light Higgs boson h decaying into a pair of Goldstone bosons. Thus we find that in the parameter range considered in this work, Goldstone boson production in the PNS core is dominated by the production of a real light Higgs boson h and its subsequent decay. This is a very distinct feature from the nuclear bremsstrahlung emission of a massless scalar, e.g. the dilaton <cit.>, or of a massive stable scalar such as the saxion <cit.>. §.§.§ Degenerate limit We calculate the energy loss rate due to Goldstone boson emission, Eq. (<ref>), in the degenerate limit following Ref. <cit.>. The integral over the Goldstone boson momenta is done first, as in Eq. (<ref>) and Eq. (<ref>).
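As a concrete check of the Poisson-kernel step above, the following Python sketch (ours) compares a direct quadrature of I_1 with the narrow-width formula (π/32)m_h^7/(Γ_h ω^6); the value ⟨r⟩ = 500 MeV is an illustrative choice made so that a modest grid resolves the resonance:

```python
import math
import numpy as np
from scipy.integrate import quad

def I1_direct(omega, m_h, vev, n_wt=400):
    """I_1 by quadrature, with the inner integral taken over u = 1 - cos(theta)."""
    Gamma = m_h**3 / (32.0 * math.pi * vev**2)
    B = m_h**2 / omega**2
    E = m_h * Gamma / omega**2
    total = 0.0
    for wt in (np.arange(n_wt) + 0.5) / n_wt:     # midpoint grid in omega-tilde
        A = 2.0 * wt * (1.0 - wt)
        peak = B / A                               # resonance location in u
        pts = [peak] if 0.0 < peak < 2.0 else None
        val, _ = quad(lambda u: u**4 / ((A * u - B)**2 + E**2),
                      0.0, 2.0, points=pts, limit=200)
        total += wt**5 * (1.0 - wt)**5 * val / n_wt
    return total

def I1_narrow(omega, m_h, vev):
    Gamma = m_h**3 / (32.0 * math.pi * vev**2)
    return (math.pi / 32.0) * m_h**7 / (Gamma * omega**6)

m_h, vev = 500.0, 500.0       # MeV (illustrative)
for omega in (800.0, 1600.0, 4000.0):
    d, p = I1_direct(omega, m_h, vev), I1_narrow(omega, m_h, vev)
    print(f"omega = {omega:6.0f} MeV: ratio direct/narrow = {d/p:.3f}")
    # ratio -> 1 as omega/m_h grows, as the resonance surface fills the domain
```

We now continue with the degenerate limit.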
In the degenerate limit, the nucleon momentum integrals are simplified by d^3p⃗_i = |p⃗_i|^2 d|p⃗_i| dΩ_i ≈ p_F(n) m_N dE_i dΩ_i. The neutron Fermi momentum is p_F(n) = (3π^2 n_n)^{1/3}, with the neutron number density n_n = X_n ρ/m_N given by Eq. (<ref>). One then performs the angular integral ⟨F_NN⟩ ≡ [(4π)^2/A] ∫ ∏^4_{i=1} dΩ_i δ^3(p⃗_1 + p⃗_2 - p⃗_3 - p⃗_4) × {|k⃗|^4/(|k⃗|^2 + m^2_π)^2 + |l⃗|^4/(|l⃗|^2 + m^2_π)^2 + [|k⃗|^2|l⃗|^2 - 2|k⃗·l⃗|^2]/[(|k⃗|^2 + m^2_π)(|l⃗|^2 + m^2_π)] + ...} = 3 - 5x tan^{-1}(1/x) + x^2/(1 + x^2) + [x^2/√(1 + 2x^2)] tan^{-1}(√(1 + 2x^2)/x^2), with A = (4π)^5/2p^3_F(n), and x ≡ m_π/2p_F(n). The level of nucleon degeneracy is characterised by the |k⃗·l⃗|^2 term; in the case of strong degeneracy, one sets |k⃗·l⃗|^2 = 0. Note also that in the degenerate limit, the pion mass terms m^2_π in the braces cannot be neglected. Finally, performing the integral over the nucleon energies yields ∫ ∏^4_{i=1} dE_i f_1 f_2 (1-f_3)(1-f_4) δ(E_1 + E_2 - E_3 - E_4 - ω) = T^3 J_D(y), with y ≡ ω/T, and J_D(y) = -(1/6)(y^3 + 4π^2 y)(1 - e^y)^{-1}. The energy loss rate in the degenerate limit is then Q^OPE(D)_{NN→NNαα} = [4𝒮/(2π)^9] ⟨F_NN⟩ I_D (f_N g m_N/m^2_H)^2 (2m_N f_π/m_π)^4 p_F(n) T^8/m^2_N, with the function I_D given by I_D(m_h, ⟨r⟩) ≡ ∫^∞_0 dy y^4 I_1(y, m_h, ⟨r⟩) J_D(y). We evaluate I_D numerically using the VEGAS subroutine, both directly and using the Poisson-kernel limit, Eq. (<ref>). Here we have also checked that Goldstone boson production is well described by the production of a real light Higgs boson and its subsequent decay. We compare the results in the two limits at the nuclear saturation density ρ = 3·10^{14} g/cm^3. In Fig. <ref> the comparison is made at the PNS core temperature T = 30 MeV and neutron fractions X_n = 1 and 0.7. The energy loss rates calculated in the two limits have different dependences on X_n: Q^(ND)_{NN→NNαα} ∝ X^2_n, and Q^(D)_{NN→NNαα} ∝ X^{1/3}_n. In Fig. <ref> the comparison is made at two different PNS core temperatures, T = 30 MeV and 20 MeV. It has been pointed out that, in the case of a mixture of neutrons and protons, in the degenerate limit the energy loss rate for np → np dominates those for nn → nn and pp → pp, for all proton fraction Y_p values. In Ref. <cit.> the axion emission rate was evaluated numerically for arbitrary neutron degeneracies. It was found therein that the non-degenerate, analytical rate is a very good approximation. More recently, neutrino processes in the post-collapse supernova core were studied in the partially-degenerate regime in Ref. <cit.>. In this work we consider nn interactions with X_n = 1 in the non-degenerate limit. §.§ Energy loss rate using phase shift data One can also use the experimentally measured cross sections for NN elastic scattering to obtain amplitude estimates for the nuclear bremsstrahlung processes. Many independent observables are available from the nucleon-nucleon elastic scattering data collected by the EDDA experiment at the Cooler Synchrotron (COSY) in Jülich <cit.>, and from experiments at the SATURNE II accelerator at Saclay, at PSI, Ohio University, JINR, TSL in Uppsala, TUNL, etc. (see e.g. Ref. <cit.>). In NN interactions, the values of the total spin S⃗ and total angular momentum J⃗ = L⃗ + S⃗ are conserved, but the orbital angular momentum L⃗ may change because of the tensor force. Therefore for S = 1, the partial wave states ℓ_< = |J - 1| and ℓ_> = J + 1 can couple to each other. In this case the scattering S-matrix has a 2 × 2 matrix structure, parametrised by the mixing angle ϵ_J.
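The closed-form angular factor and the energy integrand above are straightforward to evaluate; a minimal sketch (ours), using the saturation-density Fermi momentum computed earlier:

```python
import math

def F_NN(x):
    """Angular factor <F_NN>(x), x = m_pi / (2 p_F), as given in the text."""
    return (3.0 - 5.0 * x * math.atan(1.0 / x)
            + x**2 / (1.0 + x**2)
            + x**2 / math.sqrt(1.0 + 2.0 * x**2)
              * math.atan(math.sqrt(1.0 + 2.0 * x**2) / x**2))

def J_D(y):
    """J_D(y) = -(1/6)(y^3 + 4 pi^2 y)(1 - e^y)^{-1}; positive for y > 0."""
    return -(y**3 + 4.0 * math.pi**2 * y) / (6.0 * (1.0 - math.exp(y)))

p_F, m_pi = 344.0, 135.0      # MeV, X_n = 1 at saturation density
x = m_pi / (2.0 * p_F)
print(f"x = {x:.3f}, <F_NN> = {F_NN(x):.3f}")   # an O(1) number
print(f"J_D(1) = {J_D(1.0):.3f}, J_D(5) = {J_D(5.0):.3f}")
```

Returning to the S-matrix parametrisation introduced above: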
The diagonal elements are given by e^{2iδ_{ℓ_<}} cos 2ϵ_J and e^{2iδ_{ℓ_>}} cos 2ϵ_J, respectively, and the off-diagonal elements are both i e^{i(δ_{ℓ_<} + δ_{ℓ_>})} sin 2ϵ_J. Phase shifts δ_{ℓSJ} and mixing angles ϵ_J for a wide range of laboratory kinetic energies T_lab are available at the Nijmegen NN-OnLine website <cit.>. Full data and a number of fits to the data are available on the SAID database <cit.>. In the energy range below 25 MeV, there are numerous measurements of the total np cross section, but not of pp, due to the large Coulomb effects; therefore the uncertainties in the latter are larger. A nice summary of the general formalism for two-body scattering of spin-1/2 particles can be found in Ref. <cit.>. The total cross section for NN elastic scattering is simply σ_NN = 2π ∑_J (2J + 1) |f_J(k⃗_cm)|^2 = [2π/|k⃗_cm|^2] ∑_J (2J + 1) sin^2 δ_{ℓSJ}(k⃗_cm), where k⃗_cm is the momentum in the centre-of-mass system, related to the laboratory kinetic energy by |k⃗_cm|^2 = (1/2) m_p T_lab, with m_p the proton mass. §.§.§ Global fits of total elastic cross sections In this work we use the SP07 and LE08 global fits for the total proton-proton and neutron-proton elastic scattering cross sections σ_pp and σ_np <cit.>, respectively, as shown in Fig. <ref>. The errors quoted therein are quite small, ranging from 0.01 mb at low incident energies to at most 0.8 mb at high incident energies. The huge cross section at zero energy indicates that there is a two-body bound state, or quasi-bound state, as manifested in the negative scattering lengths a_pp ≈ -17.1 fm and a_np ≈ -23.74 fm (see e.g. Ref. <cit.>). We also plot the NN elastic scattering cross section calculated using the OPE approximation, where for simplicity we neglect the pion mass m_π in the braces in the amplitude expression ∑_spins |ℳ^OPE_{NN→NN}|^2 = 4 (2m_N f_π/m_π)^4 {|k⃗|^4/(|k⃗|^2 + m^2_π)^2 + |l⃗|^4/(|l⃗|^2 + m^2_π)^2 + [|k⃗|^2|l⃗|^2 + 2(k⃗·l⃗)^2 - 2(|k⃗|^2 + |l⃗|^2)(k⃗·l⃗)]/[(|k⃗|^2 + m^2_π)(|l⃗|^2 + m^2_π)]}. As expected, the OPE approximation is good only for T_lab ≃ 10–20 MeV. At larger laboratory kinetic energies, it overestimates the cross section by a factor of 10 (for T_lab ≃ 100–400 MeV) to 4 (for T_lab ≃ 800–1000 MeV). The results in Ref. <cit.> show that for low-energy scattering, dσ_NN/dΩ has no strong angular dependence. Therefore we simply use ∑_spins |ℳ_NN|^2 ≈ 64 |𝒜_NN|^2 m^4_N to infer |𝒜_NN|^2 as a function of the centre-of-mass energy E^2_cm ≈ 4m^2_N + 2m_N T_lab. With this information, we estimate the amplitude squared for the nuclear bremsstrahlung process NN → NNαα as ∑_spins |ℳ^exp_{NN→NNαα}|^2 ≈ 1024 |𝒜_NN|^2 (f_N g m_N/m^2_H)^2 (q_1·q_2)^2/[(q^2 - m^2_h)^2 + m^2_h Γ^2_h] (-2q^2)^2/(2p·q)^4 m^6_N, after summing over the 64 direct and exchange diagrams. To evaluate the phase space integral in the energy loss rate calculation, we take the non-degenerate limit and proceed as in the OPE case. The energy loss rate is then Q^exp(ND)_{NN→NNαα} = (32𝒮/π^6) I^exp_0 n^2_B (f_N g m_N/m^2_H)^2 T^{11/2}/m^{1/2}_N. Here we define the integral I^exp_0(T, m_h, ⟨r⟩) ≡ ∫ du dv dx dy x^4 I_1(x, T, m_h, ⟨r⟩) √y e^{-y} √(uv) e^{-u} δ(u - v - x) · |𝒜_NN|^2(u, y), with y ≡ |P⃗|^2/m_N T. The result obtained by using the SP07 global fit to the σ_pp data is plotted in Fig. <ref> and compared to the OPE result. The overestimation by OPE happens to be milder for NN → NNαα than for NN → NN elastic scattering, because the kinematics of the exchanged pion in the nuclear bremsstrahlung process differs from that in elastic scattering. For neutrino emission from the nn → nn νν̅ processes, Ref.
<cit.> used on-shell NN amplitudes measured by experiments and found that the OPE approximation overestimated the energy loss rate by about a factor of four. Ref. <cit.> found that the next-to-next-to-next-to-leading order (N^3LO) chiral effective field theory calculations differ by about a factor of 2–3 from the leading order (LO) results, and that the result obtained by using the experimental phase shift data is very similar to the N^3LO one. Since the central contact terms in the chiral effective field theory do not contribute to the nuclear bremsstrahlung processes, the leading-order term is solely the one-pion exchange potential. For axions, the OPE approximation is also found to oversimplify the nuclear dynamics and overestimate the emission rate by a factor of four <cit.>. §.§.§ Chiral effective field theory predictions Charge independence breaking (CIB) of the strong NN interactions refers to the difference between the isospin I = 1 states: the proton-proton (I_z = +1), the neutron-proton (I_z = 0), and the neutron-neutron (I_z = -1) interactions, after electromagnetic effects are removed. Charge symmetry breaking (CSB) concerns the difference between the pp and nn interactions only. CIB is clearly seen in Fig. <ref>, while a small amount of CSB is observed in the measured scattering lengths a_nn and a_pp, as well as in the effective ranges r_nn and r_pp. A detailed discussion of the charge dependence of nuclear interactions can be found in Ref. <cit.> (see also Ref. <cit.>). Very recently, Ref. <cit.> has provided pp, nn and np phase shifts predicted by chiral effective field theory to N^4LO. In all partial waves, the predicted np phase shifts and mixing angles at this order are shown to agree excellently with the Nijmegen multi-energy <cit.> and the SP07 single-energy analyses <cit.>. Charge dependence due to pion-mass splitting is taken into account in the one-pion exchange terms only, while nucleon-mass splitting is always included. Fig. <ref> shows the total pp and nn elastic cross sections calculated with Eq. (<ref>) using the N^4LO chiral effective field theory phase shifts from Ref. <cit.>. The pp results agree very well with the SP07 global fit results. For T_lab ≲ 10 MeV, the Coulomb force in pp collisions is significant. At larger laboratory kinetic energies, chiral effective field theory calculations predict that the effects of charge symmetry breaking are ≲ 3% only. In this work we therefore use the experimental data and set σ_nn = σ_pp. Low-energy theorems <cit.> state that the first two terms in the series expansion of the bremsstrahlung amplitude in powers of the energy loss may be exactly calculated using the corresponding elastic, i.e. non-radiative, amplitude. In Ref. <cit.> it was argued that the model-independent approach of relating the nuclear bremsstrahlung amplitudes to the on-shell NN scattering amplitudes measured by experiments is not applicable to scalar particles such as the saxion. The reason is that the contributions to the leading-order terms (∝ ω^{-1}) from the emission of a scalar particle from the external nucleon legs cancel each other, which does not happen for axions and neutrino pairs <cit.>, or for KK-gravitons <cit.>. The next-to-leading-order term (∝ ω^0) includes the emission diagrams of the scalar particle from external legs as well as from internal lines, where the latter is not calculable due to the unknown interaction vertices, and may be dominant.
In Weinberg's Higgs portal model, we also find a cancellation of the leading-order terms between the diagrams in which the Goldstone boson pair is emitted from the external nucleon legs. The effective Higgs-pion coupling is ∝ (q^2 + (11/2)m^2_π)/v, so the emission from internal lines is of order 𝒪(ω^0) as well in the low-energy limit (cf. Eq. (<ref>)). However, in Weinberg's Higgs portal model Goldstone boson production in the PNS core is dominated by the emission of a real light Higgs boson in nuclear bremsstrahlung processes and its subsequent decay. Therefore, for small light Higgs boson mass m_h, the low-energy theorems should still be applicable. This remains to be verified by using chiral effective field theory to calculate the emission of the light Higgs boson h from the external nucleon legs as well as from the internal lines. § GOLDSTONE BOSON PROPAGATION IN THE PROTO-NEUTRON STAR CORE In the weakly-interacting regime, the Goldstone boson mean free path is set by the elastic scattering rate R_{αN→αN}. In the strongly-interacting regime, the absorption rate R_{ααNN→NN} may be comparable. The mean free path in the former case is l_mfp = (n_B σ_{αN→αN})^{-1}, while in the latter case the mean free path against absorption is l^absorb_mfp = (n^2_B σ_{ααNN→NN})^{-1}. For axions, Ref. <cit.> has considered the free-streaming regime, while Ref. <cit.> the trapping regime. The spin-summed amplitude squared for the elastic process α(q_1)N(p_1) → α(q_2)N(p_2) is ∑_spins |ℳ_{αN→αN}|^2 = 4 f^2_N g^2 (m^2_N/m^4_φ) (q_1·q_2)^2 [(p_1·p_2) + m^2_N]/(t - m^2_r)^2. We follow Ref. <cit.> to calculate the reaction rate R_{αN→αN} = n_B σ_{αN→αN} = ∫ [2 d^3p⃗_1/(2π)^3] f(p⃗_1) [1/(2ω_1 2E_1)] ∫ d^3q⃗_2/[(2π)^3 2ω_2] × ∫ d^3p⃗_2/[(2π)^3 2E_2] [1 - f(p⃗_2)] (1/2) ∑_spins |ℳ_{αN→αN}|^2 (2π)^4 δ^4(p_1 + q_1 - p_2 - q_2). Using the polar angle cosθ ≡ p⃗_1·q⃗_1/(|p⃗_1||q⃗_1|) and the azimuthal angle ϕ' measured from the (p⃗_1, q⃗_1)-plane, the 9-dimensional integral can be simplified to R_{αN→αN} = [1/(2π)^3] (m^4_N/4ω_1) (f^2_N g^2 m^2_N/m^4_φ) ∫^∞_1 dε_1 f(ε_1) √(ε^2_1 - 1) ∫^{+1}_{-1} dcosθ/λ(ε_1, u_1, cosθ) × ∫^{ε^max_2}_{ε^min_2} dε_2 [1 - f(ε_2)] ∫^{2π}_0 (dϕ'/2π) F_3, with the dimensionless variables ε_1 ≡ E_1/m_N, ε_2 ≡ E_2/m_N, and u_1 ≡ ω_1/m_N. The functions in the above equation are defined as λ(ε_1, u_1, cosθ) ≡ |p⃗_1 + q⃗_1|/m_N = √(ε^2_1 - 1 + u^2_1 + 2u_1(ε^2_1 - 1)^{1/2} cosθ) and F_3 ≡ {[q_1·(p_1 + q_1 - p_2)]^3 + 2m^2_N [q_1·(p_1 + q_1 - p_2)]^2}/{[2q_1·(p_1 + q_1 - p_2) + m^2_r]^2 m^2_N}, respectively, and the limits of the dε_2 integration are determined to be ε^{max,min}_2 = (1/2)[ε_1 + u_1 ± λ(ε_1, u_1, cosθ) + 1/(ε_1 + u_1 ± λ(ε_1, u_1, cosθ))]. To evaluate q_1·p_2, we need the angle cosθ_{q_1 p_2} ≡ cosθ' cosΔ_2 - sinθ' sinΔ_2 cosϕ', where cosΔ_1 = [√(ε^2_1 - 1) + u_1 cosθ]/λ, cosΔ_2 = [u_1 + √(ε^2_1 - 1) cosθ]/λ, with Δ_1 + Δ_2 = θ, and cosθ' = [E_2(E_1 + ω_1) - p_1·q_1 - m^2_N]/(|p⃗_2||p⃗_1 + q⃗_1|). We evaluate Eq. (<ref>) numerically using the VEGAS subroutine. For low incident Goldstone boson energies ω_1 ≪ m_N, the nuclear recoil effects can be neglected, and the interaction rate can also be easily estimated as R_{αN→αN} = n_B σ_{αN→αN} = n_B (ω^4_1/16π) (f^2_N g^2/m^4_φ) ∫^{+1}_{-1} dcosθ [ω^2_1(1 - cosθ)^3 + 2m^2_N(1 - cosθ)^2]/[2ω^2_1(1 - cosθ) + m^2_r]^2. We find that the results from this method agree with those from the full calculation to within 20% for ω_1 ≲ 100 MeV. In Fig.
<ref> we plot the Goldstone boson mean free path l_mfp times the Higgs portal coupling squared, g^2 l_mfp, versus the light Higgs boson mass m_h, for various incident Goldstone boson energies ω_1. Goldstone boson pairs are emitted with an average energy of ω̄/T = (1/T) Q_{NN→NNαα}/[n^2_N ⟨σ_{NN→NNαα} v_M⟩], where v_M is the Møller velocity. In Fig. <ref> we choose to plot the ratio of the Goldstone boson average emission energy to the light Higgs boson mass m_h. The curve indicates again that for m_h ≲ 500 MeV Goldstone boson emission is still dominated by the production of a real light Higgs boson h. We divide the free-streaming and the trapping regimes by l_mfp ≫ R_PNS and l_mfp ≪ R_PNS, respectively. The neutron star radius is about 10 km <cit.>, depending on the equation of state (see Refs. <cit.> for recent reviews). But the proto-neutron star radius is about 10–20 km at post-bounce times ≲ 3 s, slightly larger than that of neutron stars, as shown in the simulations of e.g. Ref. <cit.>. Therefore, if the Higgs portal coupling saturates the collider bound g ≤ 0.011, the Goldstone bosons would be trapped in the PNS core. In this case they still contribute to the cooling of the PNS core, and one needs to estimate the opacity of the medium to the Goldstone bosons, as in Ref. <cit.> for axions. The amplitudes for the Goldstone boson pair absorption rate, ∑_spins |ℳ_{ααNN→NN}|^2, are the same as for the nuclear bremsstrahlung energy loss rate. For simplicity, in this work we consider only the free-streaming regime, by demanding g ≲ √(g^2 l_mfp(ω̄)/R_PNS) ≡ g_fs for each light Higgs boson mass m_h. We plot the Goldstone boson free-streaming criterion g_fs in Fig. <ref>, assuming R_PNS = 20 km for the proto-neutron star radius. For m_h ≲ 50 MeV, it is beyond the projected sensitivity of future collider experiments for SM Higgs invisible decay (cf. Eq. (<ref>)). § SN 1987A CONSTRAINTS ON WEINBERG'S HIGGS PORTAL MODEL Ideally one should perform numerical simulations, as in Refs. <cit.>, to study the effects of the additional cooling agent on the neutrino burst signal. Here we simply invoke Raffelt's analytical criterion <cit.> on the energy loss rate per unit mass due to the emission of an exotic species X, ε_X ≡ Q_X/ρ ≲ 10^{19} erg·g^{-1}·s^{-1}, as shown in Fig. <ref>, Fig. <ref>, and Fig. <ref>. It is to be applied at typical PNS core conditions, i.e. at a temperature T = 30 MeV and baryon mass density ρ = 3·10^{14} g/cm^3. The SN 1987A constraint on Weinberg's Higgs portal model is obtained by finding the model parameters g and ⟨r⟩ for each light Higgs boson mass m_h such that the energy loss rate due to Goldstone boson emission satisfies Q_{NN→NNαα} < Q_X. In the resonance region of producing a real light Higgs boson h, where the approximation with the Poisson kernel limit is applicable, we have seen that Q_{NN→NNαα} ∝ g^2⟨r⟩^2. Therefore we scale the estimates for this quantity calculated using the one-pion exchange (OPE) approach and the SP07 global fits to the elastic pp cross section, both in the non-degenerate (ND) limit and assuming σ_nn = σ_pp (cf. Fig. <ref>), to be below Q_X. Our main results are presented in Fig. <ref>. In these SN 1987A constraints, the collider bound and the free-streaming criterion on g (Eq. (<ref>) and Eq. (<ref>), respectively), as well as the perturbativity condition on ⟨r⟩ (Eq. (<ref>)), are all satisfied. We find that using OPE and the SP07 global fits results in only a factor of 2.6 difference for 10 MeV ≲ m_h ≲ 50 MeV, and a factor of 1.4 for m_h > 300 MeV. The uncertainty from the effective Higgs-nucleon coupling f_N is ∼ 10%.
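Since the emission rates above are naturally computed in MeV^5 while Raffelt's criterion is stated in cgs units, invoking it requires a units conversion; a short bookkeeping sketch (ours, using only standard constants and the core conditions quoted above):

```python
# Raffelt's criterion translated into natural units.
MEV_TO_ERG = 1.6022e-6
HBARC_CM   = 1.9733e-11     # hbar*c in MeV*cm
HBAR_S     = 6.5821e-22     # hbar in MeV*s

mev5_to_cgs = MEV_TO_ERG / (HBARC_CM**3 * HBAR_S)   # MeV^5 -> erg cm^-3 s^-1
rho     = 3.0e14                                     # g/cm^3
eps_max = 1.0e19                                     # erg g^-1 s^-1

Q_max_mev5 = eps_max * rho / mev5_to_cgs
print(f"1 MeV^5 = {mev5_to_cgs:.3e} erg cm^-3 s^-1")
print(f"Raffelt bound: Q < {Q_max_mev5:.2e} MeV^5")   # ~1e-14 MeV^5
```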
We have not included Goldstone boson production from np bremsstrahlung processes, which would strengthen both bounds. Quantifying and discussing many-body and medium effects, or the impact of nucleon effective masses <cit.> on nuclear interactions, is beyond the scope of this work. Nevertheless, Fig. <ref> makes clear that, with nuclear uncertainties taken into account, the SN 1987A constraints still surpass those set by laboratory experiments <cit.> or by energy loss arguments in other astrophysical objects <cit.>, which we briefly summarise below. As first pointed out in Ref. <cit.>, decays of B mesons to K mesons plus missing energy can be an efficient probe of GeV or sub-GeV scalar dark matter. In Refs. <cit.> this consideration has been applied to Weinberg's Higgs portal model. If the light Higgs boson is lighter than 354 MeV, the decay of a K meson to a pion plus missing energy is a more powerful probe. We follow Ref. <cit.> and use the most stringent constraints on the decay branching ratios, ℬ(B^+ → K^+ + h) < 10^{-5} by the BaBar experiment <cit.>, and ℬ(K^+ → π^+ + h) < 10^{-10} by the E787 and E949 experiments <cit.> at Brookhaven National Laboratory. The former imposes a constraint on the φ-r mixing angle (Eq. (<ref>)) of θ < 0.0016 for m_h < m_B - m_K, while the latter gives θ < 8.7·10^{-5} for m_h < m_K - m_π = 354 MeV. Recently, the LHCb Collaboration has published upper limits on the branching fraction ℬ(B^+ → K^+ X) × ℬ(X → μ^+μ^-), where X is a hypothetical long-lived scalar particle <cit.>. The limits at the 95% confidence level vary between 2·10^{-10} and 10^{-7}, for a scalar particle mass in the range 250 MeV < m(X) < 4700 MeV and lifetime in the range 0.1 ps < τ(X) < 1000 ps. However, since in Weinberg's Higgs portal model we find ℬ(h → μ^+μ^-) ≲ 10^{-12}, the LHCb upper limits are not applicable. Also shown in Fig. <ref> are exclusion curves derived using radiative Upsilon decays, ℬ(Υ(nS) → γ + h) < 3·10^{-6}, as well as the muon anomalous magnetic moment, Δa_μ = 288·10^{-11}. Neither of them is useful for constraining g. In our previous work <cit.> we derived constraints using gamma-ray burst (GRB) observations. Due to resonance effects, Goldstone boson pairs can be rapidly produced by the electron-positron annihilation process in the initial fireballs of GRBs. On the other hand, the mean free path of the Goldstone bosons is larger than the size of the GRB initial fireballs, so they are not coupled to the GRB's relativistic flow and can lead to significant energy loss. Our GRB energy loss criterion is Q_{e^+e^-→αα} Δt' ≈ Q_{e^+e^-→αα} (1/Γ_0)(ΔR_0/β_0) ≳ ℰ/(Γ_0 V_0), where we used generic values for the GRB initial fireballs, such as total energy ℰ = 10^{52} erg, temperatures T_0 = 18 MeV as well as 8 MeV, radius R_0 = 10^{6.5} cm, wind velocity β_0 = 1/√3, and Lorentz factor Γ_0 = 1/√(1 - β^2_0). In fact, the GRB bounds on ⟨r⟩ have a slight dependence on the Higgs portal coupling g, which becomes visible when the light Higgs boson decay branching ratio to a pair of SM fermions, Γ_{h→ff̄}, is no longer negligible compared to that to a pair of Goldstone bosons, Γ_{h→αα}. We therefore considered g = 0.011, saturating the current collider bound, as well as g = 0.0015, which might be probed by future collider experiments. The region bounded by the two GRB exclusion curves, including the filled regions around them, represents the parameter space in Weinberg's Higgs portal model that can be probed by GRB physics.
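For orientation, the numbers entering this criterion combine as follows; the evaluation below (ours) uses only the quoted fireball parameters, with the shell thickness ΔR_0 ≈ R_0 as an explicit assumption and the speed of light restored in cgs units:

```python
import math

E_tot  = 1.0e52                       # erg
R0     = 10**6.5                      # cm
beta0  = 1.0 / math.sqrt(3.0)
Gamma0 = 1.0 / math.sqrt(1.0 - beta0**2)
V0     = 4.0 * math.pi / 3.0 * R0**3  # cm^3
dR0    = R0                           # assumption: shell thickness ~ R0
c_cgs  = 2.998e10                     # cm/s

# Q * (1/Gamma0) * (dR0/(beta0 c)) >~ E/(Gamma0 V0)  =>  Q >~ E beta0 c/(V0 dR0)
Q_min = E_tot * beta0 * c_cgs / (V0 * dR0)
print(f"Gamma0 = {Gamma0:.3f}, V0 = {V0:.2e} cm^3")
print(f"threshold emissivity: Q >~ {Q_min:.2e} erg cm^-3 s^-1")
```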
The GRB bounds are subject to large uncertainties, and are much weaker than the SN 1987A constraints. However, they are competitive with current laboratory constraints in the mass range m_h/T_0 ≲ 10–15. We conclude that Weinberg's Higgs portal model is another example illustrating that high-energy astrophysical objects are excellent laboratories for particle physics. In the extended version of Weinberg's Higgs portal model, the spin-independent WIMP-nucleon elastic scattering cross section is (following the definition given in e.g. Ref. <cit.>) σ^SI_{χN} = (4/π)(1/√2)^2 μ^2_{χN} (f_χ g⟨r⟩ f_N m_N/m^2_H m^2_h)^2. Here μ_{χN} = M_χ m_N/(M_χ + m_N) is the WIMP-nucleon reduced mass. The latest exclusion limits published by the dark matter direct search experiments LUX <cit.>, PandaX-II <cit.>, and XENON1T <cit.> are translated into constraints on the parameter combination f_χ g⟨r⟩/m^2_h for WIMP masses M_χ ranging from 6 GeV to 1 TeV. In order to make a comparison with the SN 1987A and laboratory constraints, the WIMP coupling is fixed by requiring the relic density to be Ω_χ h^2 ≃ 0.11, which yields f_χ ≈ 0.02 √(M_χ/GeV) <cit.>. The DM constraint was first derived in Ref. <cit.>, and here in Fig. <ref> it is shown for some representative values of the WIMP mass, M_χ = 6, 10 and 100 GeV. Note that it does not become more stringent for larger WIMP masses, because the experimental limits on σ^SI_{χN} also scale approximately with √(M_χ) for M_χ ≥ 100 GeV. We conclude that the SN 1987A constraints are comparable to the bounds from DM direct search results for M_χ ≲ 10 GeV, while the DM bounds for M_χ ≳ 100 GeV are the strongest among all bounds on Weinberg's Higgs portal model. § SUMMARY Weinberg's Higgs portal model is another example illustrating that high-energy astrophysical objects such as supernovae and gamma-ray bursts are excellent laboratories for particle physics. In this model, massless Goldstone bosons arising from the spontaneous breaking of a U(1) symmetry play the role of the dark radiation. The model was also extended to include a Majorana fermion with mass in the GeV to TeV range as the dark matter candidate. Both particle species couple to the Standard Model fields solely through the SM Higgs boson. Goldstone boson production in the proto-neutron star core is dominated by the emission of a real light Higgs boson in nuclear bremsstrahlung processes and its subsequent decay. The SN 1987A constraint on Weinberg's Higgs portal model is obtained by finding the parameter regions for the Higgs portal coupling g and the vacuum expectation value ⟨r⟩ of the radial field, for each light Higgs boson mass m_h, such that the energy loss rate due to Goldstone boson emission satisfies the Raffelt criterion. In order to invoke this criterion, the Higgs portal coupling g is required to be smaller than the current collider bound inferred from SM Higgs invisible decays, so that the Goldstone bosons are not trapped inside the proto-neutron star core. We found that using the one-pion exchange (OPE) approximation and the SP07 global fits for the pp elastic cross section results in only a factor of 2.6 difference for 10 MeV ≲ m_h ≲ 50 MeV, and a factor of 1.4 for m_h > 300 MeV. The SN 1987A constraints surpass those set by laboratory experiments or by energy loss arguments in other astrophysical objects, even with nuclear uncertainties taken into account.
In the extended version of Weinberg's Higgs portal model, the latest exclusion limits published by the dark matter direct search experiments LUX, PandaX-II, and XENON1T are translated into constraints on the parameter combination f_χ g⟨r⟩/m^2_h for WIMP masses M_χ ranging from 6 GeV to 1 TeV. Fixing the WIMP coupling f_χ with the measured dark matter relic density, we found that the SN 1987A constraints are comparable to the bounds from DM direct search results for WIMP masses M_χ ≲ 10 GeV, while the DM bounds for M_χ ≳ 100 GeV are the strongest bounds among all. § ACKNOWLEDGEMENTS We thank Xian-Wei Kang, Meng-Ru Wu, Tobias Fischer, and Jusak Tandean for helpful discussions. This work was supported in part by the Ministry of Science and Technology, Taiwan, ROC under the Grant No. 104-2112-M-001-039-MY3. Raffelt:1996wa G. G. Raffelt, Chicago, USA: Univ. Pr. (1996) 664 p. Fischer:2009af T. Fischer, S. C. Whitehouse, A. Mezzacappa, F.-K. Thielemann and M. Liebendorfer, Astron. Astrophys. 517 (2010) A80 doi:10.1051/0004-6361/200913106 [arXiv:0908.1871 [astro-ph.HE]]. Mueller:2014rna B. Müller and H. T. Janka, Astrophys. J. 788 (2014) 82 doi:10.1088/0004-637X/788/1/82 [arXiv:1402.3415 [astro-ph.SR]]. Camelio:2017nka G. Camelio, A. Lovato, L. Gualtieri, O. Benhar, J. A. Pons and V. Ferrari, arXiv:1704.01923 [astro-ph.HE]. Prakash:1996xs M. Prakash, I. Bombaci, M. Prakash, P. J. Ellis, J. M. Lattimer and R. Knorren, Phys. Rept. 280 (1997) 1 doi:10.1016/S0370-1573(96)00023-3 [nucl-th/9603042]. Pons:1998mm J. A. Pons, S. Reddy, M. Prakash, J. M. Lattimer and J. A. Miralles, Astrophys. J. 513 (1999) 780 doi:10.1086/306889 [astro-ph/9807040]. Nicotra:2005fj O. E. Nicotra, M. Baldo, G. F. Burgio and H.-J. Schulze, Astron. Astrophys. 451 (2006) 213 doi:10.1051/0004-6361:20053670 [nucl-th/0506066]. Janka:2017vlw H.-T. Janka, arXiv:1702.08713 [astro-ph.HE]. Raffelt:1987yt G. Raffelt and D. Seckel, Phys. Rev. Lett. 60 (1988) 1793. doi:10.1103/PhysRevLett.60.1793 Turner:1987by M. S. Turner, Phys. Rev. Lett. 60 (1988) 1797. doi:10.1103/PhysRevLett.60.1797 Mayle:1987as R. Mayle, J. R. Wilson, J. R. Ellis, K. A. Olive, D. N. Schramm and G. Steigman, Phys. Lett. B 203 (1988) 188. doi:10.1016/0370-2693(88)91595-X Brinkmann:1988vi R. P. Brinkmann and M. S. Turner, Phys. Rev. D 38 (1988) 2338. doi:10.1103/PhysRevD.38.2338 Janka:1995ir H. T. Janka, W. Keil, G. Raffelt and D. Seckel, Phys. Rev. Lett. 76 (1996) 2621 doi:10.1103/PhysRevLett.76.2621 [astro-ph/9507023]. Hanhart:2000er C. Hanhart, D. R. Phillips, S. Reddy and M. J. Savage, Nucl. Phys. B 595 (2001) 335 doi:10.1016/S0550-3213(00)00667-2 [nucl-th/0007016]. Hanhart:2001fx C. Hanhart, J. A. Pons, D. R. Phillips and S. Reddy, Phys. Lett. B 509 (2001) 1 doi:10.1016/S0370-2693(01)00544-5 [astro-ph/0102063]. Hannestad:2003yd S. Hannestad and G. G. Raffelt, Phys. Rev. D 67 (2003) 125008 Erratum: [Phys. Rev. D 69 (2004) 029901] doi:10.1103/PhysRevD.69.029901, 10.1103/PhysRevD.67.125008 [hep-ph/0304029]. Hannestad:2007ys S. Hannestad, G. Raffelt and Y. Y. Y. Wong, Phys. Rev. D 76 (2007) 121701 doi:10.1103/PhysRevD.76.121701 [arXiv:0708.1404 [hep-ph]]. Freitas:2007ip A. Freitas and D. Wyler, JHEP 0712 (2007) 033 doi:10.1088/1126-6708/2007/12/033 [arXiv:0708.4339 [hep-ph]]. Chang:2016ntp J. H. Chang, R. Essig and S. D. McDermott, JHEP 1701 (2017) 107 doi:10.1007/JHEP01(2017)107 [arXiv:1611.03864 [hep-ph]]. Guha:2015kka A. Guha, Selvaganapathy J. and P. K. Das, Phys. Rev. D 95 (2017) no.1, 015001 doi:10.1103/PhysRevD.95.015001 [arXiv:1509.05901 [hep-ph]]. Ishizuka:1989ts N. Ishizuka and M. Yoshimura, Prog. Theor. Phys. 84 (1990) 233.
doi:10.1143/PTP.84.233 Arndt:2002yg D. Arndt and P. J. Fox,JHEP 0302 (2003) 036 doi:10.1088/1126-6708/2003/02/036 [hep-ph/0207098]. Keil:1996ju W. Keil, H. T. Janka, D. N. Schramm, G. Sigl, M. S. Turner and J. R. Ellis,Phys. Rev. D 56 (1997) 2419 doi:10.1103/PhysRevD.56.2419 [astro-ph/9612222]. Fischer:2016cyd T. Fischer, S. Chakraborty, M. Giannotti, A. Mirizzi, A. Payez and A. Ringwald,arXiv:1605.08780 [astro-ph.HE]. Raffelt:1990yz G. G. Raffelt,Phys. Rept.198, 1 (1990). Raffelt:2006cw G. G. Raffelt,Lect. Notes Phys.741, 51 (2008) [hep-ph/0611350]. Weinberg:2013kea S. Weinberg,Phys. Rev. Lett.110 (2013) no.24,241301 doi:10.1103/PhysRevLett.110.241301 [arXiv:1305.1971 [astro-ph.CO]]. Riess:2016jrr A. G. Riess et al.,Astrophys. J.826 (2016) no.1,56 doi:10.3847/0004-637X/826/1/56 [arXiv:1604.01424 [astro-ph.CO]]. Heavens:2017hkr A. Heavens, Y. Fantaye, E. Sellentin, H. Eggers, Z. Hosenie, S. Kroon and A. Mootoovaloo,arXiv:1704.03467 [astro-ph.CO]. Ng:2014iqa K. W. Ng, H. Tu and T. C. Yuan,JCAP 1409 (2014) no.09,035 doi:10.1088/1475-7516/2014/09/035 [arXiv:1406.1993 [hep-ph]]. Cheung:2013oya K. Cheung, W. Y. Keung and T. C. Yuan,Phys. Rev. D 89 (2014) no.1,015007 doi:10.1103/PhysRevD.89.015007 [arXiv:1308.4235 [hep-ph]]. Anchordoqui:2013bfa L. A. Anchordoqui, P. B. Denton, H. Goldberg, T. C. Paul, L. H. M. Da Silva, B. J. Vlcek and T. J. Weiler,Phys. Rev. D 89 (2014) no.8,083513 doi:10.1103/PhysRevD.89.083513 [arXiv:1312.2547 [hep-ph]]. Akerib:2016vxi D. S. Akerib et al.,arXiv:1608.07648 [astro-ph.CO]. Aprile:2017iyp E. Aprile et al. [XENON Collaboration],arXiv:1705.06655 [astro-ph.CO]. Keung:2013mfa W. Y. Keung, K. W. Ng, H. Tu and T. C. Yuan,Phys. Rev. D 90 (2014) no.7,075014 doi:10.1103/PhysRevD.90.075014 [arXiv:1312.3488 [hep-ph]]. Tu:2015lwv H. Tu and K. W. Ng,JCAP 1603 (2016) no.03,037 doi:10.1088/1475-7516/2016/03/037 [arXiv:1512.05165 [hep-ph]]. Drees:1993bu M. Drees and M. Nojiri,Phys. Rev. D 48 (1993) 3483 doi:10.1103/PhysRevD.48.3483 [hep-ph/9307208]. Jungman:1995df G. Jungman, M. Kamionkowski and K. Griest,Phys. Rept.267 (1996) 195 doi:10.1016/0370-1573(95)00058-5 [hep-ph/9506380]. Hisano:2011cs J. Hisano, K. Ishiwata, N. Nagata and T. Takesako,JHEP 1107 (2011) 005 doi:10.1007/JHEP07(2011)005 [arXiv:1104.0228 [hep-ph]]. Cheng:2012qr H. Y. Cheng and C. W. Chiang,JHEP 1207 (2012) 009 doi:10.1007/JHEP07(2012)009 [arXiv:1202.1292 [hep-ph]]. Cline:2013gha J. M. Cline, K. Kainulainen, P. Scott and C. Weniger,Phys. Rev. D 88 (2013) 055025Erratum: [Phys. Rev. D 92 (2015) no.3,039906] doi:10.1103/PhysRevD.92.039906, 10.1103/PhysRevD.88.055025 [arXiv:1306.4710 [hep-ph]].He:2013suk X. G. He and J. Tandean,Phys. Rev. D 88 (2013) 013020 doi:10.1103/PhysRevD.88.013020 [arXiv:1304.6058 [hep-ph]]. Anchordoqui:2013pta L. A. Anchordoqui and B. J. Vlcek,Phys. Rev. D 88 (2013) 043513 doi:10.1103/PhysRevD.88.043513 [arXiv:1305.4625 [hep-ph]]. Bechtle:2014ewa P. Bechtle, S. Heinemeyer, O. Stl, T. Stefaniak and G. Weiglein,JHEP 1411 (2014) 039 doi:10.1007/JHEP11(2014)039 [arXiv:1403.1582 [hep-ph]]. Taketani:1951 M. Taketani, S. Nakamura and M. Sasaki,Prog. Theor. Phys. 6 (1951), 581.Entem:2017gor D. R. Entem, R. Machleidt and Y. Nosyk,arXiv:1703.05454 [nucl-th]. Entem:2015xwa D. R. Entem, N. Kaiser, R. Machleidt and Y. Nosyk,Phys. Rev. C 92 (2015) no.6,064001 doi:10.1103/PhysRevC.92.064001 [arXiv:1505.03562 [nucl-th]]. Machleidt:2001rw R. Machleidt and I. Slaus,J. Phys. G 27 (2001) R69 doi:10.1088/0954-3899/27/5/201 [nucl-th/0101056]. Naghdi:2007ek M. Naghdi,Phys. Part. 
Nucl.45 (2014) 924 doi:10.1134/S1063779614050050 [nucl-th/0702078]. Machleidt:2011zz R. Machleidt and D. R. Entem,Phys. Rept.503 (2011) 1 doi:10.1016/j.physrep.2011.02.001 [arXiv:1105.2919 [nucl-th]]. Machleidt:2016vlh R. Machleidt,Symmetry 8 (2016) no.4,26. doi:10.3390/sym8040026 Bacca:2008yr S. Bacca, K. Hally, C. J. Pethick and A. Schwenk,Phys. Rev. C 80 (2009) 032802 doi:10.1103/PhysRevC.80.032802 [arXiv:0812.0102 [nucl-th]]. Bacca:2015tva S. Bacca, R. Sharma and A. Schwenk,arXiv:1509.08151 [nucl-th].Friman:1978zq B. L. Friman and O. V. Maxwell,Astrophys. J.232 (1979) 541. doi:10.1086/157313 Hannestad:1997gc S. Hannestad and G. Raffelt,Astrophys. J.507 (1998) 339 doi:10.1086/306303 [astro-ph/9711132]. Bartl:2016iok A. Bartl, R. Bollig, H. T. Janka and A. Schwenk,Phys. Rev. D 94 (2016) 083009 doi:10.1103/PhysRevD.94.083009 [arXiv:1608.05037 [nucl-th]]. Bartl:2014hoa A. Bartl, C. J. Pethick and A. Schwenk,Phys. Rev. Lett.113 (2014) 081101 doi:10.1103/PhysRevLett.113.081101 [arXiv:1403.4114 [nucl-th]]. Hanhart:2000ae C. Hanhart, D. R. Phillips and S. Reddy,Phys. Lett. B 499 (2001) 9 doi:10.1016/S0370-2693(00)01382-4 [astro-ph/0003445]. Limkaisang:2001yz V. Limkaisang, K. Harada, J. Nagata, H. Yoshino, Y. Yoshino, M. Shoji and M. Matsuda,Prog. Theor. Phys.105 (2001) 233. doi:10.1143/PTP.105.233 Babenko:2016idp V. A. Babenko and N. M. Petrov,arXiv:1604.02912 [nucl-th].MartinezPinedo:2012rb G. Martinez-Pinedo, T. Fischer, A. Lohs and L. Huther,Phys. Rev. Lett.109 (2012) 251104 doi:10.1103/PhysRevLett.109.251104 [arXiv:1205.2793 [astro-ph.HE]]. Baldo:2016jhp M. Baldo and G. F. Burgio,arXiv:1606.08838 [nucl-th]. Trautmann:2016ntm W. Trautmann, M. D. Cozma and P. Russotto,PoS Bormio 2016 (2016) 036 [arXiv:1610.03650 [nucl-ex]].Bacca:2011qd S. Bacca, K. Hally, M. Liebendorfer, A. Perego, C. J. Pethick and A. Schwenk,Astrophys. J.758 (2012) 34 doi:10.1088/0004-637X/758/1/34 [arXiv:1112.5185 [astro-ph.HE]]. Albers:2004iw D. Albers et al.,Eur. Phys. J. A 22 (2004) 125 doi:10.1140/epja/i2004-10011-3 [nucl-ex/0403045]. Wilkin:2016qio C. Wilkin,EPJ Web Conf.130 (2016) 01007. doi:10.1051/epjconf/201613001007 Arndt:2000xc R. A. Arndt, I. I. Strakovsky and R. L. Workman,Phys. Rev. C 62 (2000) 034005 doi:10.1103/PhysRevC.62.034005 [nucl-th/0004039]. Nijmegen http://nn-online.orgSAID http://gwdac.phys.gwu.eduKang:2014ioa X. W. Kang, PhD thesis, “Chiral Dynamics and Final State Interactions in Semileptonic B Meson Decay and Antinucleon-Nucleon Scattering,” University of Bonn (2014),http://hss.ulb.uni-bonn.de/2014/3714/3714.htm Arndt:2007qn R. A. Arndt, W. J. Briscoe, I. I. Strakovsky and R. L. Workman,Phys. Rev. C 76 (2007) 025209 doi:10.1103/PhysRevC.76.025209 [arXiv:0706.2195 [nucl-th]]. Arndt:2008uc R. A. Arndt, W. J. Briscoe, A. B. Laptev, I. I. Strakovsky and R. L. Workman,Nucl. Sci. Eng.162 (2009) 312 [arXiv:0806.1198 [nucl-ex]]. Konobeevski:2017mpw E. S. Konobeevski, S. V. Zuyev, V. I. Kukulin and V. N. Pomerantsev,arXiv:1703.00519 [nucl-th]. Stoks:1993tb V. G. J. Stoks, R. A. M. Klomp, M. C. M. Rentmeester and J. J. de Swart,Phys. Rev. C 48 (1993) 792. doi:10.1103/PhysRevC.48.792 Low:1958sn F. E. Low,Phys. Rev.110 (1958) 974. doi:10.1103/PhysRev.110.974 Adler:1966gc S. L. Adler and Y. Dothan,Phys. Rev.151 (1966) 1267. doi:10.1103/PhysRev.151.1267 Heller:1969ur L. Heller,Phys. Rev.174 (1968) 1580. doi:10.1103/PhysRev.174.1580Burrows:1990pk A. Burrows, M. T. Ressell and M. S. Turner,Phys. Rev. D 42 (1990) 3297. doi:10.1103/PhysRevD.42.3297 Tubbs:1975jx D. L. Tubbs and D. N. Schramm,Astrophys. 
J.201 (1975) 467. doi:10.1086/153909 Guillot:2013wu S. Guillot, M. Servillat, N. A. Webb and R. E. Rutledge,Astrophys. J.772 (2013) 7 doi:10.1088/0004-637X/772/1/7 [arXiv:1302.0023 [astro-ph.HE]]. Raithel:2016vtt C. A. Raithel, F. Ozel and D. Psaltis,Phys. Rev. C 93 (2016) no.3,032801 doi:10.1103/PhysRevC.93.032801 [arXiv:1603.06594 [astro-ph.HE]].Lattimer:2015nhk J. M. Lattimer and M. Prakash,arXiv:1512.07820 [astro-ph.SR]. Miller:2016pom M. C. Miller and F. K. Lamb,Eur. Phys. J. A 2016 52 [arXiv:1604.03894 [astro-ph.HE]].Baldo:2014yja M. Baldo, G. F. Burgio, H.-J. Schulze and G. Taranto,Phys. Rev. C 89 (2014) no.4,048801 doi:10.1103/PhysRevC.89.048801 [arXiv:1404.7031 [nucl-th]].Bird:2004ts C. Bird, P. Jackson, R. V. Kowalewski and M. Pospelov,Phys. Rev. Lett.93 (2004) 201803 doi:10.1103/PhysRevLett.93.201803 [hep-ph/0401195]. Huang:2013oua F. P. Huang, C. S. Li, D. Y. Shao and J. Wang,Eur. Phys. J. C 74 (2014) 8,2990 [arXiv:1307.7458 [hep-ph]].delAmoSanchez:2010bk P. del Amo Sanchez et al. [BaBar Collaboration],Phys. Rev. D 82 (2010) 112002 doi:10.1103/PhysRevD.82.112002 [arXiv:1009.1529 [hep-ex]].Artamonov:2009sz A. V. Artamonov et al. [BNL-E949 Collaboration],Phys. Rev. D 79 (2009) 092004 doi:10.1103/PhysRevD.79.092004 [arXiv:0903.0030 [hep-ex]]. Aaij:2016qsm R. Aaij et al. [LHCb Collaboration],Phys. Rev. D 95 (2017) no.7,071101 doi:10.1103/PhysRevD.95.071101 [arXiv:1612.07818 [hep-ex]]. Kurylov:2003ra A. Kurylov and M. Kamionkowski,Phys. Rev. D 69 (2004) 063503 doi:10.1103/PhysRevD.69.063503 [hep-ph/0307185]. Tan:2016zwf A. Tan et al. [PandaX-II Collaboration],Phys. Rev. Lett.117 (2016) no.12,121303 doi:10.1103/PhysRevLett.117.121303 [arXiv:1607.07400 [hep-ex]].
http://arxiv.org/abs/1706.08340v1
{ "authors": [ "Huitzu Tu", "Kin-Wang Ng" ], "categories": [ "hep-ph", "astro-ph.HE" ], "primary_category": "hep-ph", "published": "20170626123339", "title": "Supernovae and Weinberg's Higgs Portal Dark Radiation and Dark Matter" }
We prove that the class of convolution-type kernels satisfying suitable decay conditions on the Fourier transform, appearing in the works of Christ <cit.>, Christ-Rubio de Francia <cit.> and Duoandikoetxea-Rubio de Francia <cit.>, gives rise to maximally truncated singular integrals satisfying a sparse bound by (1+ε,1+ε)-averages for all ε>0, with linear growth in ε^{-1}. This is an extension of the sparse domination principle by Conde-Alonso, Culiuc, Ou and the first author <cit.> to maximally truncated singular integrals. Our results cover the rough homogeneous singular integrals on ℝ^d, T_Ω f(x) = p.v. ∫_{ℝ^d} f(x-t) Ω(t/|t|)/|t|^d dt, with angular part Ω ∈ L^∞(S^{d-1}) having vanishing integral on the sphere. Consequences of our sparse bound include novel quantitative weighted norm estimates as well as Fefferman-Stein type inequalities. In particular, we obtain that the L^2(w) norm of the maximal truncation of T_Ω depends quadratically on the Muckenhoupt constant [w]_{A_2}, extending a result originally due to Hytönen, Roncal and Tapiola <cit.>. A suitable convex-body valued version of the sparse bound is also deduced and employed towards novel matrix weighted norm inequalities for the maximally truncated rough homogeneous singular integrals. Our result is quantitative, but even the qualitative statement is new, and the present approach via sparse domination is the only one currently known for the matrix weighted bounds of this class of operators. § INTRODUCTION AND MAIN RESULTS Let η ∈ (0,1). A countable collection 𝒮 of cubes of ℝ^d is said to be η-sparse if there exist measurable sets {E_I: I ∈ 𝒮} such that E_I ⊂ I, |E_I| ≥ η|I|, and E_I ∩ E_J = ∅ whenever I, J ∈ 𝒮 with I ≠ J. Let T be a sublinear operator mapping the space L^∞_0(ℝ^d) of complex-valued, bounded and compactly supported functions on ℝ^d into locally integrable functions. We say that T has the sparse (p_1,p_2) bound <cit.> if there exists a constant C>0 such that for all f_1, f_2 ∈ L^∞_0(ℝ^d) we may find a 1/2-sparse collection 𝒮 = 𝒮(f_1,f_2) such that |⟨Tf_1, f_2⟩| ≤ C ∑_{Q∈𝒮} |Q| ∏_{j=1}^2 ⟨f_j⟩_{p_j,Q}, in which case we denote by ‖T‖_{(p_1,p_2),𝗌𝗉𝖺𝗋𝗌𝖾} the least such constant C. As customary, ⟨f⟩_{p,Q} = ‖f 1_Q‖_p/|Q|^{1/p}, p ∈ (0,∞]. Estimating the sparse norm(s) of a sublinear or multisublinear operator entails sharp control over the behavior of such operators in weighted L^p-spaces; this theme has recently been pursued by several authors, see for instance <cit.>. This sharp control is exemplified in the following proposition, which is a collection of known facts from the indicated references. Let T be a sublinear operator on ℝ^d mapping L^∞_0(ℝ^d) to L^1_loc(ℝ^d). Then the following hold. 1. <cit.> Let 1 ≤ p_1, p_2 < ∞. There is an absolute constant C_{p_2} > 0 such that ‖T: L^{p_1}(ℝ^d) → L^{p_1,∞}(ℝ^d)‖ ≤ C_{p_2} ‖T‖_{(p_1,p_2),𝗌𝗉𝖺𝗋𝗌𝖾}. 2. <cit.> If Ψ(t) := ‖T‖_{(1+1/t,1+1/t),𝗌𝗉𝖺𝗋𝗌𝖾} < ∞ for all t>1, then there is an absolute constant C>0 such that ‖T‖_{L^2(w,ℝ^d)} ≤ C[w]_{A_2} Ψ(C[w]_{A_2}). In particular, sup_{t>1} Ψ(t) < ∞ implies ‖T‖_{L^2(w,ℝ^d)} ≤ C[w]_{A_2}. In this article, we are concerned with the sparse norms (<ref>) of a class of convolution-type singular integrals whose systematic study dates back to the celebrated works by Christ <cit.>, Christ-Rubio de Francia <cit.>, and Duoandikoetxea-Rubio de Francia <cit.>, admitting a decomposition with good decay properties of the Fourier transform.
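Before specifying the kernel class, we pause for a concrete illustration of sparse collections and sparse forms. The self-contained Python sketch below (ours, purely expository; all names are ours) builds a 1/2-sparse family of dyadic intervals in d = 1 by the usual Calderón-Zygmund selection, and evaluates the associated sparse form with (1,1)-averages for simplicity:

```python
import numpy as np

N = 2**12
GRID = (np.arange(N) + 0.5) / N
MIN_LEN = 2**-8

def avg(f, lo, hi):
    m = (GRID >= lo) & (GRID < hi)
    return float(np.mean(np.abs(f[m])))

def select(f, lo, hi, family):
    """Append Q = [lo, hi) and recurse on the maximal dyadic Q' in Q with
    <|f|>_{Q'} > 2 <|f|>_Q. By Chebyshev these children cover at most half
    of Q, so E_Q = Q minus its children witnesses 1/2-sparseness."""
    family.append((lo, hi))
    thresh = 2.0 * avg(f, lo, hi)
    stack = [(lo, (lo + hi) / 2), ((lo + hi) / 2, hi)]
    while stack:
        a, b = stack.pop()
        if b - a < MIN_LEN:
            continue
        if avg(f, a, b) > thresh:
            select(f, a, b, family)      # maximal: stop subdividing here
        else:
            stack.extend([(a, (a + b) / 2), ((a + b) / 2, b)])
    return family

rng = np.random.default_rng(0)
f = np.abs(rng.standard_normal(N)) ** 3            # a sample function on [0,1)
S = select(f, 0.0, 1.0, [])
rhs = sum((hi - lo) * avg(f, lo, hi) ** 2 for lo, hi in S)  # f_1 = f_2 = f
print(f"|S| = {len(S)}, sparse form value = {rhs:.4f}")
```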
Returning to the kernel class, let {K_s: ℝ^d → ℂ, s ∈ ℤ} be a sequence of (smooth) functions with the properties that supp K_s ⊂ A_s := {x ∈ ℝ^d: 2^{s-4} < |x|_∞ < 2^{s-2}}, sup_{s∈ℤ} 2^{sd} ‖K_s‖_∞ ≤ 1, sup_{s∈ℤ} sup_{ξ∈ℝ^d} max{|2^s ξ|^α, |2^s ξ|^{-α}} |K̂_s(ξ)| ≤ 1, for some α > 0. We consider truncated singular integrals of the type Tf(x,t_1,t_2) = ∑_{t_1 < s ≤ t_2} K_s * f(x), t_1, t_2 ∈ ℤ, and their maximal version T_⋆ f(x) := sup_{t_1 ≤ t_2} |Tf(x,t_1,t_2)|. For all 0 < ε < 1, ‖T_⋆‖_{(1+ε,1+ε),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ 1/ε, with absolute dimensional implicit constant, in particular uniform over families {K_s} satisfying (<ref>). Theorem <ref> immediately entails a variety of novel corollaries involving weighted norm inequalities for the maximally truncated operators T_⋆. In addition to, for instance, those obtained by suitably applying the points of Proposition <ref>, we also detail the quantitative estimates below, whose proof will be given in Section <ref>. Let T be a sublinear operator satisfying the sparse bound (<ref>) with Ψ(t) ≤ Ct. 1. For any 1 < p < ∞, ‖T‖_{L^p(w)} ≲ [w]_{A_p}^{1/p}([w]_{A_p}^{1/p'} + [σ]_{A_∞}^{1/p}) max{[σ]_{A_∞}, [w]_{A_∞}}, with implicit constant possibly depending on p and the dimension d; in particular, ‖T‖_{L^p(w)} ≲ [w]_{A_p}^{2max{1, 1/(p-1)}}. 2. The Fefferman-Stein type inequality ‖Tf‖_{L^p(w)} ≲ p^2 (p')^{1/p} (r')^{1+1/p'} ‖f‖_{L^p(M_r w)}, r < p, holds with implicit constant possibly depending on d only. 3. The A_q-A_∞ estimate ‖Tf‖_{L^p(w)} ≲ [w]_{A_q}^{1/p} [w]_{A_∞}^{1+1/p'} ‖f‖_{L^p(w)} holds for q < p and w ∈ A_q, with implicit constant possibly depending on p, q and d only. 4. The following Coifman-Fefferman type inequality holds for all ε > 0, with implicit constant possibly depending on p and d only: ‖Tf‖_{L^p(w)} ≲ [w]_{A_∞}^2 ε^{-1} ‖M_{1+ε} f‖_{L^p(w)}. Take Ω: S^{d-1} → ℂ with ‖Ω‖_∞ ≤ 1 and having vanishing integral on S^{d-1}, and consider the associated truncated integrals and their maximal function T_{Ω,δ} f(x) := ∫_{δ<|t|<1/δ} f(x-t) Ω(t/|t|)/|t|^d dt, T_{Ω,⋆} f(x) := sup_{δ>0} |T_{Ω,δ} f(x)|, x ∈ ℝ^d. It is well known – for instance, see the recent contribution <cit.> – that T_{Ω,⋆} f(x) ≲ Mf(x) + T_⋆ f(x), x ∈ ℝ^d, with T_⋆ being defined as in (<ref>) for a suitable choice of {K_s: s ∈ ℤ} satisfying (<ref>) with α = 1/d. As ‖M‖_{(1,1),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ 1, a corollary of Theorem <ref> is that ‖T_{Ω,⋆}‖_{(1+ε,1+ε),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ 1/ε as well. The main result of <cit.> is the stronger control sup_{δ>0} ‖T_{Ω,δ}‖_{(1,1+ε),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ 1/ε. The above estimate, in particular, is stronger than the uniform weak type (1,1) for the operators T_{Ω,δ}, a result originally due to Seeger <cit.>. As the weak type (1,1) of T_{Ω,⋆} under no additional smoothness assumption on Ω is a difficult open question, estimating the (1,1+ε) sparse norm of T_{Ω,⋆} as in (<ref>) seems out of reach. The study of sharp weighted norm inequalities for T_{Ω,δ} (the uniformity in δ is of course relevant here) was initiated in the recent article <cit.> by Hytönen, Roncal and Tapiola. Improved quantifications have been obtained in <cit.> as a consequence of the domination result (<ref>), and further weighted estimates – including a Coifman-Fefferman type inequality, that is, a norm control of T_{Ω,δ} by M on all L^p(w), 0 < p < ∞, when w ∈ A_∞ – have later been derived from (<ref>) in the recent preprint by the third named author, Pérez, Roncal and Rivera-Ríos <cit.>. Although (<ref>) is a bit weaker than (<ref>), we see from a comparison of (<ref>) from Theorem <ref> with the results of <cit.> that the quantification of the L^2(w)-norm dependence on [w]_{A_2} entailed by the two estimates is the same – quadratic; on the contrary, for p ≠ 2, (<ref>) yields the better estimate ‖T_{Ω,δ}‖_{L^p(w)} ≲ [w]_{A_p}^{p'}.
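For readers who wish to experiment with the quantitative statements, the following Python sketch (ours) approximates [w]_{A_2} = sup_Q ⟨w⟩_Q ⟨w^{-1}⟩_Q for the power weight w(x) = |x|^a on ℝ by maximizing over a family of intervals; the weighted bounds above can then be tracked numerically as a → 1:

```python
import numpy as np

def a2_constant(a, n_centers=50, n_scales=40, n_quad=4000):
    """Approximate [w]_{A_2} for w(x) = |x|^a on R, valid for |a| < 1
    (so that both w and w^{-1} are locally integrable)."""
    best = 0.0
    for c in np.linspace(0.0, 2.0, n_centers):
        for r in np.logspace(-3, 1, n_scales):
            x = np.linspace(c - r, c + r, n_quad)   # grid avoids x = 0 exactly
            w = np.abs(x) ** a
            winv = np.abs(x) ** (-a)
            best = max(best, np.trapz(w, x) * np.trapz(winv, x) / (2 * r) ** 2)
    return best

for a in (0.0, 0.3, 0.6, 0.9):
    print(f"a = {a:>4}: [w]_A2 ~ {a2_constant(a):.2f}")
    # for centered intervals the exact value is 1/(1 - a^2), blowing up as a -> 1
```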
We also observe that the proof of the mixed estimate (<ref>) actually yields the following estimate for the non-maximally truncated operators, improving the previous estimate given in <cit.>: ‖T_{Ω,δ} f‖_{L^p(w)} ≲ [w]_{A_q}^{1/p} [w]_{A_∞}^{1/p'} ‖f‖_{L^p(w)}. Finally, we emphasize that (<ref>) also yields a precise dependence on p of the unweighted L^p operator norms. Namely, from the sparse domination we get ‖T_{Ω,⋆}‖_{L^p(ℝ^d) → L^{p,∞}(ℝ^d)} ≲ max{p, p'}, ‖T_{Ω,⋆}‖_{L^p(ℝ^d)} ≲ pp' max{p, p'}, with absolute dimensional implicit constant, which improves on the implicit constants in <cit.>. Moreover, we note that the main result of <cit.> implies that if (<ref>) is sharp, then our quantitative weighted estimate (<ref>) is also sharp. Comparing Theorem <ref> with the sparse domination formula for commutators of Calderón-Zygmund operators with BMO symbols <cit.>, all our weighted corollaries hold for commutators as well, with the help of the John-Nirenberg inequality. §.§ Matrix weighted estimates for vector valued rough singular integrals Let (e_ℓ)_{ℓ=1}^n, ⟨·,·⟩_{𝔽^n} and |·|_{𝔽^n} be the canonical basis, scalar product and norm on 𝔽^n over 𝔽, where 𝔽 ∈ {ℝ, ℂ}. A recent trend in harmonic analysis – see, among others, <cit.> – is the study of quantitative matrix weighted norm inequalities for the canonical extension of the (integral) linear operator T, ⟨Tf(x), e_ℓ⟩_{𝔽^n} := ⟨T ⊗ Id_{𝔽^n} f(x), e_ℓ⟩_{𝔽^n} = T(⟨f, e_ℓ⟩_{𝔽^n})(x), x ∈ ℝ^d, to 𝔽^n-valued functions f. In Section <ref> of this paper, we introduce an L^p, p > 1, version of the convex body averages first brought into the sparse domination context by Nazarov, Petermichl, Treil and Volberg <cit.>, and use them to produce a vector valued version of Theorem <ref>. As a corollary, we obtain quantitative matrix weighted estimates for the maximally truncated vector valued extension of the rough singular integrals T_{Ω,δ} from (<ref>). In fact, the next corollary is a special case of the more precise Theorem <ref> from Section <ref>. Let W be a positive semidefinite and locally integrable ℒ(𝔽^n)-valued function on ℝ^d and T_{Ω,δ} be as in (<ref>). Then ‖sup_{δ>0} |W^{1/2} T_{Ω,δ} f|_{𝔽^n}‖_{L^2(ℝ^d)} ≲ [W]_{A_2}^{5/2} ‖|W^{1/2} f|_{𝔽^n}‖_{L^2(ℝ^d)}, with implicit constant depending on d, n only, where the matrix A_2 constant is given by [W]_{A_2} := sup_{Q cube of ℝ^d} ‖(1/|Q| ∫_Q W(x) dx)^{1/2} (1/|Q| ∫_Q W^{-1}(x) dx)^{1/2}‖_{ℒ(𝔽^n)}^2. As the left hand side of (<ref>) dominates the matrix weighted norm of the vector valued maximal operator first studied by Christ and Goldberg in <cit.>, the finiteness of [W]_{A_2} is actually necessary for the estimate to hold. To the best of the authors' knowledge, Theorem <ref> has no predecessors, in the sense that no matrix weighted norm inequalities for vector valued rough singular integrals were known before, even in qualitative form. At this time we are unable to assess whether the power 5/2 appearing in (<ref>) is optimal. For comparison, if the angular part Ω is Hölder continuous, the currently best known result <cit.> is that (<ref>) holds with power 3/2; see also <cit.>. §.§ Strategy of proof of the main results We will obtain Theorem <ref> by an application of an abstract sparse domination principle, Theorem <ref> from Section <ref>, which is a modification of <cit.>. At the core of our approach lies a special configuration of stopping cubes, the so-called stopping collections 𝒬, and their related atomic spaces. The necessary definitions, together with a useful interpolation principle for the atomic spaces, appear in Section <ref>.
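Returning for a moment to the matrix setting, the A_2 characteristic defined above is straightforward to approximate numerically for a concrete weight; a small sketch of ours (the weight W and all names are illustrative choices), where the matrix square roots and operator norms make the computation slightly more delicate than in the scalar case:

```python
import numpy as np
from scipy.linalg import sqrtm

def W(x):
    """A sample 2x2 matrix weight on R: rotated diagonal power weights."""
    c, s = np.cos(x), np.sin(x)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([abs(x) ** 0.5 + 1e-9, (abs(x) + 1e-9) ** -0.5])
    return R @ D @ R.T

def matrix_a2(intervals, n_quad=2000):
    best = 0.0
    for lo, hi in intervals:
        xs = np.linspace(lo, hi, n_quad)
        avg_W    = np.mean([W(x) for x in xs], axis=0)
        avg_Winv = np.mean([np.linalg.inv(W(x)) for x in xs], axis=0)
        A = sqrtm(avg_W) @ sqrtm(avg_Winv)
        best = max(best, np.linalg.norm(A, 2) ** 2)   # squared operator norm
    return best

ivs = [(c - r, c + r) for c in np.linspace(-1, 1, 9) for r in (0.1, 0.5, 1.0)]
print(f"[W]_A2 ~ {matrix_a2(ivs):.2f}")
```

We now resume the outline of the proof.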
In essence, Theorem <ref> can be summarized by the inequality ‖T_⋆‖_{(p_1,p_2),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ ‖T_⋆‖_{ℒ(L^2(ℝ^d))} + sup_{𝒬, t_1, t_2} ( ‖Λ_𝒬{}_{t_1}^{t_2}‖_{𝒳̇_{p_1} × 𝒴_{p_2}} + ‖Λ_𝒬{}_{t_1}^{t_2}‖_{𝒴_∞ × 𝒳̇_{p_2}} ), where the supremum is taken over all stopping collections 𝒬 and all measurable linearizations of the truncation parameters t_1, t_2, and the Λ_𝒬{}_{t_1}^{t_2} are suitably adapted localizations of (the adjoint form to the linearized versions of) T_⋆. In Section <ref>, we prove the required uniform estimates for the localizations Λ_𝒬{}_{t_1}^{t_2} coming from Dini-smooth kernels. The proof of Theorem <ref> is given in Section <ref>, relying upon the estimates of Section <ref> and the Littlewood–Paley decomposition of the convolution kernels (<ref>), whose first appearance dates back to <cit.>. We remark that while this article was being finalized, an alternative proof of (<ref>) was given by Lerner <cit.>. It is of interest whether the strategy of <cit.>, relying on bumped bilinear grand local maximal functions, can be applied towards estimate (<ref>) as well. §.§ Notation With q' = q/(q−1) we indicate the Lebesgue dual exponent to q ∈ (1,∞), with the usual extension 1' = ∞, ∞' = 1. The center and the (dyadic) scale of a cube Q ⊂ ℝ^d will be denoted by c_Q and s_Q respectively, so that |Q| = 2^{s_Q d}. We use the notation M_p(f)(x) = sup_{Q ⊂ ℝ^d} ⟨f⟩_{p,Q} 1_Q(x) for the p-Hardy–Littlewood maximal function and write M in place of M_1. Unless otherwise specified, the almost inequality signs ≲ imply absolute dimensional constants which may be different at each occurrence. §.§ Acknowledgments This work was completed during F. Di Plinio's stay at the Basque Center for Applied Mathematics (BCAM), Bilbao, as a visiting fellow. The author gratefully acknowledges the kind hospitality of the staff and researchers at BCAM, and in particular of Carlos Pérez. The authors also want to thank José Conde-Alonso, Amalia Culiuc, Yumeng Ou and Ioannis Parissis for several inspiring discussions on sparse domination principles. § STOPPING COLLECTIONS AND INTERPOLATION IN LOCALIZED SPACES The notion of a stopping collection 𝒬 with top the (dyadic) cube Q has been introduced in <cit.>, to which we refer for details. Here, we recall that such a 𝒬 is a collection of pairwise disjoint dyadic cubes contained in 3Q and satisfying suitable Whitney type properties. More precisely, ⋃_{L∈𝖼𝒬} 9L ⊂ 𝗌𝗁𝒬 := ⋃_{L∈𝒬} L ⊂ 3Q, 𝖼𝒬 := {L∈𝒬 : 3L ∩ 2Q ≠ ∅}; L, L' ∈ 𝒬, L ∩ L' ≠ ∅ ⟹ L = L'; L ∈ 𝒬, L' ∈ N(L) ⟹ |s_L − s_{L'}| ≤ 8, N(L) := {L' ∈ 𝒬 : 3L ∩ 3L' ≠ ∅}. A consequence of (<ref>) is that the cardinality of N(L) is bounded by an absolute constant. The spaces 𝒴_p(𝒬), 𝒳_p(𝒬), 𝒳̇_p(𝒬) have also been defined in <cit.>: here we recall that 𝒴_p(𝒬) is the subspace of L^p(ℝ^d) of functions satisfying supp f ⊂ 3Q, ∞ > ‖f‖_{𝒴_p(𝒬)} := max{ ‖f 1_{ℝ^d ∖ 𝗌𝗁𝒬}‖_∞, sup_{L∈𝒬} inf_{x∈L̂} M_p f(x) } if p < ∞, and ‖f‖_∞ if p = ∞, where L̂ stands for the (non-dyadic) 2^5-fold dilate of L, and that 𝒳_p(𝒬) is the subspace of 𝒴_p(𝒬) of functions satisfying b = ∑_{L∈𝒬} b_L, supp b_L ⊂ L. Finally, we write b ∈ 𝒳̇_p(𝒬) if b ∈ 𝒳_p(𝒬) and each b_L has mean zero. We will omit (𝒬) from the subscript of the norms whenever the stopping collection 𝒬 is clear from context. There is a natural interpolation procedure involving the 𝒴_p-spaces. We do not strive for the most general result but restrict ourselves to proving a significant example, which is also of use to us in the proof of Theorem <ref>. Proposition. Let B be a bisublinear form and A_1, A_2 be positive constants such that the estimates |B(b,f)| ≤ A_1 ‖b‖_{𝒳̇_1(𝒬)} ‖f‖_{𝒴_1(𝒬)}, |B(g_1,g_2)| ≤ A_2 ‖g_1‖_{𝒴_2(𝒬)} ‖g_2‖_{𝒴_2(𝒬)} hold. Then for all 0 < ε < 1, |B(f_1,f_2)| ≲ (A_1)^{1−ε} (A_2)^{ε} ‖f_1‖_{𝒳̇_p(𝒬)} ‖f_2‖_{𝒴_p(𝒬)}, p = 2/(2−ε). Proof. We may assume A_2 < A_1, otherwise there is nothing to prove. We are allowed to normalize A_1 = 1. Fixing now 0 < ε < 1, so that 1 < p < 2, it will suffice to prove the estimate |B(f_1,f_2)| ≲ (A_2)^{ε} for each pair f_1 ∈ 𝒳̇_p(𝒬), f_2 ∈ 𝒴_p(𝒬) with ‖f_1‖_{𝒳̇_p} = ‖f_2‖_{𝒴_p} = 1, with implied constant depending on dimension only. Let λ ≥ 1 be chosen later. Using the notation f_{>λ} := f 1_{|f|>λ}, we introduce the decompositions f_1 = g_1 + b_1, b_1 := ∑_{Q∈𝒬} ( (f_1)_{>λ} − (1/|Q|)∫_Q (f_1)_{>λ} ) 1_Q, and f_2 = g_2 + b_2, b_2 := (f_2)_{>λ}, which verify the properties g_1 ∈ 𝒳̇_2(𝒬), ‖g_1‖_{𝒳̇_p} ≲ 1, ‖g_1‖_{𝒳̇_2} ≲ λ^{1−p/2}, b_1 ∈ 𝒳̇_1(𝒬), ‖b_1‖_{𝒳̇_1} ≲ λ^{1−p}, as well as ‖g_2‖_{𝒴_2} ≲ λ^{1−p/2}, ‖b_2‖_{𝒳_1} ≲ λ^{1−p}. We have used that b_1 is supported on the union of the cubes Q ∈ 𝒬 and has mean zero on each Q, and therefore g_1 has the same property, given that f_1 ∈ 𝒳̇_p(𝒬). Therefore |B(f_1,f_2)| ≤ |B(b_1,b_2)| + |B(b_1,g_2)| + |B(g_1,b_2)| + |B(g_1,g_2)| ≤ ‖b_1‖_{𝒳̇_1}‖b_2‖_{𝒴_1} + ‖b_1‖_{𝒳̇_1}‖g_2‖_{𝒴_1} + ‖g_1‖_{𝒳̇_1}‖b_2‖_{𝒴_1} + A_2 ‖g_1‖_{𝒴_2}‖g_2‖_{𝒴_2} ≲ λ^{2−2p} + 2λ^{1−p} + A_2 λ^{2−p} ≲ λ^{2−2p}(1 + A_2 λ^p), which yields (<ref>) with the choice λ = A_2^{−1/p}.
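For convenience, we record the one-line computation behind the final choice of λ; it uses nothing beyond the display above. With λ = A_2^{−1/p} one has A_2 λ^p = 1 and, since 2/p = 2−ε, λ^{2−2p} = A_2^{(2p−2)/p} = A_2^{2−2/p} = A_2^{ε}; hence λ^{2−2p}(1 + A_2 λ^p) = 2 A_2^{ε}, matching the right hand side of (<ref>). Note also that λ ≥ 1 precisely because of the normalization A_2 < A_1 = 1.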
§ A SPARSE DOMINATION PRINCIPLE FOR MAXIMAL TRUNCATIONS We consider families of functions [K] = {K_s : s∈ℤ} satisfying supp K_s ⊂ {(x,y) ∈ ℝ^d × ℝ^d : |x−y| < 2^s}, ‖[K]‖ := sup_{s∈ℤ} 2^{sd} sup_{x∈ℝ^d} ( ‖K_s(x,·)‖_∞ + ‖K_s(·,x)‖_∞ ) < ∞, and associate to them the linear operators T[K]f(x,t_1,t_2) := ∑_{t_1<s≤t_2} ∫_{ℝ^d} K_s(x,y) f(y) dy, x ∈ ℝ^d, t_1,t_2 ∈ ℤ, and their sublinear maximal versions T_⋆{}_{t_1}^{t_2}[K]f(x) := sup_{t_1≤τ_1≤τ_2≤t_2} |T[K]f(x,τ_1,τ_2)|, T_⋆[K]f(x) = sup_{t_1≤t_2} |T[K]f(x,t_1,t_2)|. We assume that there exists 1<r<∞ such that ‖[K]‖_{r,⋆} := ‖T_⋆[K]‖_{L^r(ℝ^d)} < ∞. For pairs of bounded measurable functions t_1,t_2 : ℝ^d → ℤ, we also consider the linear operators T[K]_{t_1}^{t_2} f(x) := T[K]f(x, t_1(x), t_2(x)), x ∈ ℝ^d. From the definition (<ref>), it follows that t_1, t_2 ∈ ℤ, t_1 ≥ t_2 ⟹ T[K]f(x,t_1,t_2) = 0. In consequence, for the linearized versions defined in (<ref>) we have supp T[K]_{t_1}^{t_2} f ⊂ {x ∈ ℝ^d : t_2(x) − t_1(x) > 0}. A related word on notation: we will be using linearizations of the type T[K]_{t_1}^{s_Q} and similar, where s_Q is the (dyadic) scale of a (dyadic) cube Q. By this we mean that we are using the constant function equal to s_Q as our upper truncation function. Finally, we will be using the notations t_2 ∧ s_Q for the linearizing function x ↦ min{t_2(x), s_Q} and t_1 ∨ s_L for the linearizing function x ↦ max{t_1(x), s_L}. Given two bounded measurable functions t_1, t_2 and a stopping collection 𝒬 with top Q, we define the localized truncated bilinear forms Λ_𝒬[K]_{t_1}^{t_2}(f_1,f_2) := (1/|Q|) [ ⟨T[K]_{t_1}^{t_2∧s_Q}(f_1 1_Q), f_2⟩ − ∑_{L∈𝒬, L⊂Q} ⟨T[K]_{t_1}^{t_2∧s_L}(f_1 1_L), f_2⟩ ]. Note that we have normalized by the measure of Q, unlike the definitions in <cit.>. Observe that, as a consequence of the support assumptions in (<ref>) and of the largest allowed scale being s_Q, we have Λ_𝒬[K]_{t_1}^{t_2}(f_1,f_2) = Λ_𝒬[K]_{t_1}^{t_2}(f_1 1_Q, f_2 1_{3Q}). Similarly we remark that T[K]_{t_1}^{t_2∧s_L}(f 1_L) is supported on the set 3L ∩ {x ∈ ℝ^d : s_L − t_1(x) > 0}; see Remark <ref>.
Within the above framework, we have the following abstract theorem. Theorem. Let [K] = {K_s : s∈ℤ} be a family of functions satisfying (<ref>) and (<ref>) above. Assume that there exist 1 ≤ p_1, p_2 < ∞ such that sup_{‖b‖_{𝒳̇_{p_1}(𝒬)}=1, ‖f‖_{𝒴_{p_2}(𝒬)}=1} |Λ_𝒬[K]_{t_1}^{t_2}(b,f)| + sup_{‖f‖_{𝒴_∞(𝒬)}=1, ‖b‖_{𝒳̇_{p_2}(𝒬)}=1} |Λ_𝒬[K]_{t_1}^{t_2}(f,b)| =: C_𝖫[K](p_1,p_2) < ∞ holds uniformly over all bounded measurable functions t_1, t_2 and all stopping collections 𝒬. Then ‖T_⋆[K]‖_{(p_1,p_2),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ ‖[K]‖_{r,⋆} + C_𝖫[K](p_1,p_2). The proof follows essentially the same scheme as <cit.>; for this reason, we limit ourselves to providing an outline of the main steps. §.§.§ Step 1. Auxiliary estimate First of all, an immediate consequence of the assumptions of the theorem is that the estimate |Λ_𝒬[K]_{t_1}^{t_2}(f_1,f_2)| ≤ C Θ_{[K],p_1,p_2} ‖f_1‖_{𝒴_{p_1}(𝒬)} ‖f_2‖_{𝒴_{p_2}(𝒬)}, where Θ_{[K],p_1,p_2} := ‖[K]‖_{r,⋆} + C_𝖫[K](p_1,p_2), holds with C>0 uniform over bounded measurable functions t_1, t_2. See <cit.>. Therefore, |⟨T[K]_{t_1}^{t_2∧s_Q}(f_1 1_Q), f_2⟩| ≤ C Θ_{[K],p_1,p_2} |Q| ‖f_1‖_{𝒴_{p_1}(𝒬)} ‖f_2‖_{𝒴_{p_2}(𝒬)} + ∑_{L∈𝒬, L⊂Q} |⟨T[K]_{t_1}^{t_2∧s_L}(f_1 1_L), f_2⟩|. §.§.§ Step 2. Initialization The argument begins as follows. Fixing f_j ∈ L^{p_j}(ℝ^d), j=1,2, with compact support, we may find measurable functions t_1, t_2, bounded above and below, and a large enough dyadic cube Q_0 from one of the canonical 3^d dyadic systems, such that supp f_1 ⊂ Q_0, supp f_2 ⊂ 3Q_0 and ⟨T_⋆[K] f_1, |f_2|⟩ ≤ 2 |⟨T[K]_{t_1}^{t_2∧s_{Q_0}}(f_1 1_{Q_0}), |f_2|⟩|, and we clearly can replace f_2 by |f_2| in what follows. §.§.§ Step 3. Iterative process Then the argument proceeds via iteration over k of the following construction, which follows from (<ref>) and the Calderón–Zygmund decomposition and is initialized by taking 𝒮_k = {Q_0} for k=0. Given a disjoint collection of dyadic cubes Q ∈ 𝒮_k with the further Whitney property that (<ref>) holds for 𝒮_k in place of 𝒬, there exists a further collection of disjoint dyadic cubes L ∈ 𝒮_{k+1} such that * (<ref>) for 𝒮_{k+1} in place of 𝒬 continues to hold, * each subcollection 𝒮_{k+1}(Q) = {L ∈ 𝒮_{k+1} : L ⊂ 3Q} is a stopping collection with top Q, and for which, for all Q ∈ 𝒮_k, there holds |⟨T[K]_{t_1}^{t_2∧s_Q}(f_1 1_Q), f_2⟩| ≤ C|Q| Θ_{[K],p_1,p_2} ⟨f_1⟩_{p_1,3Q} ⟨f_2⟩_{p_2,3Q} + ∑_{L∈𝒮_{k+1}(Q), L⊂Q} |⟨T[K]_{t_1}^{t_2∧s_L}(f_1 1_L), f_2⟩|. More precisely, 𝒮_{k+1} is composed of the maximal dyadic cubes L such that 9L ⊂ ⋃_{Q∈𝒮_k} E_Q, E_Q := { x ∈ 3Q : max_{j=1,2} M_{p_j}(f_j 1_{3Q})(x) / ⟨f_j⟩_{p_j,3Q} > C }, for a suitably chosen large absolute dimensional constant C. This construction, as well as the Whitney property (<ref>), results in |Q ∩ ⋃_{L∈𝒮_{k+1}} L| = |Q ∩ ⋃_{Q'∈𝒮_k : Q'∈N(Q)} E_{Q'}| ≤ (1/2)|Q| for all Q ∈ 𝒮_k, k = 0,1,…, guaranteeing that 𝒯_k := ∪_{κ=0}^k 𝒮_κ is a sparse collection for all k. When k = k̄ is such that inf{s_Q : Q ∈ 𝒮_{k̄}} < inf t_1, the iteration stops and the estimate ⟨T_⋆[K] f_1, f_2⟩ ≲ Θ_{[K],p_1,p_2} ∑_{Q∈𝒯_{k̄}} |Q| ⟨f_1⟩_{p_1,3Q} ⟨f_2⟩_{p_2,3Q} is reached. This completes the proof of Theorem <ref>. § PRELIMINARY LOCALIZED ESTIMATES FOR THE TRUNCATED FORMS (<REF>) We begin by introducing our notation for the Dini constant of a family of kernels [K] as in (<ref>). We write ‖[K]‖_{𝖣𝗂𝗇𝗂} := ‖[K]‖ + ∑_{j=0}^∞ ϖ_j([K]), where ϖ_j([K]) := sup_{s∈ℤ} 2^{sd} sup_{x∈ℝ^d} sup_{h∈ℝ^d, |h|<2^{s−j−3}} ( ‖K_s(x, x+·) − K_s(x+h, x+·)‖_∞ + ‖K_s(x+·, x) − K_s(x+·, x+h)‖_∞ ). The estimates contained within the lemmata that follow are meant to be uniform over all measurable functions t_1, t_2 and all stopping collections 𝒬. The first one is an immediate consequence of the definitions; for a full proof, see <cit.>. Lemma. Let 1<r<∞. Then |Λ_𝒬[K]_{t_1}^{t_2}(f_1,f_2)| ≲ ‖[K]‖_{r,⋆} ‖f_1‖_{𝒴_r(𝒬)} ‖f_2‖_{𝒴_{r'}(𝒬)}. The second one is a variant of <cit.>; we provide a full proof.
Lemma. There holds |Λ_𝒬[K]_{t_1}^{t_2}(b,f)| ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b‖_{𝒳̇_1(𝒬)} ‖f‖_{𝒴_1(𝒬)}. Proof. We consider the family [K] fixed and use the simplified notation Λ_𝒬{}_{t_1}^{t_2} in place of Λ_𝒬[K]_{t_1}^{t_2}, and similarly for the truncated operators T[K]. By horizontal rescaling we can assume |Q| = 1. Let b ∈ 𝒳̇_1. Recalling the definition (<ref>) and using the bilinearity of Λ_𝒬{}_{t_1}^{t_2}, it suffices for each stopping cube R ∈ 𝒬 to prove that |Λ_𝒬{}_{t_1}^{t_2}(b_R, f)| ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b_R‖_1 ‖f‖_{𝒴_1}, as ‖b_R‖_1 ≲ |R| ‖b‖_{𝒳̇_1}, and to conclude by summing up over the disjoint R ∈ 𝒬, whose union is contained in 3Q. We may further assume R ⊂ Q; otherwise Λ_𝒬{}_{t_1}^{t_2}(b_R, f) = 0. In addition we can assume f is positive, by repeating the same argument below with the real and imaginary, and positive and negative, parts of f. Using the definition of the truncated forms (<ref>) and the disjointness of the L ∈ 𝒬, |Λ_𝒬{}_{t_1}^{t_2}(b_R, f)| = |⟨T_{t_1}^{t_2∧s_Q}(b_R) − T_{t_1}^{t_2∧s_R}(b_R), f⟩| = |⟨T_{t_1∨s_R}^{t_2∧s_Q}(b_R), f⟩| ≤ ⟨T_⋆{}_{s_R}^{s_Q} b_R, f⟩. Thus, if R_s denotes the cube concentric to R whose sidelength is 2^{10+s}, using the support conditions and abbreviating a standard calculation, |Λ_𝒬{}_{t_1}^{t_2}(b_R, f)| ≤ ⟨T_⋆{}_{s_R}^{s_Q} b_R, f⟩ ≤ ∑_{s=s_R+1}^{s_Q} ∫_{R_s} |∫_R K_s(x,y) b_R(y) dy| f(x) dx ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b_R‖_{𝒳̇_1} sup_j ⟨f⟩_{1,R_j}, which is bounded by the right hand side of (<ref>). The third localized estimate is new. However, its roots lie in the well-known principle that the maximal truncations of a Dini-continuous kernel to scales larger than s do not oscillate too much on a ball of radius 2^s; see (<ref>). This was recently employed, for instance, in <cit.>. Lemma. There holds |Λ_𝒬[K]_{t_1}^{t_2}(f,b)| ≲ ( ‖[K]‖_{𝖣𝗂𝗇𝗂} ∨ ‖[K]‖_{r,⋆} ) ‖f‖_{𝒴_∞(𝒬)} ‖b‖_{𝒳_1(𝒬)}. Proof. We use similar notation as in the previous proof; again we rescale to |Q| = 1 and work with positive b ∈ 𝒳_1. We can of course assume that supp f ⊂ Q. We begin by removing an error term; namely, referring to notation (<ref>), if b_𝗈 = ∑_{R∉𝖼𝒬} b_R, then |Λ_𝒬{}_{t_1}^{t_2}(f, b_𝗈)| ≤ ⟨|Tf(·, s_Q−1, s_Q)|, b_𝗈⟩ ≲ ‖[K]‖ ‖b‖_{𝒳_1} ‖f‖_{𝒴_∞}. The first inequality holds because dist(supp f, supp b_𝗈) > 2^{s_Q−1}, so at most the s_Q scale may contribute, and in particular no contribution comes from cubes L ⊊ Q. The second inequality is a trivial estimate; see <cit.> for more details. Thus we may assume b_R = 0 whenever R ∉ 𝖼𝒬. We begin the main argument by fixing R ∈ 𝖼𝒬. Then, by support considerations, ⟨T_{t_1}^{t_2∧s_L}(f 1_L), b_R⟩ ≠ 0 ⟹ L ∈ N(R). Similarly, ⟨T_{t_1}^{t_2∧s_R} f, b_R⟩ = ⟨T_{t_1}^{t_2∧s_R}(f 1_{𝗌𝗁𝒬}), b_R⟩ = ∑_{L∈N(R), L⊂Q} ⟨T_{t_1}^{t_2∧s_R}(f 1_L), b_R⟩. In fact, using (<ref>) we learn that dist(R, ℝ^d ∖ 𝗌𝗁𝒬) > 2^{s_R}, whence the first equality. Therefore, subtracting and adding the last display to obtain the second equality, Λ_𝒬{}_{t_1}^{t_2}(f, b_R) = ⟨T_{t_1}^{t_2∧s_Q} f, b_R⟩ − ∑_{L∈N(R), L⊂Q} ⟨T_{t_1}^{t_2∧s_L}(f 1_L), b_R⟩ = ⟨T_{t_1∨s_R}^{t_2∧s_Q} f, b_R⟩ − ∑_{L∈N(R), L⊂Q} sign(s_L − s_R) ⟨T_{t_1∨(s_L∧s_R)}^{t_2∧(s_L∨s_R)}(f 1_L), b_R⟩. Now, the summation in the above display is bounded in absolute value by ∑_{L∈N(R)} ⟨T_⋆{}_{s_L∧s_R}^{s_L∨s_R}(f 1_L), b_R⟩ ≲ ‖[K]‖ ∑_{L∈N(R)} ‖b_R‖_1 ⟨f⟩_{1,L} ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b_R‖_1 ‖f‖_{𝒴_∞}, using that |s_L − s_R| ≤ 8 whenever L ∈ N(R). Therefore, when R ∈ 𝖼𝒬, |Λ_𝒬{}_{t_1}^{t_2}(f, b_R)| ≤ |⟨T_{t_1∨s_R}^{t_2∧s_Q} f, b_R⟩| + C ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b_R‖_1 ‖f‖_{𝒴_∞} with absolute constant C. Now define the function F(x) = sup_{s_R ≤ τ_1 ≤ τ_2 ≤ s_Q} |Tf(x, τ_1, τ_2)| for x ∈ R ∈ 𝖼𝒬, and F(x) = 0 for x ∉ ⋃_{R∈𝖼𝒬} R, and notice that |T_{t_1∨s_R}^{t_2∧s_Q} f| ≤ F on R ∈ 𝖼𝒬.
Since b is positive, using (<ref>), summing (<ref>) over R ∈ 𝖼𝒬, and using that this is a pairwise disjoint collection, we obtain that |Λ_𝒬{}_{t_1}^{t_2}(f,b)| ≤ |Λ_𝒬{}_{t_1}^{t_2}(f, b_𝗈)| + ∑_{R∈𝖼𝒬} |Λ_𝒬{}_{t_1}^{t_2}(f, b_R)| ≤ C ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b‖_{𝒳_1} ‖f‖_{𝒴_∞} + ∑_{R∈𝖼𝒬} |⟨T_{t_1∨s_R}^{t_2∧s_Q} f, b_R⟩| ≤ C ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b‖_{𝒳_1} ‖f‖_{𝒴_∞} + ∑_{R∈𝖼𝒬} ⟨F, b_R⟩ = C ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖b‖_{𝒳_1} ‖f‖_{𝒴_∞} + ⟨F, b⟩. Therefore, we are left with bounding ⟨F, b⟩. This is actually done using both the L^r estimate and the Dini cancellation condition. In fact, decompose b = g + z, g = ∑_{R∈𝖼𝒬} g_R := ∑_{R∈𝖼𝒬} ⟨b⟩_{1,R} 1_R, z = ∑_{R∈𝖼𝒬} z_R := ∑_{R∈𝖼𝒬} (b − ⟨b⟩_{1,R}) 1_R, so that ‖g‖_{𝒴_∞} ≤ ‖b‖_{𝒳_1}, ‖z‖_{𝒳̇_1} ≤ 2‖b‖_{𝒳_1}. Then ⟨F, g⟩ ≤ ⟨T_⋆ f, g⟩ ≤ ‖[K]‖_{r,⋆} ‖f‖_r ‖g‖_{r'} ≤ ‖[K]‖_{r,⋆} ‖g‖_{𝒴_∞} ‖f‖_{𝒴_∞} ≤ ‖[K]‖_{r,⋆} ‖f‖_{𝒴_∞} ‖b‖_{𝒳_1}, and we are left to control |⟨F, z⟩|. We recall from <cit.> the inequality |Tf(x,τ_1,τ_2) − Tf(ξ,τ_1,τ_2)| ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} sup_{s≥s_R} ⟨f⟩_{1,R_s}, x, ξ ∈ R, τ_2 ≥ τ_1 ≥ s_R, where R_s is the cube concentric with R and of sidelength 2^s, whence for a suitable absolute constant C, F(x) ≤ F(ξ) + C ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖f‖_{𝒴_1}, x, ξ ∈ R, and, taking averages, there holds sup_{x∈R} |F(x) − ⟨F⟩_{1,R}| ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖f‖_{𝒴_1}. Finally, using the above display and the fact that each z_R has zero average and is supported on R, |⟨F, z⟩| ≤ ∑_{R∈𝖼𝒬} |⟨F, z_R⟩| = ∑_{R∈𝖼𝒬} |⟨(F − ⟨F⟩_{1,R}) 1_R, z_R⟩| ≲ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖f‖_{𝒴_1} ∑_{R∈𝖼𝒬} ‖z_R‖_1 ≤ ‖[K]‖_{𝖣𝗂𝗇𝗂} ‖f‖_{𝒴_∞} ‖b‖_{𝒳_1}, and collecting (<ref>), (<ref>) and (<ref>) completes the proof of the lemma. By Proposition <ref> applied to the forms Λ_𝒬{}_{t_1}^{t_2}[K], we may interpolate the bound of Lemma <ref> with the one of Lemma <ref> with r = 2. A similar but easier procedure allows us to interpolate Lemma <ref> with Lemma <ref> with r = 2. We summarize the result of such interpolations in the following lemma. Lemma. For 0 ≤ ε ≤ 1 and p = 2/(2−ε) there holds C_𝖫[K](p,p) ≲ ( ‖[K]‖_{𝖣𝗂𝗇𝗂} ∨ ‖[K]‖_{2,⋆} )^{1−ε} ( ‖[K]‖_{2,⋆} )^{ε}, where C_𝖫[K](p_1,p_2) is defined in (<ref>). Remark. Let T be an L^2(ℝ^d)-bounded singular integral operator with Dini-continuous kernel K. Then its maximal truncations obey the estimate T_⋆ f(x) := sup_{δ>0} | ∫_{δ<|h|<1/δ} K(x, x+h) f(x+h) dh | ≲ M f(x) + T_⋆[K] f(x), with the family [K] := {K_s : s∈ℤ} defined by K_s(x, x+h) := K(x, x+h) ψ(2^{−s} h), x, h ∈ ℝ^d, where the smooth radial function ψ satisfies supp ψ ⊂ {h∈ℝ^d : 2^{−2} < |h| < 1}, ∑_{s∈ℤ} ψ(2^{−s} h) = 1, h ≠ 0. We know from classical theory <cit.> that ‖[K]‖_{2,⋆} ≲ ‖T‖_{L^2(ℝ^d)} + ‖K‖_{𝖣𝗂𝗇𝗂}. Therefore, in consequence of (<ref>) and of the bound ‖M‖_{(1,1),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ 1, an application of Theorem <ref> in conjunction with Lemmata <ref> and <ref> yields that ‖T_⋆‖_{(1,1),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ ‖T‖_{L^2(ℝ^d)} + ‖K‖_{𝖣𝗂𝗇𝗂}. This is a well-known result. The dual pointwise version was first obtained in this form in <cit.>, quantifying the initial result of Lacey <cit.>; see also <cit.>. An extension to multilinear operators with less regular kernels was recently obtained in <cit.>.
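A concrete classical instance of the above remark, included purely as an illustration and not taken from the argument itself: for the Hilbert transform one takes d = 1 and K(x, x+h) = 1/h, so that K_s(x, x+h) = ψ(2^{−s}h)/h. Then 2^s ‖K_s‖_∞ ≲ 1, since |h| > 2^{s−2} on supp ψ(2^{−s}·), and by smoothness of ψ one has ϖ_j([K]) ≲ 2^{−j}, so that ‖[K]‖_{𝖣𝗂𝗇𝗂} < ∞ and (<ref>) recovers a (1,1) sparse bound for the maximal truncations of the Hilbert transform.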
§ PROOF OF THEOREM <REF> In this section, we will prove Theorem <ref> by appealing to Theorem <ref> for the family [K] = {(x,y) ↦ K_s(x−y) : s∈ℤ} of (<ref>). First of all, we notice that the assumption (<ref>) is a direct consequence of (<ref>). It is known from e.g. <cit.> (and our work below actually reproves this) that, with reference to (<ref>), ‖T_⋆[K]‖_{L^2(ℝ^d)} ≲ 1, which is assumption (<ref>) with r = 2. Therefore, for an application of Theorem <ref> with p_1 = p_2 = p = 2/(2−ε), 0<ε<1, we are left with verifying that the corresponding stopping estimates (<ref>) hold with C_𝖫[K](p,p) ≲ ε^{−1}. We do so by means of a Littlewood–Paley decomposition, as follows. Let φ be a smooth radial function on ℝ^d with support in a sufficiently small ball containing the origin, having mean zero and such that ∑_{k=−∞}^∞ φ̂_k(ξ) = 1 for all ξ ≠ 0, φ_k(·) := 2^{−kd} φ(2^{−k}·). Also define ϕ_k(·) := ∑_{ℓ≥k} φ_ℓ(·), K_{s,0} := K_s * ϕ_s, K_{s,j} := ∑_{ℓ=Δ(j−1)+1}^{Δj} K_s * φ_{s−ℓ}, j ≥ 1, for some large integer Δ which will be specified during the proof. Unless otherwise specified, the implied constants appearing below are independent of Δ but may depend on α>0 from (<ref>) and on the dimension. Note that the K_{s,j} are supported in {|x| < 2^s}. Define now, for all j ≥ 0, [K^j] = {(x,y) ↦ K_{s,j}(x−y) : s∈ℤ}, and note that, with unconditional convergence, K_s(y) = ∑_{j=0}^∞ K_{s,j}(y), y ∈ ℝ^d. The following computation is carried out in <cit.>. Lemma. There holds ϖ_ℓ([K^j]) ≲ min{1, 2^{Δj−ℓ}}, and as a consequence ‖[K^j]‖_{𝖣𝗂𝗇𝗂} ≲ 1 + Δj for all j ≥ 0. It is also well known that sup_{t_1,t_2∈ℤ} ‖ f ↦ T[K^j]f(·,t_1,t_2) = ∑_{t_1<s≤t_2} K_{s,j}*f ‖_{L^2(ℝ^d)} ≲ 2^{−αΔ(j−1)}; however, we need a stronger estimate on the pointwise maximal truncations, which is implicit in <cit.>. Lemma. There holds ‖[K^0]‖_{2,⋆} ≲ 1, ‖[K^j]‖_{2,⋆} ≲ 2^{−αΔ(j−1)/2}, j ≥ 1. Proof. Let β be a smooth compactly supported function on ℝ^d normalized to have ⟨β, 1⟩ = 1, and write β_s(·) = 2^{−sd}β(2^{−s}·). By usual arguments it suffices to estimate the L^2(ℝ^d) operator norm of f ↦ sup_{t_1≤s≤t_2} |T[K^j]f(·, s, t_2)| uniformly over t_1, t_2 ∈ ℤ. We then have T[K^j]f(·, s, t_2) = β_s * ( ∑_{t_1<k≤t_2} K_k * (ϕ_{k−Δj} − ϕ_{k−Δ(j−1)}) ) * f − β_s * ( ∑_{t_1<k≤s} K_k * (ϕ_{k−Δj} − ϕ_{k−Δ(j−1)}) ) * f + (δ − β_s) * ( ∑_{s<k≤t_2} K_k * (ϕ_{k−Δj} − ϕ_{k−Δ(j−1)}) ) * f =: I_{1,s} + I_{2,s} + I_{3,s}. For I_{1,s}, by (<ref>) we have ‖sup_{t_1≤s≤t_2} |I_{1,s}|‖_2 ≲ ‖M(T[K^j]f(·, t_1, t_2))‖_2 ≲ 2^{−αΔ(j−1)} ‖f‖_2. Next we estimate the second and third contributions in (<ref>). We have, using the third assertion in (<ref>), that |∑_{t_1<k≤s} β̂(2^sξ) K̂_k(ξ) (ϕ̂(2^{k−Δj}ξ) − ϕ̂(2^{k−Δ(j−1)}ξ))| ≲ |∑_{t_1<k≤s} min{1, |2^sξ|^{−1}} · min{|2^kξ|^α, |2^kξ|^{−α}} · ∑_{ℓ=Δ(j−1)}^{Δj} min{|2^{k−ℓ}ξ|, |2^{k−ℓ}ξ|^{−1}}| ≲ 2^{−αΔ(j−1)} |2^sξ|^{−1} when |2^sξ| > 1, and ≲ 2^{−Δ(j−1)} |2^sξ| when |2^sξ| ≤ 1. A similar computation reveals |∑_{k≥s} (1 − β̂(2^sξ)) K̂_k(ξ) (ϕ̂(2^{k−Δj}ξ) − ϕ̂(2^{k−Δ(j−1)}ξ))| ≤ 2^{−αΔ(j−1)/2} min{|2^sξ|, |2^sξ|^{−α/2}}. Thus, by Plancherel, for m = 2, 3 we have ‖sup_{t_1≤s≤t_2} |I_{m,s}|‖_2 ≤ ‖(∑_{s=t_1}^{t_2} |I_{m,s}|^2)^{1/2}‖_2 ≲ 2^{−αΔ(j−1)/2} ‖f‖_2, and the proof of the lemma is completed by putting together (<ref>)–(<ref>). We are now ready to verify the assumptions (<ref>) for the truncated forms Λ_𝒬[K]_{t_1}^{t_2} associated to a family [K] satisfying the assumptions (<ref>). By virtue of Lemmata <ref> and <ref>, Lemma <ref> applied to the families [K^j] for the value Δ = ⌈2ε^{−1}α^{−1}⌉ yields that C_𝖫[K^j](p,p) ≲ ( ‖[K^j]‖_{𝖣𝗂𝗇𝗂} ∨ ‖[K^j]‖_{2,⋆} )^{1−ε} ( ‖[K^j]‖_{2,⋆} )^{ε} ≲ (1 + Δj)^{1−ε} 2^{−εαΔ(j−1)/2} ≲ ε^{−1}(1 + j) 2^{−j}. Therefore, using the linearity in the kernel family [K] of the truncated forms Λ_𝒬{}_{t_1}^{t_2}[K] and the decomposition (<ref>)–(<ref>), C_𝖫[K](p,p) ≤ ∑_{j=0}^∞ C_𝖫[K^j](p,p) ≲ ε^{−1}, which, together with the previous observations, completes the proof of Theorem <ref>.
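For the reader's convenience, we spell out the short arithmetic behind this choice of Δ; it uses only the two lemmata above. Since Δ ≥ 2ε^{−1}α^{−1}, one has εαΔ/2 ≥ 1 and hence 2^{−εαΔ(j−1)/2} ≤ 2^{−(j−1)} = 2·2^{−j}, while (1 + Δj)^{1−ε} ≤ 1 + Δj ≲ ε^{−1}α^{−1}(1 + j); as α is fixed, this gives C_𝖫[K^j](p,p) ≲ ε^{−1}(1 + j)2^{−j}, and the series ∑_{j≥0}(1 + j)2^{−j} = 4 is finite, so the sum over j costs only an absolute constant.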
§ EXTENSION TO VECTOR-VALUED FUNCTIONS In this section, we suitably extend the abstract domination principle of Theorem <ref> to (a suitably defined) 𝔽^n-valued extension, with 𝔽 ∈ {ℝ, ℂ}, of the singular integrals of Section <ref>. In fact, the ℂ^n-valued case can be recovered by suitable interpretation of the ℝ^{2n}-valued one; thus, it suffices to consider 𝔽 = ℝ. §.§ Convex body domination Let 1 ≤ p < ∞. To each f ∈ L^p_{loc}(ℝ^d; ℝ^n) and each cube Q in ℝ^d, we associate the closed convex symmetric subset of ℝ^n ⟨f⟩_{p,Q} := { (1/|Q|) ∫_Q f φ dx : φ ∈ Φ_{p'}(Q) } ⊂ ℝ^n, where we used the notation Φ_q(Q) := { φ : Q → ℝ, ⟨φ⟩_{q,Q} ≤ 1 }. It is easy to see that sup_{ξ∈⟨f⟩_{p,Q}} |ξ| ≤ ⟨|f|_{ℝ^n}⟩_{p,Q}, where ⟨·⟩_{p,Q} on the right hand side is interpreted in the usual fashion. A slightly less obvious fact that we will use below is recorded in the following simple lemma, which involves the notion of the John ellipsoid of a closed convex symmetric set K. This set, which we denote by ℰ_K, stands for the solid ellipsoid of largest volume contained in K; in particular, the John ellipsoid of K has the property that ℰ_K ⊂ K ⊂ √(n) ℰ_K, where, if A ⊂ ℝ^n and c ≥ 0, by cA we mean the set {ca : a∈A}. We also apply this notion in the degenerate case as follows: if the linear span of K is a k-dimensional subspace V of ℝ^n, we denote by ℰ_K the solid ellipsoid of largest k-dimensional volume contained in K. In this case, (<ref>) holds with √(k) in place of √(n), but then it also holds as stated, since k ≤ n and ℰ_K is also convex and symmetric. Lemma. Let f = (f_1, …, f_n) ∈ L^p_{loc}(ℝ^d; ℝ^n) and suppose that ℰ_{⟨f⟩_{p,Q}} = B_1, where B_ρ = {a ∈ ℝ^n : |a|_{ℝ^n} ≤ ρ}. Then sup_{j=1,…,n} ⟨f_j⟩_{p,Q} ≤ √(n). Proof. By definition of ⟨·⟩_{p,Q} for scalar functions, ⟨f_j⟩_{p,Q} = sup_{φ∈Φ_{p'}(Q)} (1/|Q|) ∫_Q f_j φ dx = (1/|Q|) ∫_Q f_j φ_⋆ dx for a suitable φ_⋆ ∈ Φ_{p'}(Q). Thus ⟨f_j⟩_{p,Q} is the j-th component of the vector f_{φ_⋆} = (1/|Q|) ∫_Q f φ_⋆ dx ∈ ⟨f⟩_{p,Q}. By (<ref>), and in consequence of the assumption, f_{φ_⋆} ∈ B_{√(n)}, which proves the assertion. We now define the sparse (p_1,p_2) norm of a linear operator T, mapping the space L^∞_0(ℝ^d; ℝ^n) into locally integrable ℝ^n-valued functions, as the least constant C>0 such that for each pair f_1, f_2 ∈ L^∞_0(ℝ^d; ℝ^n) we may find a 1/2-sparse collection 𝒮 such that |⟨Tf_1, f_2⟩| ≤ C ∑_{Q∈𝒮} |Q| ⟨f_1⟩_{p_1,Q} ⟨f_2⟩_{p_2,Q}. We interpret the rightmost product in the above display as the right endpoint of the Minkowski product A·B = { ⟨a,b⟩_{ℝ^n} : a∈A, b∈B } of the closed convex symmetric sets A, B ⊂ ℝ^n, which is a closed symmetric interval. We use the same familiar notation ‖T‖_{(p_1,p_2),𝗌𝗉𝖺𝗋𝗌𝖾} for such a norm. Within such a framework, we have the following extension of Theorem <ref>. Theorem. Let [K] = {K_s : s∈ℤ} be a family of real-valued functions satisfying (<ref>), (<ref>), and (<ref>) for some 1<r<∞, 1≤p_1,p_2<∞. Then the ℝ^n-valued extension of the linearized truncations T[K]_{t_1}^{t_2} defined in (<ref>) admits a (p_1,p_2) sparse bound, namely ‖T[K]_{t_1}^{t_2} ⊗ Id_{ℝ^n}‖_{(p_1,p_2),𝗌𝗉𝖺𝗋𝗌𝖾} ≲ ‖[K]‖_{r,⋆} + C_𝖫[K](p_1,p_2), with implicit constant possibly depending on r, p_1, p_2 and the dimensions d, n only, and in particular uniform over bounded measurable truncation functions t_1, t_2. Remark. The objects (<ref>) for p = 1 were introduced in this context by Nazarov, Petermichl, Treil and Volberg <cit.>, where sparse domination of vector valued singular integrals by the Minkowski sum of convex bodies (<ref>) is employed towards matrix-weighted norm inequalities. In <cit.>, a similar result, but in the dual form (<ref>) with p_1 = p_2 = 1, is proved for dyadic shifts via a different iterative technique which is a basic version of the proof of Theorem <ref>. Subsequent developments in vector valued sparse domination include the sharp estimate for the dyadic square function <cit.>. The usage of exponents p>1 in (<ref>), necessary to effectively tackle rough singular integral operators, is a novelty of this paper.
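Before turning to the applications, we record a sanity check on the convex body averages, included purely as an illustration; the extremizer below is the standard one from L^p–L^{p'} duality and is not taken from the argument above. When n = 1 and p > 1, choosing φ = sign(f)(|f|/⟨|f|⟩_{p,Q})^{p−1} gives ⟨|φ|^{p'}⟩_{Q} = 1, since (p−1)p' = p, and hence ⟨f⟩_{p,Q} = [−⟨|f|⟩_{p,Q}, ⟨|f|⟩_{p,Q}]; for p = 1 take φ = sign(f). Thus the convex-body sparse norm (<ref>) reduces, in the scalar case, to the familiar sparse norm used in Theorem <ref>.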
§.§ Matrix-weighted norm inequalities We now detail an application of Theorem <ref> to matrix-weighted norm inequalities for maximally truncated, rough singular integrals. In particular, Corollary <ref> from the introduction is a particular case of Theorem <ref> below. The classes of weights we are concerned with are the following. A pair of matrix-valued weights W, V ∈ L^1_{loc}(ℝ^d; ℒ(ℝ^n)) is said to satisfy the (joint) matrix A_2 condition if [W,V]_{A_2} := sup_Q ‖ √(⟨W⟩_Q) √(⟨V⟩_Q) ‖_{ℒ(ℝ^n)}^2 < ∞, the supremum being taken over all cubes Q ⊂ ℝ^d, where ⟨W⟩_Q := (1/|Q|) ∫_Q W(x) dx ∈ ℒ(ℝ^n). We simply write [W]_{A_2} := [W, W^{−1}]_{A_2}. We further introduce a directional matrix A_∞ condition, namely [W]_{A_∞} := sup_{ξ∈S^{n−1}} [⟨Wξ, ξ⟩_{ℝ^n}]_{A_∞} ≤ sup_{ξ∈S^{n−1}} [⟨Wξ, ξ⟩_{ℝ^n}]_{A_2} ≤ [W]_{A_2}, where the second inequality is the content of <cit.>. Theorem. Let W, V ∈ L^1_{loc}(ℝ^d; ℒ(ℝ^n)) be a pair of matrix weights, and T_{Ω,δ} be defined by (<ref>), with in particular ‖Ω‖_∞ ≤ 1. Then sup_{δ>0} ‖ |W^{1/2} T_{Ω,δ}(V^{1/2} f)|_{ℝ^n} ‖_{L^2(ℝ^d)} ≲ max{[W]_{A_∞}, [V]_{A_∞}} √([W,V]_{A_2} [W]_{A_∞} [V]_{A_∞}) ‖f‖_{L^2(ℝ^d; ℝ^n)}. We now explain how an application of Theorem <ref> reduces Theorem <ref> to a weighted square function-type estimate for convex-body valued sparse operators. First of all, fix f of unit norm in L^2(ℝ^d; ℝ^n). We may then find g of unit norm in L^2(ℝ^d; ℝ^n) and bounded measurable functions t_1, t_2 such that the left hand side of (<ref>) is bounded by twice the sum of |⟨ (T[K]_{t_1}^{t_2} ⊗ Id_{ℝ^n})(V^{1/2} f), W^{1/2} g ⟩| and ‖ x ↦ sup_{Q∋x} ⟨|W^{1/2}(x) V^{1/2} f|_{ℝ^n}⟩_Q ‖_{L^2(ℝ^d)} ≤ ‖ x ↦ sup_{Q∋x} |W^{1/2}(x) ⟨V⟩_Q^{1/2}|_{ℒ(ℝ^n)} ⟨|⟨V⟩_Q^{−1/2} V^{1/2} f|_{ℝ^n}⟩_Q ‖_{L^2(ℝ^d)}, where [K] is a suitable decomposition satisfying (<ref>). The latter expression is (the norm of) a two-weight version of the matrix weighted maximal function of Christ and Goldberg <cit.>. In the one weight case, when V = W^{−1} ∈ A_2, its boundedness has been proved in <cit.> and quantified in <cit.>, which contains the explicit bound c_{d,n}[W]_{A_2}‖f‖_{L^2(ℝ^d;ℝ^n)} and the implicit improvement c_{d,n}[W]_{A_2}^{1/2}[W^{−1}]_{A_∞}^{1/2}‖f‖_{L^2(ℝ^d;ℝ^n)}, where c_{d,n} is a dimensional constant. A straightforward modification of the same argument, using the splitting on the right of the previous display, gives the bound [W,V]_{A_2}^{1/2}[V]_{A_∞}^{1/2}‖f‖_{L^2(ℝ^d;ℝ^n)} in the two weight case. Roughly speaking, the first factor is controlled by the two weight A_2 condition and the second one by the A_∞ property of V. By virtue of the localized estimate (<ref>) for [K], an application of Theorem <ref> tells us that (<ref>) is bounded by C/ε times a sparse sublinear form as in (<ref>) with p_1 = p_2 = 1+ε, f_1 = V^{1/2} f and f_2 = W^{1/2} g, for all ε>0. Finally, we gather that sup_{δ>0} ‖ |W^{1/2} T_{Ω,δ}(V^{1/2} f)|_{ℝ^n} ‖_2 ≲ √([V,W]_{A_2} max{[W]_{A_∞}, [V]_{A_∞}}) + inf_{ε>0} sup_𝒮 (1/ε) ∑_{Q∈𝒮} |Q| ⟨V^{1/2} f⟩_{1+ε,Q} ⟨W^{1/2} g⟩_{1+ε,Q}, where the supremum is taken over 1/2-sparse collections 𝒮, and the proof of Theorem <ref> is completed by the following proposition. Proposition. The estimate inf_{ε>0} (1/ε) ∑_{Q∈𝒮} |Q| ⟨V^{1/2} f⟩_{1+ε,Q} ⟨W^{1/2} g⟩_{1+ε,Q} ≲ max{[W]_{A_∞}, [V]_{A_∞}} √([W,V]_{A_2} [W]_{A_∞} [V]_{A_∞}) holds uniformly over all f, g of unit norm in L^2(ℝ^d; ℝ^n) and all 1/2-sparse collections 𝒮. Proof. There is no loss of generality in assuming that the sparse collection 𝒮 is a subset of a standard dyadic grid in ℝ^d, and we do so. Fix ε>0. By standard reductions, we have that sup_{‖f_j‖_{L^2(ℝ^d;ℝ^n)}=1} ∑_{Q∈𝒮} |Q| ⟨V^{1/2} f_1⟩_{1+ε,Q} ⟨W^{1/2} f_2⟩_{1+ε,Q} ≲ √([W,V]_{A_2}) sup_{‖f_j‖_{L^2(ℝ^d;ℝ^n)}=1} ‖S_{V,ε} f_1‖_2 ‖S_{W,ε} f_2‖_2, having defined the square function S_{W,ε} f^2 = ∑_{Q∈𝒮} ⟨ |⟨W⟩_Q^{−1/2} W^{1/2}|_{ℒ(ℝ^n)} |f|_{ℝ^n} ⟩_{1+ε,Q}^2 1_Q. Now, if ε < 2^{−10} t_W, where t_W := (2^{d+n+11} + [W]_{A_∞})^{−1}, p := 1 + 2t_W/((1+ε)(1+t_W)) ∈ (1,2), then, as a result of the sharp reverse Hölder inequality and of the Carleson embedding theorem, there holds ‖S_{W,ε} f‖_2^2 ≲ ∑_{Q∈𝒮} |Q| ⟨|f|_{ℝ^n}⟩_{2/p,Q}^2 ≲ (p')^p ‖|f|_{ℝ^n}‖_2^2 ≲ [W]_{A_∞} ‖|f|_{ℝ^n}‖_2^2, cf. <cit.>, and a similar argument applies to S_{V,ε}.
Therefore, for ε < 2^{−10} min{t_W, t_V}, (<ref>) turns into sup_{‖f_j‖_{L^2(ℝ^d;ℝ^n)}=1} ∑_{Q∈𝒮} |Q| ⟨V^{1/2} f_1⟩_{1+ε,Q} ⟨W^{1/2} f_2⟩_{1+ε,Q} ≲ √([W,V]_{A_2} [W]_{A_∞} [V]_{A_∞}), which in turn proves Proposition <ref>. Remark. We may derive a slightly stronger weighted estimate than (<ref>) for the non-maximally truncated rough integrals T_{Ω,δ} by applying Theorem <ref> in conjunction with the (1, 1+ε) localized estimates proved in <cit.>. Namely, the estimate ‖W^{1/2} T_{Ω,δ}(V^{1/2} f)‖_{L^2(ℝ^d;ℝ^n)} ≲ sup_{‖g‖_{L^2(ℝ^d;ℝ^n)}=1} inf_{ε>0} sup_𝒮 (1/ε) ∑_{Q∈𝒮} |Q| ⟨V^{1/2} f⟩_{1,Q} ⟨W^{1/2} g⟩_{1+ε,Q} holds uniformly in δ>0 for all f of unit norm in L^2(ℝ^d; ℝ^n). Repeating the proof of Proposition <ref> then yields the slightly improved weighted estimate sup_{δ>0} ‖W^{1/2} T_{Ω,δ}(V^{1/2} f)‖_{L^2(ℝ^d;ℝ^n)} ≲ min{[W]_{A_∞}, [V]_{A_∞}} √([W,V]_{A_2} [W]_{A_∞} [V]_{A_∞}) ‖f‖_{L^2(ℝ^d;ℝ^n)}. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is formally identical to the argument for the scalar valued case, provided that estimate (<ref>) and the definition of E_Q given in (<ref>) are replaced by suitable vector valued versions. We begin with the second tool; the proof, which is a minor variation on <cit.>, is given below. Lemma. Let 0 < η ≤ 1, Q be a dyadic cube and f_j ∈ L^{p_j}(ℝ^d; ℝ^n), j = 1, 2. Then the set E_Q := ⋃_{j=1}^2 { x ∈ 3Q : η ⟨f_j 1_{3Q}⟩_{p_j,L} ⊄ ⟨f_j⟩_{p_j,3Q} for some cube L ⊂ ℝ^d with x ∈ L } satisfies |E_Q| ≤ C η^{min{p_1,p_2}} |Q| for some absolute dimensional constant C. Proof. We may assume supp f_j ⊂ 3Q. It is certainly enough to estimate the measure of each j ∈ {1,2} component of E_Q by Cη^{p_j}|Q|, and we do so: we fix j and are thus free to write f_j = f, p_j = p. Let ℒ_f = { L ⊂ ℝ^d : η ⟨f⟩_{p,L} ⊄ ⟨f⟩_{p,3Q} }. By usual covering arguments it suffices to show that if L_1, …, L_m ∈ ℒ_f are disjoint then ∑_{μ=1}^m |L_μ| ≤ C η^p |Q|. Fix such a disjoint collection L_1, …, L_m. Notice that if A ∈ GL(ℝ^n) then ℒ_{Af} = ℒ_f. By the action of GL(ℝ^n) we may thus reduce to the case where ℰ_{⟨f⟩_{p,3Q}} = B_1, and in particular B_1 ⊂ ⟨f⟩_{p,3Q} ⊂ B_{√(n)}. By membership of each L_μ ∈ ℒ_f, we know that η⟨f⟩_{p,L_μ} ⊄ B_1. A fortiori, there exist φ_μ ∈ Φ_{p'}(L_μ) and a coordinate index ℓ_μ ∈ {1,…,n} such that η(F_μ)_{ℓ_μ} > 1/√(n), F_μ := ∫_{L_μ} f φ_μ dx / |L_μ|. Let M_ℓ = {μ ∈ {1,…,m} : ℓ_μ = ℓ}. As {1,…,m} = ∪{M_ℓ : ℓ = 1,…,n}, it suffices to show that (1/|3Q|) ∑_{μ∈M_ℓ} |L_μ| =: δ < C η^p. Using the membership φ_μ ∈ Φ_{p'}(L_μ) for the first inequality and the disjointness of the supports for the second equality, 1 ≥ (1/δ) ∑_{μ∈M_ℓ} ∫_{L_μ∩3Q} |φ_μ|^{p'} dx / |3Q| = ∫_{3Q} |φ|^{p'} dx / |3Q|, φ := δ^{−1/p'} ∑_{μ∈M_ℓ} φ_μ 1_{3Q}, so that φ ∈ Φ_{p'}(3Q). In particular, beginning with the right inclusion in (<ref>) and using (<ref>) in the last inequality, √(n) ≥ ∫_{3Q} (fφ)_ℓ dx / |3Q| = δ^{−1/p'} ∑_{μ∈M_ℓ} (|L_μ|/|3Q|) (F_μ)_ℓ > (1/(η√(n))) δ^{1/p}, which upon rearranging yields (<ref>) with C = n^p, thus completing the proof. At this point, let 𝒮_k be a collection of pairwise disjoint cubes as in Step 3 of the proof of Theorem <ref>. The elements of the collection 𝒮_{k+1} are defined to be the maximal dyadic cubes L such that the same condition as in (<ref>) holds, provided the definition of E_Q therein is replaced with the one in (<ref>). By virtue of Lemma <ref>, (<ref>) still holds provided η is chosen small enough. And we still obtain that 𝒮_{k+1}(Q) = {L ∈ 𝒮_{k+1} : L ⊂ 3Q} is a stopping collection. By the definition of 𝒮_{k+1}, it must be that ⟨f_j 1_{3Q}⟩_{p_j,K} ⊂ C⟨f_j⟩_{p_j,3Q} whenever the (not necessarily dyadic) cube K is such that a moderate dilate CK of K contains 2^5 L for some L ∈ 𝒮_{k+1}(Q). Fix Q for a moment and let A_j = (A_j^{mμ} : 1 ≤ m, μ ≤ n) ∈ GL(ℝ^n), j = 1, 2, be chosen such that the John ellipsoid of ⟨f_j⟩_{p_j,3Q} is B_1, or its intersection with a lower dimensional subspace in a degenerate case, and set f̃_j := A_j f_j.
It follows from (<ref>) that if 2^5 L ⊂ CK, then ⟨f̃_j 1_{3Q}⟩_{p_j,K} ⊂ B_C. This fact, together with Lemma <ref>, readily yields the estimates ‖f̃_j‖_{𝒴_{p_j}(𝒬)} ≲ 1, j = 1, 2. We are ready to obtain a substitute for (<ref>). In fact, |⟨ (T[K]_{t_1}^{t_2∧s_Q} ⊗ Id_{ℝ^n})(f_1 1_Q), f_2 ⟩| ≤ |Q| |∑_{m=1}^n Λ_𝒬[K]_{t_1}^{t_2}(f_{1m}, f_{2m})| + ∑_{L∈𝒮_{k+1}(Q), L⊂Q} |⟨ (T[K]_{t_1}^{t_2∧s_L} ⊗ Id_{ℝ^n})(f_1 1_L), f_2 ⟩|, and, by the action of GL(ℝ^n), see the proof of <cit.>, |∑_{m=1}^n Λ_𝒬[K]_{t_1}^{t_2}(f_{1m}, f_{2m})| = |∑_{m,μ_1,μ_2=1}^n A_1^{mμ_1} A_2^{mμ_2} Λ_𝒬[K]_{t_1}^{t_2}(f̃_{1μ_1}, f̃_{2μ_2})| ≲ ⟨f_1⟩_{p_1,3Q} ⟨f_2⟩_{p_2,3Q} sup_{μ_1,μ_2} |Λ_𝒬[K]_{t_1}^{t_2}(f̃_{1μ_1}, f̃_{2μ_2})| ≲ ⟨f_1⟩_{p_1,3Q} ⟨f_2⟩_{p_2,3Q}, where we also employed (<ref>) coupled with (<ref>) in the last line. Assembling the last two displays yields the claimed vector-valued version of (<ref>), and finishes the proof of Theorem <ref>. § PROOF OF THEOREM <REF> We begin with the proof of the first point. As a direct application of the main result of <cit.>, |⟨Tf, g⟩| ≤ c_{d,p} ε^{−1} [v]_{A_r}^{1/(1+ε)−1/p'} ([u]_{A_∞}^{1/p} + [v]_{A_∞}^{1/p'}) ‖f‖_{L^p(w)} ‖g‖_{L^{p'}(σ)}, where r = ((1+ε)'/p)' (p/(1+ε) − 1) + 1 = p + p(p−2)ε/(1−(p−1)ε), v = σ^{(1+ε)/(1+ε−p')} = w^{(1+ε)p'/(p'−(1+ε))}, u = w^{(1+ε)/(1+ε−p)}. By definition, [v]_{A_r}^{1/(1+ε)−1/p'} = sup_Q ( (1/|Q|) ∫_Q w^{(1+ε)p'/(p'−(1+ε))} )^{1/(1+ε)−1/p'} ( (1/|Q|) ∫_Q σ^{(1+ε)p/(p−(1+ε))} )^{(r−1)(1/(1+ε)−1/p')} = sup_Q ( (1/|Q|) ∫_Q w^{(1+ε)p'/(p'−(1+ε))} )^{(1/p)·1/(1+εp'/(p'−(1+ε)))} ( (1/|Q|) ∫_Q σ^{(1+ε)p/(p−(1+ε))} )^{(1/p')·1/(1+εp/(p−(1+ε)))}. By the sharp reverse Hölder inequality <cit.>, taking ε = (τ_d max{p, p'} max{[w]_{A_∞}, [σ]_{A_∞}})^{−1}, we can conclude ‖T‖_{L^p(w)} ≤ c_{d,p} [w]_{A_p}^{1/p} ([w]_{A_p}^{1/p'} + [σ]_{A_∞}^{1/p}) max{[σ]_{A_∞}, [w]_{A_∞}} ≤ c_{d,p} [w]_{A_p}^{2max{1, 1/(p−1)}}. Next, let us prove the Fefferman–Stein type inequality of the second point. Indeed, let A(t) = t^{pr/r̃} and B̄(t) = t^{(1/2)(p/r̃+1)}, where 1<r<p and r̃ = (pr − r − 1/2)/(pr − (r−1)). Then sup_Q ⟨w^r⟩_Q^{1/(pr)} ‖(M_r w)^{−r̃/p}‖_{B,Q}^{1/r̃} ≤ sup_Q inf_{x∈Q} (M_r w)^{1/p} ‖(M_r w)^{−r̃/p}‖_{B,Q}^{1/r̃} ≤ 1. Let v = M_r w. Now we have r̃' ∑_{Q∈𝒮} |Q| ⟨f⟩_{r̃,Q} ⟨g w^{1/p}⟩_{r̃,Q} = r̃' ∑_{Q∈𝒮} ⟨f^{r̃} v^{r̃/p} v^{−r̃/p}⟩_Q^{1/r̃} ⟨g w^{1/p}⟩_{r̃,Q} |Q| ≤ r̃' ∑_{Q∈𝒮} ‖f^{r̃} v^{r̃/p}‖_{B̄,Q}^{1/r̃} ‖v^{−r̃/p}‖_{B,Q}^{1/r̃} ‖w^{r̃/p}‖_{A,Q}^{1/r̃} ‖g^{r̃}‖_{Ā,Q}^{1/r̃} |Q| ≤ 2r̃' ∑_{Q∈𝒮} ‖f^{r̃} v^{r̃/p}‖_{B̄,Q}^{1/r̃} ‖g^{r̃}‖_{Ā,Q}^{1/r̃} |E_Q| ≤ 2r̃' ∫ M^𝒟_{B̄}(f^{r̃} v^{r̃/p})^{1/r̃} M^𝒟_{Ā(t^{r̃})}(g) ≤ 2r̃' ‖M^𝒟_{B̄}(f^{r̃} v^{r̃/p})^{1/r̃}‖_{L^p} ‖M^𝒟_{Ā(t^{r̃})}(g)‖_{L^{p'}} ≤ c_d p^2 (p')^{1/p} (r')^{1+1/p'} ‖f‖_{L^p(v)} ‖g‖_{L^{p'}}. By the sparse domination formula and duality, ‖T(f)‖_{L^p(w)} ≤ c_d p^2 (p')^{1/p} (r')^{1+1/p'} ‖f‖_{L^p(M_r w)}. Notice that the A_1–A_∞ estimate follows directly from the sharp reverse Hölder inequality, so that we may restrict to q>1. The idea is still to view the A_q condition as a bumped A_p condition (see <cit.>). Let C(t) = t^{p/(r(q−1))}. We have r' ∑_{Q∈𝒮} |Q| ⟨f⟩_{r,Q} ⟨gw⟩_{r,Q} ≤ r' ∑_{Q∈𝒮} |Q| ⟨f^r w^{r/p}⟩_{C̄,Q}^{1/r} ⟨w^{−r/p}⟩_{C,Q}^{1/r} ⟨g^{rs} w⟩_Q^{1/(rs)} ⟨w^{(r−1/s)s'}⟩_Q^{1/(rs')}. Take r = 1 + 1/(8p(p/q)'τ_d [w]_{A_∞}), s = 1 + 1/(4(p/q)'p). Then rs < 1 + 1/(2p) < p', r < 1 + (1/8)(p/q − 1) < p/q and (r − 1/s)s' < 1 + 1/(τ_d [w]_{A_∞}). Then, applying the sparse domination and the sharp reverse Hölder inequality, we obtain ‖T(f)‖_{L^p(w)} ≤ sup_{‖g‖_{L^{p'}(w)}=1} r' ∑_{Q∈𝒮} |Q| ⟨f⟩_{r,Q} ⟨gw⟩_{r,Q} ≤ sup_{‖g‖_{L^{p'}(w)}=1} c_{d,p,q} [w]_{A_∞} ∑_{Q∈𝒮} |Q| ⟨f^r w^{r/p}⟩_{C̄,Q}^{1/r} ⟨g⟩_{rs,Q}^w ⟨w⟩_Q ⟨w^{−1/(q−1)}⟩_Q^{(q−1)/p} ≤ sup_{‖g‖_{L^{p'}(w)}=1} c_{d,p,q} [w]_{A_∞} [w]_{A_q}^{1/p} ∑_{Q∈𝒮} ⟨f^r w^{r/p}⟩_{C̄,Q}^{1/r} ⟨g⟩_{rs,Q}^w w(Q)^{1/p'} |Q|^{1/p} ≤ sup_{‖g‖_{L^{p'}(w)}=1} c_{d,p,q} [w]_{A_∞} [w]_{A_q}^{1/p} ( ∑_{Q∈𝒮} ⟨f^r w^{r/p}⟩_{C̄,Q}^{p/r} |Q| )^{1/p} ( ∑_{Q∈𝒮} (⟨g⟩_{rs,Q}^w)^{p'} w(Q) )^{1/p'} ≤ c_{d,p,q} [w]_{A_∞}^{1+1/p'} [w]_{A_q}^{1/p} ‖f‖_{L^p(w)}, where in the last step we have used the Carleson embedding theorem; we omit the routine details. Finally, we prove the Coifman–Fefferman type inequality. Fix ε>0 and denote η = 1+ε. Also let r = 1 + 1/(8pη'τ_d [w]_{A_∞}), s = 1 + 1/(4η'p).
Then again rs < 1 + 1/(2p) < p', r < η and (r − 1/s)s' < 1 + 1/(τ_d [w]_{A_∞}). Applying the sparse domination formula again, we obtain ‖T(f)‖_{L^p(w)} ≤ sup_{‖g‖_{L^{p'}(w)}=1} r' ∑_{Q∈𝒮} |Q| ⟨f⟩_{r,Q} ⟨gw⟩_{r,Q} ≲ sup_{‖g‖_{L^{p'}(w)}=1} η' [w]_{A_∞} ∑_{Q∈𝒮} ⟨f⟩_{η,Q} ⟨g⟩_{rs,Q}^w w(Q) ≲ sup_{‖g‖_{L^{p'}(w)}=1} η' [w]_{A_∞}^2 ∫_{ℝ^d} M_η f · M^𝒟_{rs,w}(g) w dx ≲ η' [w]_{A_∞}^2 ‖M_η f‖_{L^p(w)}.
Source: F. Di Plinio, T. P. Hytönen, K. Li, Sparse bounds for maximal rough singular integrals via the Fourier transform, arXiv:1706.09064.
In this paper we study the behavior of finite dimensional fixed point iterations, induced by discretization of a continuous fixed point iteration defined within a Banach space setting. We show that the difference between the discrete sequence and its continuous analogue can be bounded in terms depending on the mesh size of the discretization and the contraction factor defined by the continuous iteration. Furthermore, we show that the comparison between the finite dimensional and the continuous fixed point iteration naturally paves the way towards a general a posteriori error analysis that can be used within the framework of a fully adaptive solution procedure. In order to demonstrate our approach, we use the Galerkin approximation of singularly perturbed semilinear monotone problems. Our scheme combines the fixed point iteration with an adaptive finite element discretization procedure (based on a robust a posteriori error analysis), thereby leading to a fully adaptive Fixed-Point-Galerkin scheme. Numerical experiments underline the robustness and reliability of the proposed approach. MSC (2010): 62F35, 35J61, 65N30, 65L11. § INTRODUCTION In this work we study the numerical approximation of problems given by: find u ∈ X such that F(u) = 0 in X', where F : X → X' signifies a possibly nonlinear operator. Here, X stands for a real Hilbert space, with inner product denoted by (·,·)_X and induced norm ‖x‖_X = √((x,x)_X). Furthermore, X' signifies the dual of X. §.§.§ Fixed Point Galerkin Methods As a result of the possible nonlinearity of F, for a given u^0 ∈ X, we consider the fixed point iteration (u^{n+1}, v)_X = G(u^n)(v), ∀v ∈ X, where, for a fixed t>0, the operator G is defined by G(u)(v) := (u,v)_X − t·F(u)(v), ∀v ∈ X. To guarantee the convergence of the above iteration, we suppose that F satisfies the following two assumptions, namely the Lipschitz continuity property (A1) |F(x)(v) − F(y)(v)| ≤ L‖x−y‖_X ‖v‖_X, L>0, as well as the strong monotonicity property (A2) (F(x) − F(y))(x−y) ≥ c‖x−y‖_X^2, c>0. Under these assumptions, it is well known that there exists a unique solution u ∈ X solving (<ref>) (see, e.g., <cit.>, or Section <ref> in this work). More precisely, it can be shown that the operator G defined in (<ref>) is contractive, i.e., there holds |G(x)(v) − G(y)(v)| ≤ α‖x−y‖_X ‖v‖_X, α = √(1 − (c/L)^2) ∈ (0,1), and therefore, based on Banach's fixed point theorem, the solution can be obtained by iterating (<ref>) with optimal step size t_{opt} = c/L^2 (see also <cit.>). However, in actual computations we can only solve a finite dimensional analogue of equation (<ref>). More precisely, let X_h ⊂ X be a linear finite dimensional subspace of X. We then observe |G(x_h)(v_h) − G(y_h)(v_h)| ≤ α_h ‖x_h − y_h‖_X ‖v_h‖_X, with 0 < α_h ≤ α, from where we get the existence of a unique x_h ∈ X_h such that G(x_h)(v_h) = (x_h, v_h)_X for all v_h ∈ X_h, i.e., solving F(x_h)(v_h) = 0 for all v_h ∈ X_h. For a given initial value u_h^0 ∈ X_h, this solution can be obtained by the fixed point iteration (u_h^{n+1}, v_h)_X = G(u_h^n)(v_h), ∀v_h ∈ X_h. In order to establish a possible a priori error analysis, we will focus (following the argument from <cit.> for Newton's method) on the distance between the sequence {u^n}_{n≥0}, corresponding to the iteration from (<ref>), and its discrete analogue {u_h^n}_{n≥0} defined in (<ref>).
More precisely, we will show that the discrete sequence tracks its continuous analogue with a maximal distance which can be bounded in terms depending on the mesh size parameter h>0 and the contraction factor α ∈ (0,1) defined in (<ref>). As it turns out, the above outlined approach naturally paves the way towards an a posteriori error analysis in which two different error indicators contribute to the a posteriori error bound: one caused by the nonlinearity of the problem, and one caused by its discretization. Based on these two error contributions, and following along the lines of <cit.>, we formulate an adaptive procedure. More specifically, as long as our adaptive procedure is running, we either perform a fixed point iteration or refine the current space X_h, based on the derived error indicators. In order to test such a fully adaptive procedure, we concentrate on semilinear elliptic boundary value problems with possible singular perturbations. §.§.§ Linearization schemes Let us briefly address two possible strategies for solving nonlinear problems numerically. Firstly, the nonlinear problem can be formulated within an accurate finite dimensional framework; based on a suitable iterative scheme, the resulting nonlinear finite dimensional problem is then linearized. Alternatively, a local linearization is applied first, leading to a sequence of linear problems which are subsequently discretized by suitable numerical approximation schemes. It is noteworthy that the second approach offers the application of the existing numerical analysis and computational techniques for linear problems (such as, e.g., the development of classical residual-based error bounds). The concept of approximating infinite dimensional nonlinear problems by appropriate linear discretization schemes has been studied by several authors in the recent past. For example, the approach presented in <cit.> (see also the work <cit.>) combines fixed point linearization methods and Galerkin approximations in the context of strictly monotone problems. Similarly, in <cit.>, the nonlinear PDE problems at hand are linearized by an (adaptive) Newton technique, and subsequently discretized by a linear finite element method. On a related note, the discretization of a sequence of linearized problems resulting from the local approximation of semilinear evolutionary problems has been investigated in <cit.>. In all of the works <cit.>, the key idea in obtaining fully adaptive discretization schemes is to provide a suitable interplay between the underlying linearization procedure and (adaptive) Galerkin methods; this is based on investing computational time into whichever of these two aspects is currently dominant. §.§.§ Outline The outline of this paper is as follows. In Section <ref> we study fixed point iterations within the context of general Hilbert spaces and derive an a priori and a posteriori error analysis. Subsequently, the purpose of Section <ref> is the discretization of the resulting sequence of linear problems by the finite element method and the development of an ε-robust a posteriori error analysis. The final estimate (Theorem <ref>) bounds the error in terms of the (elementwise) finite element approximation (FEM-error) and the error caused by the fixed point iteration of the original problem.
Then, in order to define a fully adaptive Fixed-Point-Galerkin scheme, we propose an interplay between the adaptive fixed point method and the adaptive finite element approach: more precisely, while the adaptive procedure is running, we either perform a fixed point iteration or refine the current finite element mesh based on the a posteriori error estimate (Section <ref>); this is carried out depending on which of the errors (FEM-error or fixed point error) is more dominant in the present iteration step. In Section <ref> we provide a numerical experiment which shows that the proposed scheme is reliable and ε-robust for reasonable choices of initial guesses. Finally, we summarize and comment on our findings in Section <ref>. § A PRIORI AND A POSTERIORI ERROR ESTIMATES First of all, and for the purpose of completeness, we recall the well known Banach fixed point theorem (see, e.g., <cit.>), which asserts that any Lipschitz continuous map Φ : X → X with Lipschitz constant α ∈ (0,1), operating on a Banach space X, possesses a unique fixed point. Furthermore, for a given initial guess x_0 ∈ X, the fixed point can be obtained through iteration of x_{n+1} = Φ(x_n), n ≥ 0. Incidentally, there holds the a priori error estimate ‖x − x_n‖_X ≤ (α^n/(1−α)) ‖x_0 − x_1‖_X. We now show that the difference u_h^n − u^n between the discrete sequence {u_h^n}_{n≥0} and its continuous analogue {u^n}_{n≥0} is bounded by the discretization error and the contraction constant α ∈ (0,1). Indeed, there holds the following result. Theorem. For given initial values u_h^0 = u^0 ∈ X_h ⊂ X we consider the iterations (u^{n+1}, v)_X = G(u^n)(v) ∀v ∈ X, and (u_h^{n+1}, v_h)_X = G(u_h^n)(v_h) ∀v_h ∈ X_h. Assume that the discretization of X is fine enough such that |G(u_h^n)(v) − (u_h^{n+1}, v)_X| ≤ η_h ‖v‖_X, ∀v ∈ X. Then there holds ‖u^{n+1} − u_h^{n+1}‖_X ≤ η_h/(1−α) for all n ∈ ℕ. Proof. We proceed along the lines of <cit.>, where the authors show a similar result for Newton's method. The argument is inductive. We are therefore given a sequence {ε_k}_{k≥0} ⊂ ℝ_{≥0} and assume that there holds ‖u^n − u_h^n‖_X ≤ ε_n. Since u^0 = u_h^0, we can choose ε_0 = 0 for n = 0. For n ≥ 0, we further introduce the sequences {ũ^{k,n}}_{k≥n} generated by the iteration (ũ^{k+1,n}, v)_X = G(ũ^{k,n})(v), ũ^{n,n} := u_h^n, v ∈ X. We note that these sequences start at the discrete points u_h^n and converge to the unique zero u^∞ of F (see Figure <ref>). Induction step: employing the triangle inequality we get ‖u^{n+1} − u_h^{n+1}‖_X ≤ ‖u^{n+1} − ũ^{n+1,n}‖_X + ‖ũ^{n+1,n} − u_h^{n+1}‖_X. The first term can be estimated using (<ref>) and the α-Lipschitz continuity of G: ‖u^{n+1} − ũ^{n+1,n}‖_X^2 = (u^{n+1} − ũ^{n+1,n}, u^{n+1} − ũ^{n+1,n})_X = (G(u^n) − G(ũ^{n,n}))(u^{n+1} − ũ^{n+1,n}) ≤ α‖u^n − ũ^{n,n}‖_X ‖u^{n+1} − ũ^{n+1,n}‖_X = α‖u^n − u_h^n‖_X ‖u^{n+1} − ũ^{n+1,n}‖_X ≤ αε_n ‖u^{n+1} − ũ^{n+1,n}‖_X, i.e., we have ‖u^{n+1} − ũ^{n+1,n}‖_X ≤ αε_n. For the second term ‖ũ^{n+1,n} − u_h^{n+1}‖_X in (<ref>) we use (<ref>) and observe: ‖ũ^{n+1,n} − u_h^{n+1}‖_X^2 = (ũ^{n+1,n}, ũ^{n+1,n} − u_h^{n+1})_X − G(u_h^n)(ũ^{n+1,n} − u_h^{n+1}) + G(u_h^n)(ũ^{n+1,n} − u_h^{n+1}) − (u_h^{n+1}, ũ^{n+1,n} − u_h^{n+1})_X ≤ G(u_h^n)(ũ^{n+1,n} − u_h^{n+1}) − (u_h^{n+1}, ũ^{n+1,n} − u_h^{n+1})_X ≤ η_h ‖ũ^{n+1,n} − u_h^{n+1}‖_X, where the first two terms cancel by the definition of ũ^{n+1,n}, and therefore ‖ũ^{n+1,n} − u_h^{n+1}‖_X ≤ η_h. Taking (<ref>) into account, we can bound (<ref>) by ‖u^{n+1} − u_h^{n+1}‖_X ≤ αε_n + η_h =: ε_{n+1}.
We now consider the fixed point iteration (see Figure <ref>) ε_{n+1} = αε_n + η_h, ε_0 = 0. Since α ∈ (0,1), we conclude that ε_n ≤ ε_∞ = η_h/(1−α) for all n ≥ 0, which completes the inductive and therefore the whole argument. Based on this result, there holds the following a priori bound. Corollary. For any initial value u^0 ∈ X_h there holds the a priori error estimate ‖u − u_h^n‖_X ≤ (1/(1−α)) (α^n ‖u^1 − u^0‖_X + η_h). Proof. By virtue of the triangle inequality and employing (<ref>) together with (<ref>) we obtain ‖u − u_h^n‖_X ≤ ‖u − u^n‖_X + ‖u^n − u_h^n‖_X ≤ (1/(1−α)) (α^n ‖u^1 − u^0‖_X + η_h). Based on the Lipschitz continuity of F we readily infer ‖F(u_h^n)‖_{X'} = ‖F(u) − F(u_h^n)‖_{X'} ≤ L‖u − u_h^n‖_X. Thus we can bound the residual F(u_h^n) as follows. Corollary. There holds ‖F(u_h^n)‖_{X'} ≤ (L/(1−α)) (α^n ‖u^1 − u^0‖_X + η_h), where ‖F(x)‖_{X'} := sup_{‖v‖_X=1} F(x)(v). §.§ A posteriori error analysis Following along the lines of <cit.>, we now want to develop an a posteriori error analysis for strictly monotone operators. In doing so, we first recall the well known theorem of Zarantonello (see <cit.>). Theorem. Suppose that F : X → X' is L-Lipschitz and strongly monotone with monotonicity constant c. Then there exists a unique zero of F. Proof. Let R(x)(y) = (x,y)_X denote the Riesz isometry R : X → X'. We show that Φ : X → X given by Φ(x) := R^{−1}(G(x)) is α-contractive and therefore possesses a unique fixed point u, which is the desired zero of the operator x ↦ R^{−1}(F(x)), i.e., of F. We have ‖Φ(x) − Φ(y)‖_X^2 = ‖x−y‖_X^2 − 2t(x−y, R^{−1}(F(x)−F(y)))_X + t^2 ‖R^{−1}(F(x)−F(y))‖_X^2. Notice that ‖R^{−1}(F(x)−F(y))‖_X = ‖F(x)−F(y)‖_{X'} ≤ L‖x−y‖_X, and (F(x)−F(y))(x−y) = (R^{−1}(F(x)−F(y)), x−y)_X ≥ c‖x−y‖_X^2. Thus we arrive at ‖Φ(x) − Φ(y)‖_X^2 ≤ f(t)·‖x−y‖_X^2, with f(t) := 1 − 2ct + (Lt)^2. Moreover, we observe that for t ∈ (0, 2c/L^2) we have √(f(t)) < 1. Note that the optimal contraction constant is α_{opt} := √(f(t_{opt})) = √(1 − c^2/L^2), with t_{opt} := c/L^2. Based on this result there holds the following a posteriori error estimate. Proposition. If F : X → X' satisfies (<ref>) and (<ref>), then, for e_h^{n+1} := u − u_h^{n+1}, there holds the a posteriori error bound ‖e_h^{n+1}‖_X ≤ (L^2/c^2)·η_h + (L/c)(1 + L/c)·‖u_h^{n+1} − u_h^n‖_X. Proof. We follow along the lines of <cit.>. Recalling the coercivity of F, and since F(u) = 0, we get c‖e_h^{n+1}‖_X^2 ≤ (F(u) − F(u_h^{n+1}))(e_h^{n+1}) = −F(u_h^{n+1})(e_h^{n+1}). Next, we recall the sequences {ũ^{k,n}}_{k≥n} given in (<ref>); i.e., we observe (ũ^{n+1,n}, e_h^{n+1})_X = G(ũ^{n,n})(e_h^{n+1}) = G(u_h^n)(e_h^{n+1}) = (u_h^n, e_h^{n+1})_X − t F(u_h^n)(e_h^{n+1}). By virtue of the Cauchy–Schwarz inequality and the Lipschitz continuity of F we get c‖e_h^{n+1}‖_X^2 ≤ −F(u_h^{n+1})(e_h^{n+1}) + t^{−1}(ũ^{n+1,n} − u_h^n, e_h^{n+1})_X + F(u_h^n)(e_h^{n+1}) = t^{−1}(ũ^{n+1,n} − u_h^{n+1}, e_h^{n+1})_X + t^{−1}(u_h^{n+1} − u_h^n, e_h^{n+1})_X + F(u_h^n)(e_h^{n+1}) − F(u_h^{n+1})(e_h^{n+1}) ≤ t^{−1}(‖ũ^{n+1,n} − u_h^{n+1}‖_X + ‖u_h^{n+1} − u_h^n‖_X)‖e_h^{n+1}‖_X + L‖u_h^{n+1} − u_h^n‖_X ‖e_h^{n+1}‖_X. Dividing by c‖e_h^{n+1}‖_X and using t := c/L^2, we obtain ‖e_h^{n+1}‖_X ≤ (L^2/c^2)‖ũ^{n+1,n} − u_h^{n+1}‖_X + (L/c)(1 + L/c)‖u_h^{n+1} − u_h^n‖_X. Recalling (<ref>) we conclude (<ref>). § APPLICATION TO SEMILINEAR PROBLEMS §.§.§ Problem formulation In this section, we focus on the numerical approximation of semilinear elliptic boundary value problems with possible singular perturbations. More precisely, for a fixed parameter ε>0 (possibly with ε≪1) and a continuous function f : ℝ → ℝ, we consider the problem of finding u : Ω → ℝ that satisfies −εΔu = f(x,u) in Ω, u = 0 on ∂Ω. Here, Ω ⊂ ℝ^d, with d=1 or d=2, is an open and bounded 1d interval or a 2d Lipschitz polygon, respectively.
Problems of this type appear in a wide range of applications including, e.g., nonlinear reaction-diffusion in ecology and chemical models <cit.>, economy <cit.>, and classical and quantum physics <cit.>. In this work, we are interested in a unique solution u ∈ X := H_0^1(Ω) solving (<ref>); here, H_0^1(Ω) denotes the standard Sobolev space of functions in H^1(Ω) = W^{1,2}(Ω) with zero trace on ∂Ω. The weak formulation of (<ref>) is to find u ∈ X := H_0^1(Ω) such that F_ε(u)(v) = 0, ∀v ∈ X, where F_ε(u)(v) := ∫_Ω {ε∇u·∇v − f(u)v} dx. In addition, we introduce the inner product (u,v)_X := ∫_Ω {uv + ε∇u·∇v} dx, u, v ∈ X, with induced norm on X given by ‖u‖_{ε,D} := (ε‖∇u‖_{0,D}^2 + ‖u‖_{0,D}^2)^{1/2}, u ∈ H^1(D), where ‖·‖_{0,D} denotes the L^2-norm on D. Frequently, for D = Ω, the subindex 'D' will be omitted. Note that in the case of f(u) = −u + g, with g ∈ L^2(Ω), i.e., when (<ref>) is linear and strongly elliptic, the norm ‖·‖_{ε,Ω} is a natural energy norm on X. In what follows we shall use the abbreviation x ≼ y to mean x ≤ cy, for a constant c>0 independent of the mesh size h and of ε>0. §.§.§ Fixed-Point-Iteration For u^n ∈ X and t>0, the fixed point iteration is to find u^{n+1} ∈ X from u^n such that (u^{n+1}, v)_X = G_ε(u^n)(v), ∀v ∈ X, where, for fixed u ∈ X and t>0, we set G_ε(u)(v) := (u,v)_X − t∫_Ω {ε∇u·∇v − f(u)v} dx. We assume that f is Lipschitz continuous with Lipschitz constant L_f. Furthermore, we need the following monotonicity property: (f(x) − f(y))(x−y) ≤ −c_f (x−y)^2, c_f > 0. As a consequence, the operator F_ε in (<ref>) is Lipschitz continuous and strongly monotone, with Lipschitz constant L = max(1, L_f) and monotonicity constant c = min(1, c_f), respectively. In particular, applying the iteration given in (<ref>), we obtain a unique fixed point x ∈ X for G_ε, which is the unique root of F_ε given in (<ref>). §.§ Galerkin Discretization In order to provide a numerical approximation of (<ref>), we will discretize the linear weak formulation (<ref>) by means of a finite element method, which constitutes a Fixed-Point-Galerkin approximation scheme. Furthermore, we shall derive a posteriori error estimates for the finite element discretization which allow for an adaptive refinement of the meshes in each iteration step. This, together with the a posteriori error estimate from Proposition <ref>, leads to a fully adaptive Fixed-Point-Galerkin discretization method for (<ref>). §.§.§ Finite Element Meshes and Spaces Let 𝒯^h = {T}_{T∈𝒯^h} be a regular and shape-regular mesh partition of Ω into disjoint open simplices, i.e., any T ∈ 𝒯^h is an affine image of the (open) reference simplex T̂ = {x ∈ ℝ_+^d : ∑_{i=1}^d x_i < 1}. By h_T = diam(T) we signify the element diameter of T ∈ 𝒯^h, and by h = max_{T∈𝒯^h} h_T the mesh size. Furthermore, by ℰ^h we denote the set of all interior mesh nodes for d=1 and interior (open) edges for d=2 in 𝒯^h. In addition, for T ∈ 𝒯^h, we let ℰ^h(T) = {E ∈ ℰ^h : E ⊂ ∂T}. For E ∈ ℰ^h, we let h_E be the mean of the lengths of the adjacent elements in 1d, and the length of E in 2d. Let us also define the following two quantities: α_T := min(1, ε^{−1/2} h_T), α_E := min(1, ε^{−1/2} h_E), for T ∈ 𝒯^h and E ∈ ℰ^h, respectively. We consider the finite element space of continuous, piecewise linear functions on 𝒯^h with zero trace on ∂Ω, given by V_0^h := {φ ∈ H_0^1(Ω) : φ|_T ∈ ℙ_1(T) ∀T ∈ 𝒯^h}, where ℙ_1(T) is the standard space of all linear polynomial functions on T.
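Before passing to the matrix form of the scheme, we illustrate the discrete iteration with a short, self-contained sketch; it is not part of the original exposition, and the 1d model problem, the uniform mesh, and the nodal interpolation of f(u) in the load vector are all simplifying assumptions made only for this illustration.

import numpy as np

def fixed_point_fem_1d(f, eps, c, L, N=200, tol=1e-10, maxit=10_000):
    """Damped fixed-point iteration (u^{n+1}, v)_X = G_eps(u^n)(v) for
    -eps*u'' = f(u) on (0,1), u(0) = u(1) = 0, P1 elements, uniform mesh."""
    h = 1.0 / (N + 1)
    # P1 stiffness matrix A and mass matrix M for the N interior basis functions
    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h
    M = h * (np.diag(4.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
             + np.diag(np.ones(N - 1), -1)) / 6.0
    B = M + eps * A          # Gram matrix of the inner product (u, v)_X
    t = c / L**2             # optimal step size from the contraction estimate
    u = np.zeros(N)          # initial guess u_h^0 = 0
    for n in range(maxit):
        b = M @ f(u)         # load vector via nodal interpolation of f(u_h^n)
        u_new = np.linalg.solve(B, B @ u - t * eps * (A @ u) + t * b)
        # stop when the increment is small in the discrete ||.||_X norm
        if np.sqrt((u_new - u) @ (B @ (u_new - u))) < tol:
            return u_new, n + 1
        u = u_new
    return u, maxit

# Example: f(u) = 1 - u gives L_f = c_f = 1, hence L = c = 1 and t = 1.
u_h, iters = fixed_point_fem_1d(lambda u: 1.0 - u, eps=1e-3, c=1.0, L=1.0)

Note that the stopping criterion monitors the increment ‖u_h^{n+1} − u_h^n‖_X, in line with the fixed point indicator η_FP introduced below.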
§.§.§ Linear Finite Element Discretization For t = c/L^2 and u_h^n ∈ V_0^h, n ≥ 0, we consider the finite element approximation of (<ref>), which is to find u_h^{n+1} ∈ V_0^h such that (u_h^{n+1}, v_h)_X = G_ε(u_h^n)(v_h), ∀v_h ∈ V_0^h, where, for a fixed u_h ∈ V_0^h, G_ε(u_h)(v_h) = (u_h, v_h)_X − t∫_Ω {ε∇u_h·∇v_h − f_h(u_h)v_h} dx. Here, f_h(u_h) ∈ V^h is defined through ∫_Ω (f(u_h) − f_h(u_h)) v_h dx = 0, ∀v_h ∈ V_0^h. More precisely, if V_0^h is spanned by the basis functions {φ_i}_{i=1}^N, we solve the algebraic system ∑_{k=1}^N B_{ki} u_k^{n+1} = ∑_{k=1}^N B_{ki} u_k^n − tε∑_{k=1}^N A_{ki} u_k^n + t b(u_h^n)_i, i ∈ {1,…,N}, with respect to {u_k^{n+1}}_{k=1}^N and set u_h^{n+1} = ∑_{k=1}^N u_k^{n+1} φ_k. In (<ref>), B signifies the iteration matrix given by B_{ik} = (φ_i, φ_k)_X, and A denotes the stiffness matrix. Moreover, for i ∈ {1,…,N}, the load vector is given by b(u_h^n)_i := ∫_Ω f(u_h^n) φ_i dx. §.§ A Posteriori Analysis The aim of this section is to derive a posteriori error bounds for the FEM iteration (<ref>). In view of Proposition <ref>, it is sufficient to derive a computable quantity η(u_h^{n+1}, V_0^h). We therefore introduce the quantity ẽ_h^{n+1,n} := ũ^{n+1,n} − u_h^{n+1}. Moreover, let Π : H_0^1(Ω) → V_0^h be the quasi-interpolation operator of Clément (see, e.g., <cit.>), and set v_h := Πẽ_h^{n+1,n}. We observe ‖ẽ_h^{n+1,n}‖_ε^2 = (ũ^{n+1,n}, ẽ_h^{n+1,n})_X − (u_h^{n+1}, ẽ_h^{n+1,n})_X = (ũ^{n+1,n}, ẽ_h^{n+1,n} − v_h)_X − (u_h^{n+1}, ẽ_h^{n+1,n} − v_h)_X = −t∫_Ω {ε∇u_h^n·∇(ẽ_h^{n+1,n} − v_h) − f(u_h^n)(ẽ_h^{n+1,n} − v_h)} dx − (u_h^{n+1} − u_h^n, ẽ_h^{n+1,n} − v_h)_X = −∑_{T∈𝒯^h} ∫_T {ε∇(u_h^{n+1} − u_h^n)·∇(ẽ_h^{n+1,n} − v_h) + tε∇u_h^n·∇(ẽ_h^{n+1,n} − v_h)} dx + ∑_{T∈𝒯^h} ∫_T {t f_h(u_h^n)(ẽ_h^{n+1,n} − v_h) − (u_h^{n+1} − u_h^n)(ẽ_h^{n+1,n} − v_h)} dx + ∑_{T∈𝒯^h} ∫_T t (f(u_h^n) − f_h(u_h^n))(ẽ_h^{n+1,n} − v_h) dx. Integrating by parts in the first term on the right-hand side, recalling the fact that (v − Πv) = 0 on ∂Ω, and applying some elementary calculations yields that ‖ẽ_h^{n+1,n}‖_ε^2 = ∑_{T∈𝒯^h} (b_T + c_T) + ∑_{E∈ℰ^h} a_E, where b_T := ∫_T {(εΔ(u_h^{n+1} − u_h^n) − (u_h^{n+1} − u_h^n) + tεΔu_h^n + t f_h(u_h^n))(ẽ_h^{n+1,n} − v_h)} dx, a_E := ∫_E ε⟦∇(u_h^{n+1} − u_h^n) + t∇u_h^n⟧(ẽ_h^{n+1,n} − v_h) ds, c_T := ∫_T t (f(u_h^n) − f_h(u_h^n))(ẽ_h^{n+1,n} − v_h) dx, with E ∈ ℰ^h, T ∈ 𝒯^h. Here, for any edge E = ∂T^♯ ∩ ∂T^♭ ∈ ℰ^h shared by two neighboring elements T^♯, T^♭ ∈ 𝒯^h, where n^♯ and n^♭ signify the unit outward vectors on ∂T^♯ and ∂T^♭, respectively, we denote by ⟦∇u_h^{n+1}⟧(x) = lim_{t→0^+} ∇u_h^{n+1}(x + t n^♯)·n^♯ + lim_{t→0^+} ∇u_h^{n+1}(x + t n^♭)·n^♭, x ∈ E, the jump across E. Then, for T ∈ 𝒯^h, defining the FEM-error indicator η_{n+1,T}^2 := α_T^2 ‖εΔ(u_h^{n+1} − u_h^n) − (u_h^{n+1} − u_h^n) + tεΔu_h^n + t f_h(u_h^n)‖_{0,T}^2 + (1/2)∑_{E∈ℰ^h(T)} ε^{−1/2} α_E ‖ε⟦∇(u_h^{n+1} − u_h^n) + t∇u_h^n⟧‖_{0,E}^2, with α_T and α_E from (<ref>), we proceed along the lines of the proof of <cit.> in order to obtain the following result. Theorem. For n ≥ 0 there holds the upper a posteriori bound ‖ẽ_h^{n+1,n}‖_ε ≼ ( t^2‖f(u_h^n) − f_h(u_h^n)‖_{0,Ω}^2 + ∑_{T∈𝒯^h} η_{n+1,T}^2 )^{1/2} =: η(u_h^{n+1}, V_0^h), with η_{n+1,T}, T ∈ 𝒯^h, from (<ref>). Using Proposition <ref> together with the bound (<ref>), we end up with the following a posteriori error bound. Theorem. For n ≥ 0 there holds the upper a posteriori error bound ‖e_h^{n+1}‖_ε ≼ η(u_h^{n+1}, V_0^h) + η_{FP}(u_h^{n+1}, u_h^n, V_0^h), with η_{FP}(u_h^{n+1}, u_h^n, V_0^h) := ‖u_h^{n+1} − u_h^n‖_ε and η(u_h^{n+1}, V_0^h) from (<ref>). §.§ A Fully Adaptive Fixed-Point-Galerkin Algorithm We will now propose a procedure that combines the fixed point iteration with an automatic finite element mesh refinement strategy.
More precisely, based on the a posteriori error bound from Theorem <ref>, the main idea of our approach is to provide an interplay between the fixed point iterations and adaptive mesh refinements: this is based on monitoring the error indicators in (<ref>) and (<ref>), and on acting according to whichever quantity is dominant in the current computation. The individual computational steps are summarized in Algorithm <ref>. § NUMERICAL EXPERIMENT We will now illustrate and test the fully adaptive Algorithm <ref> with a numerical experiment in 2d. The linear systems resulting from the finite element discretization (<ref>) are solved by means of a direct solver. We consider the problem −εΔu = f(u) in Ω = (0,1)^2, u = 0 on ∂Ω, with f(u) = (1−u)/(1 + e^{−(u−1)^2}). Here ∂_u f(u) is uniformly bounded (roughly by 1.3) and ∂_u f(u) ≤ ∂_u f(u)|_{u=1} = −1/2. Hence we have L = L_f ≈ 1.3 and c = c_f = 1/2, i.e., the problem is well defined. The focus of this experiment is on the robustness of the a posteriori error bound (<ref>) with respect to the singular perturbation parameter ε as ε→0. Indeed, neglecting the boundary conditions for a moment, one observes that the unique positive zero u = 1 of f is a solution of the PDE. We therefore expect boundary layers along ∂Ω; see Figures <ref> and <ref> (right). Starting from the initial mesh depicted in Figure <ref> (left), with u_0^h(1/2, 1/2) = 1, we test the fully adaptive Fixed-Point-Galerkin Algorithm <ref> for the choices ε ∈ {10^{−i}}_{i=0}^8. In Algorithm <ref> the parameter θ is chosen to be 0.5. Furthermore, in this example, the procedure is always initiated with a uniform initial mesh 𝒯 consisting of 4 elements; see Figure <ref> (left). As ε→0, the resulting solutions feature ever stronger boundary layers; see Figures <ref> and <ref> (right). The performance data in Figure <ref> shows an error decay that is, firstly, robust in ε and, secondly, of (optimal) order 1/2 with respect to the number of degrees of freedom. § CONCLUSIONS The aim of this paper is to introduce a reliable and computationally feasible procedure for the numerical solution of semilinear elliptic boundary value problems with possible singular perturbations. The key idea is to combine a simple fixed point method with an automatic mesh refinement finite element procedure. Furthermore, the sequence of linear problems resulting from the application of the fixed point iteration and Galerkin discretization is treated by means of a robust (with respect to the singular perturbations) a posteriori error analysis and a corresponding adaptive mesh refinement process. Our numerical experiments clearly illustrate the ability of our approach to reliably find the solution of the underlying well posed problem, and to robustly resolve the singular perturbations at an optimal rate.
http://arxiv.org/abs/1706.09299v1
{ "authors": [ "Mario Amrein" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170627135952", "title": "Adaptive Fixed Point Iterations for Semilinear Elliptic Partial Differential Equations" }
A note on first-order spectra with binary relations

Eryk Kopczyński, University of Warsaw, [email protected]
Tony Tan, National Taiwan University, [email protected]

The spectrum of a first-order sentence is the set of the cardinalities of its finite models. In this paper, we consider the spectra of sentences over binary relations that use at least three variables. We show that for every such sentence Φ, there is a sentence Φ' that uses the same number of variables, but only one symmetric binary relation, such that its spectrum is linearly proportional to the spectrum of Φ. Moreover, the models of Φ' are all bipartite graphs. As a corollary, we obtain that to settle Asser's conjecture, i.e., whether the class of spectra is closed under complement, it is sufficient to consider only sentences using only three variables whose models are restricted to undirected bipartite graphs.

§ INTRODUCTION

The notion of first-order spectrum was first defined by Scholz <cit.>. Formally, the spectrum of a (first-order) sentence φ (with the equality predicate), denoted by Spec(φ), is the set of cardinalities of finite models of φ. A set is called a spectrum if it is the spectrum of a first-order sentence. Let SPEC denote the class of all spectra.

One of the first and well known problems in finite model theory, called Asser's conjecture, asks whether the complement of a spectrum is also a spectrum <cit.>. It turns out to be equivalent to the NE vs. co-NE problem <cit.>.[NE is the class of languages accepted by a non-deterministic (possibly multi-tape) Turing machine with run time O(2^kn), for some constant k > 0.] More specifically, it is shown that the class SPEC is captured precisely by NE in the following sense: For every spectrum A, the language that consists of the binary representations of the numbers in A belongs to the class NE, and vice versa, for every language L ⊆ 1·{0,1}^*, i.e., it consists of only words that start with symbol 1, if L ∈ NE, then the set of integers whose binary representations are in L is a spectrum.

For a more comprehensive treatment on the spectrum problem and its history, we refer interested readers to an excellent survey by Durand, Jones, Makowsky and More <cit.>, and the references therein.

It is reasonable to say that a definitive solution of Asser's conjecture seems still far away. Thus, it is natural to consider the spectra of some restricted classes of first-order logic. Fagin <cit.> was the first to notice that to settle Asser's conjecture, it is sufficient to consider only first-order logic over graphs. More formally, he showed that for every spectrum A, there is a positive integer k>0 such that {n^k | n ∈ A} is the spectrum of a sentence using only one binary relation symbol. Implicitly, it implies that if there is a spectrum whose complement is not a spectrum, then there is such a spectrum of a first-order sentence using only one binary relation <cit.>, i.e., Asser's conjecture can be reduced to first-order sentences over graphs.

Durand and Ranaivoson <cit.> considered the class of spectra of sentences using only unary function symbols and proved that it is included in the class of spectra of sentences using only one binary relation. In particular, they established that the spectra of sentences using only unary function symbols are exactly the spectra of sentences using one binary relation when the models for the latter are restricted to directed graphs of bounded outdegree.
They also showed that there is a sentence φ using two unary functions such that the language {1^n | n ∈ Spec(φ)} is complete. That two unary functions are necessary to obtain such a complete language follows immediately from Durand, Fagin and Loescher <cit.>, where they show that the spectrum of a first-order sentence using only one unary function symbol is a semilinear set.

Complementing Fagin's result, we showed that Asser's conjecture can be reduced to sentences using only three variables and multiple binary relations <cit.>. The three variable requirement seems to be optimal, as we also showed that the class of the spectra of sentences using two variables and counting quantifiers is precisely the class of semilinear sets and closed under complement <cit.>. In fact, we essentially showed that models of two-variable logic with counting are simply collections of regular bipartite graphs. In this paper we present the following result.

For every sentence Φ using at least three variables over binary relation symbols R_1,…,R_m, there is a sentence Φ' over a single binary relation symbol E that uses the same number of variables as Φ such that:

Spec(Φ') = {pn+q | n ∈ Spec(Φ)},

for some integers p and q. Moreover, every model of Φ' is an undirected bipartite graph.

Since addition, subtraction, multiplication and division by constants can be computed in linear time (in the length of the binary representation of the input number), the spectra of Φ and Φ' do not differ complexity-wise. Combined with our earlier result <cit.> that Asser's conjecture can be reduced to three variable sentences with binary relations, Theorem <ref> immediately implies that Asser's conjecture can be further reduced to three variable sentences using only one binary relation with models being restricted to bipartite graphs. It is stated formally as Corollary <ref> below.

The following two sentences are equivalent.
* The class of first-order spectra is closed under complement.
* The complement of every spectrum of a first-order sentence using only three variables whose models are all undirected bipartite graphs is also a spectrum.

Note that Corollary <ref> strengthens the result by Fagin <cit.> which states that Asser's conjecture can be reduced to sentences (with an arbitrary number of variables) over graphs. We also note the difference between Theorem <ref> and the result by Durand and Ranaivoson <cit.> mentioned above. In <cit.>, multiple unary functions are encoded using only one binary relation (with the graphs being restricted to those with bounded outdegree), whereas in Theorem <ref>, multiple binary relations are encoded with one binary relation (albeit with linear blowup in the size of the model).

At this point, it is natural to ask whether every spectrum is the spectrum of a sentence over graphs, i.e., a sentence using only one relation symbol of arity 2. It turns out that a positive answer to this question would imply the separation of a long standing open problem: NE ⊊ ESPACE, and thus, NP ⊊ PSPACE, as stated formally in Remark <ref> below.

Let SPEC_k denote the class of spectra of sentences using only relational symbols of arity k. We will prove the following: If SPEC = SPEC_k, for some integer k, then NE ⊊ ESPACE, and hence, NP ⊊ PSPACE.

First, we show that SPEC_k ⊆ DSPACE(2^kn), where the input integer is written in binary form. Let φ be a first-order sentence using relations of arity at most k. To show that Spec(φ) ∈ DSPACE(2^kn), let w be the input word that represents the integer N in binary form. Each relation R of arity k with domain {1,…,N} takes N^k = O(2^k|w|) space. So, each model with relations of arity at most k takes O(2^k|w|) space.
Checking whether the model satisfies φ takes additional O(|w|) space. To check whether φ has a model of cardinality N, one can simply check one by one every possible model with domain {1,…,N}, each of which takes O(2^k|w|) space. Therefore, Spec(φ) ∈ DSPACE(2^kn).

Now, by the space hierarchy theorem <cit.>, DSPACE(2^kn) ⊊ ESPACE. Thus, if SPEC = SPEC_k, for some k, then NE ⊊ ESPACE, and by a standard padding argument, it implies NP ⊊ PSPACE.

§.§ Related work.

It is already noted before that first-order logic over arbitrary vocabulary is too vast a logic to work on. A lot of work has been done to classify spectra based on the vocabulary, notably on the arity of the relation and function symbols. We will mention some of them here. Interested readers can consult the cited papers and the references therein.

Let NTIME(N^k) denote the class of sets of positive integers (written in unary form) accepted by a non-deterministic multi-tape Turing machine in time O(N^k), where N is the input integer. Lynch <cit.> showed that NTIME(N^k) ⊆ SPEC_k, for every k≥ 2. When k=1, the addition operator is required, i.e., NTIME(N) ⊆ SPEC_1,+.

Grandjean, Olive and Pudlák established the variable hierarchy for spectra of sentences using relation and function symbols <cit.>. Let NRAM(N^k) denote the class of sets of positive integers accepted by a non-deterministic RAM in time O(N^k), where N is the input integer. In his series of papers, Grandjean showed that the class NRAM(N^k) is precisely the class of the spectra of first-order sentences written in prenex normal form using only universal quantifiers and k variables with vocabulary consisting of relation and function symbols of arity k <cit.>. By Skolemisation, this result leads to the fact that for every integer k ≥ 1, the class of spectra of first-order sentences using relation and function symbols and k variables is precisely NRAM(N^k). See also <cit.>.

Grandjean <cit.> also showed that the class NRAM(N) is precisely the class of spectra of sentences of the form ∀ x φ, where φ is quantifier free and uses only unary functions. Note that to express that a relation is a function requires three variables. Since composition of functions can also be expressed with three (reusable) variables, it implies that NRAM(N) is a subclass of the class of spectra involving only binary relations and three variables. By a padding argument, it also implies that if Asser's conjecture is negative, it suffices to consider only three-variable sentences using only binary relations. This is similar to our result in <cit.>.

A result similar to Theorem <ref> was also obtained by Durand et al. <cit.>, who showed that if S is a spectrum involving k unary functions, then the set {kn | n ∈ S} is a spectrum involving only two unary functions. There is strong evidence that the linear blow-up is unavoidable <cit.>. Durand and Ranaivoson <cit.> also showed that every spectrum can be transformed (with polynomial blowup) to a spectrum involving only unary functions, i.e., if S is a spectrum involving k-ary functions, then {n^k | n ∈ S} is a spectrum involving only unary functions. Durand's thesis <cit.> is rich with results in this direction.

Recently we also showed that there is a strict hierarchy of spectra based on the number of variables used. That is, more variables yield a larger class of spectra <cit.> when the vocabulary is restricted to relational symbols.

§.§ Organization.

In the next section we will present the proof of Theorem <ref>, and we conclude with some remarks in Section <ref>.
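Before turning to the proof, the enumeration argument in the remark above can be made concrete. The following sketch (ours, not from the paper) decides membership of N in a spectrum by checking every interpretation of a single binary relation over {0,…,N−1} one by one; the example sentence — "R is a perfect matching", with spectrum the even numbers — is a hypothetical illustration.

from itertools import product

def is_perfect_matching(N, R):
    # R symmetric, irreflexive, and every element has exactly one R-neighbour.
    return all(
        (i, i) not in R
        and sum(((i, j) in R) for j in range(N)) == 1
        and all(((i, j) in R) == ((j, i) in R) for j in range(N))
        for i in range(N)
    )

def in_spectrum(N, check):
    pairs = [(i, j) for i in range(N) for j in range(N)]
    # 2^(N^2) candidate relations: the exponential blow-up behind the space bound.
    return any(check(N, {p for p, bit in zip(pairs, bits) if bit})
               for bits in product([0, 1], repeat=len(pairs)))

print([N for N in range(1, 5) if in_spectrum(N, is_perfect_matching)])  # -> [2, 4]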
§ PROOF OF THEOREM <REF>

In this paper, by graph we always mean undirected graph. For a graph G=(V,E) and a subset V'⊆ V, we denote by G[V'] the subgraph of G induced by the subset V'.

Let R_1,…,R_m be binary relation symbols. For k≥ 0, we denote by FO^k[R_1,…,R_m] the class of first-order formulas using k variables and binary relation symbols R_1,…,R_m. A formula is a sentence if it has no free variable. A formula is always written as φ(z_1,…,z_l) to indicate that z_1,…,z_l are the free variables in φ. An interpretation is written in the standard way 𝒜 = (A,R_1^𝒜,…,R_m^𝒜), where A is a finite domain and each R_i^𝒜 ⊆ A× A, for each i=1,…,m. As usual, 𝒜 ⊨ φ denotes that the sentence φ holds in 𝒜. For a formula φ(z_1,…,z_l), and for i_1,…,i_l ∈ A, we write that φ(i_1,…,i_l) holds in 𝒜 if φ(z_1,…,z_l) holds in 𝒜 by substituting each z_j with i_j, for every j=1,…,l.

We reserve the symbol E to be a binary relation symbol that we insist to be always interpreted by a symmetric relation. In the same way, we let FO^k[E] be the class of first-order formulas using k variables and relation symbol E. All models of sentences from FO^k[E] are graphs, so we will use the standard notation G=(V,E) ⊨ φ, or simply G ⊨ φ, to denote that φ holds in G.

The following Lemma <ref> immediately implies Theorem <ref>.

Let k≥ 3. For every Φ ∈ FO^k[R_1,…,R_m], there is Φ' ∈ FO^k[E] such that the following holds.
* For every 𝒜 ⊨ Φ, there is G=(V,E) ⊨ Φ' such that |V|=(m+3)|A|+8m+2.
* For every G=(V,E) ⊨ Φ', there is 𝒜 ⊨ Φ such that |V|=(m+3)|A|+8m+2.
Moreover, all models of Φ' are bipartite graphs.

The rest of this section is devoted to the proof of Lemma <ref>. We fix a sentence Φ ∈ FO^k[R_1,…,R_m], and we assume that z_1,…,z_k are the variables used in Φ. Without loss of generality, we also assume that m≥ 3. Moreover, we assume that Φ implies ∀ x ¬R(x,x), for every R∈{R_1,…,R_m}. That is, in every model 𝒜 ⊨ Φ, every relation R^𝒜 does not contain a self-loop. Note that self-loops can be represented by non self-loops, i.e., by adding a new binary relation R' for each R∈{R_1,…,R_m} and replacing every atomic formula R(x,y) with R(x,y)∨∃ y R'(x,y). The intuition is that in every model 𝒜 ⊨ Φ, a self-loop (u,u)∈ R^𝒜 is represented by (u,v)∈R'^𝒜 for some v≠ u.

We will first describe the main idea of our proof. The details will be presented immediately after. Let ℋ be the graph depicted in Figure <ref>. It has 8m+2 vertices, denoted by u_1,…,u_4m+1 and w_1,…,w_4m+1, with the u_i's being those on the left hand side, and the w_i's being those on the right hand side. The edges are (u_i,w_i), for each i=1,…,4m+1, and (u_i,u_i+1), for each i=1,…,4m. Throughout this paper, we will always write U and W to denote the sets U={u_1,…,u_4m+1} and W={w_1,…,w_4m+1}, respectively.

Let 𝒢 be the graph depicted in Figure <ref>. It has m+3 vertices and m+2 edges. The vertices are denoted by d^P, d^Q, d^S, d^R_1,…,d^R_m, where d^P is adjacent to all of d^Q,d^R_1,…,d^R_m and d^Q is adjacent to d^S.

Our intention is to construct Φ' such that every model 𝒜 ⊨ Φ with A={1,…,n} is represented by a graph G=(V,E) ⊨ Φ', where there is a partition V=V_0∪ V_1∪⋯∪ V_n and the following holds.
* G[V_0] is isomorphic to ℋ.
* G[V_i] is isomorphic to 𝒢, for each i=1,…,n.

Intuitively, each element i ∈ A is represented by G[V_i]. For simplicity, we will assume that G[V_0] is ℋ itself, i.e., V_0=U∪ W. We also denote the vertices in V_i by i^P, i^Q, i^S, i^R_1,…,i^R_m, which correspond respectively to the vertices d^P, d^Q, d^S, d^R_1,…,d^R_m in 𝒢. Each tuple (i,j)∈ R_l^𝒜 will then be represented by the edge (i^R_l,j^S) in G. See Figure <ref> for an illustration.
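To make the intended encoding concrete, here is our own sketch (not from the paper) of the construction of G from a structure with domain {1,…,n} and binary relations R_1,…,R_m given as sets of pairs; the attachments of the gadget vertices to u_1, u_2, u_3 and u_2l−1 follow the construction used in the proof of Lemma <ref> below. Edges are stored as ordered pairs for brevity, although the graph is undirected.

def encode(n, relations):
    m = len(relations)
    edges = set()
    U = [("u", i) for i in range(1, 4 * m + 2)]
    W = [("w", i) for i in range(1, 4 * m + 2)]
    for i in range(4 * m + 1):
        edges.add((U[i], W[i]))                 # pendant edges (u_i, w_i)
    for i in range(4 * m):
        edges.add((U[i], U[i + 1]))             # path u_1 - ... - u_{4m+1}
    for i in range(1, n + 1):
        P, Q, S = (i, "P"), (i, "Q"), (i, "S")
        edges.add((P, Q))                       # gadget: d^P adjacent to d^Q
        edges.add((Q, S))                       # gadget: d^Q adjacent to d^S
        edges.add((P, U[0]))                    # i^P adjacent to u_1
        edges.add((Q, U[1]))                    # i^Q adjacent to u_2
        edges.add((S, U[2]))                    # i^S adjacent to u_3
        for l in range(1, m + 1):
            Rl = (i, "R", l)
            edges.add((P, Rl))                  # gadget star around i^P
            edges.add((Rl, U[2 * l - 2]))       # i^{R_l} adjacent to u_{2l-1}
    for l, rel in enumerate(relations, start=1):
        for (i, j) in rel:
            edges.add(((i, "R", l), (j, "S")))  # tuple (i,j) in R_l
    return edges

# Example: n=2, m=1, R_1 = {(1,2)}; vertex count matches (m+3)n + 8m + 2 = 18.
g = encode(2, [{(1, 2)}])
assert len({v for e in g for v in e}) == (1 + 3) * 2 + 8 * 1 + 2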
In order to achieve our intention, we differentiatethe vertices i^P, i^S,i^R_1,…,i^R_m by defining them according to their connections with the vertices in U. Of course, the vertices in U have to be definable, as well.We first declare the definition of the set U.(_1) A vertex u ∈ U if and only if it has degree at least 2 and exactly one of its neighbour has degree 1. The following are the properties of the set U to be satisfied.(_1) Every vertex of degree 1 is adjacent to a vertex in U.(_2) There are exactly two vertices in U that are adjacent to exactly one vertex in U. More formally, |X|=2, where X is the following set.X ={u ∈ U |there is exactly one vertex v ∈ Us.t. (u,v)∈ E } (_3) Vertices in U form a tree with diameter ≤ 4m.(_4) Between the two vertices in the set X, there is a path ℘ of length 4m that consists of only vertices in U. Property _1 states that every vertex of degree 1 is adjacent to one in U. Properties _2 and _3 state thatthe vertices in U form a tree with exactly two leaf nodes and diameter at most 4m, which implies that it is a line graph. Property _4 states that the line graph has exactly 4m+1 vertices.We will show that _1 and _1–_4 can be defined with first-order formulas using only three variables. Moreover, we will also show that for every graph G=(V,E) that satisfies _1–_4 with the set U being defined as in _1, there is a subset V_0⊆ V such that the following holds. =0pt* G[V_0] is isomorphic to .* If a vertex v∈ V is either of degree 1 or such that v ∈ U, then v ∈ V_0. Now, if we assume that V_0 = U ∪ W, and if we denote the vertices in U by u_1,…,u_4m+1, we can define u_1 and u_4m+1 as the end vertices of the line graph G[U], whereas for each i=2,…,4m, vertex u_i is defined as the vertex with distance i-1 and 4m+1-i to u_1 and u_4m+1, respectively. At this point, note that since we insist the interpretation of E to be symmetric,our definition does not distinguish between u_i and u_4m+2-i, for each i=1,…,4m+1.The following are the definitions of the vertices i^P,i^Q, i^S,i^R_1,…,i^R_m.(_2) A vertex u ∈{1^P,…,n^P} if and only if it is adjacent to exactly one of u_1 or u_4m+1, and it is not adjacent to any other vertex in U.(_3) A vertex u ∈{1^Q,…,n^Q} if and only if it is adjacent to exactly one of u_2 or u_4m, and it is not adjacent to any other vertex in U.(_4) A vertex u ∈{1^S,…,n^S} if and only if it is adjacent to exactly one of u_3 or u_4m-1, and it is not adjacent to any other vertex in U.(_5) For each R_l ∈{R_1,…,R_m}, a vertex u ∈{1^R_l,…,n^R_l} if and only ifit is adjacent to exactly one of u_2l-1 or u_4m+1-2(l-1), and it is not adjacent to any other vertex in U. Again, we will show that all of them can be defined with first-order formulas using only three variables. Finally, to facilitate a correct representation of each relation R_lwith [E] formulas, we declare the following additional properties, which can also be defined using only three variables.(_5) The vertices i^P,i^Q, i^S,i^R_1,…,i^R_m form a graph that is isomorphic tovia the mapping (i^P,i^Q, i^S,i^R_1,…,i^R_m)↦ (d^P,d^Q, d^S,d^R_1,…,d^R_m).(_6) If there is an edge between the vertices in {i^P,i^Q, i^S,i^R_1,…,i^R_m} and those in {j^P,j^Q, j^S,j^R_1,…,j^R_m}, where i≠ j, then it is an edge between i^R_l and j^S, for some R_l ∈{R_1,…,R_m}. With the definitions of vertices as in _1–_5, we will show that for every graph G=(V,E) that satisfies properties _1–_6, there is a partition V=V_0∪ V_1 ∪⋯∪ V_n such that the following holds. 
=0pt* G[V_0] is isomorphic to .* G[V_i] is isomorphic to , where V_i ={i^P,i^Q, i^S,i^R_1,…,i^R_m}, for each i=1,…,n.* If there is an edge between V_i and V_j, for some 1 ≤ i≠ j≤ n, then it is (i^R_l,j^S).As mentioned earlier, each relation R_l can then be encoded in G by representing each tuple (i,j)∈ R_l^ with the edge (i^R_l,j^S) in G. The rest of this section will be devoted to the details of the definitions of _1–_5 and _1–_6,as well as, the sentence Φ'. We divide them into five main steps. The first step is for _1 and _1–_4, and the second step is for _2–_5. The third and fourth step are for _5 and _6, respectively. Finally, in the fifth step, we present the construction of the desired Φ', where Φ' uses the same number of variables as Φ.§.§ Step 1: Three variable definitions for F_1 and P_1–P_4.We will need a few auxiliary formulas. They are all defined using three variables x,y,z, which can be replaced with three arbitrary variables from among z_1,…,z_k.The formula Ψ_=1(x) below defines those with degree 1.Ψ_=1(x) :=∃ y[ E(x,y)∧ ∀ z[ E(x,z)⇒ y=z] ]Next, the formula Ψ_U(x) below defines vertices in U as stated in _1.Ψ_U(x) :=Ψ_=1(x)∧∃ y[ E(x,y)∧ Ψ_=1(y) ]That is, Ψ_U(v) holds if and only ifits degree is not 1 and it is adjacent to a vertex with degree 1. To avoid repetition, by abuse of terminology, when explaining the intuition of a formula, we always write a set U to mean the vertices on which Ψ_U(x) holds. We can define property _1 with the following sentence.Ψ__1:=∀ x ∀ y [ [Ψ_=1(x)∧ E(x,y)]⇒ Ψ_U(y) ] To define the rest, we will need the following two auxiliary formulas. =0pt* The formula Ψ_,U(x):Ψ_,U (x) :=Ψ_U(x) ∧∃ y [ Ψ_U(y) ∧ E(x,y) ∧ ∀ z [ [Ψ_U(z) ∧ E(x,z)]⇒ y=z] ]That is, Ψ_,U(v) holds if and only if v is in U and adjacent to exactly one of the vertices in U. This is intended to define the endpoints of the line graph formed by vertices in U. * For an integer n≥ 0, the formula Ψ_U,n(x,y):Ψ_U,0(x,y) := x=y∧ Ψ_U(x)Ψ_U,n(x,y) :=Ψ_U(x)∧ Ψ_U(y)∧∃ z[ Ψ_U(z) ∧ E(x,z)∧ Ψ_U,n-1(z,y) ]That is, Ψ_U,n(v_1,v_2) holds if and only ifΨ_U(v_1),Ψ_U(v_2) hold and there is a path of length n that consists of only vertices in U.Now, the sentences Ψ__2, Ψ__3 and Ψ__4 that define _2, _3 and _4, respectively, are as follows.Ψ__2:=∃ x ∃ y [ Ψ_,U(x)∧ Ψ_,U(y)∧∀ z [Ψ_,U(z) ⇒ [ z=x∨ z=y]]]Ψ__3:=∀ x ∀ y [ [ Ψ_U(x)∧ Ψ_U(y)]⇒⋁_n=1^4mΨ_U,n(x,y)]∧ ⋀_n=1^4m∀ x ∀ y [ [Ψ_U(x)∧ Ψ_U(y)∧ Ψ_U,n(x,y)]⇒ ⋀_l≠ nand 1≤ l ≤ 4mΨ_U,l(x,y)]Ψ__4:=∀ x ∀ y[ [ Ψ_,U(x)∧ Ψ_,U(y)]⇒ Ψ_U,4m(x,y)]Intuitively, the first line of Ψ__3 states that the vertices in U form a graph with diameter ≤ 4m, while the second line states that the distance between two vertices in U is unique. Thus, Ψ__3 states that vertices in U form a tree with diameter ≤ 4m. The sentence Ψ__4 states that distance between the two leaf nodes is 4m. Now, Ψ__2 states that there are only two leaf nodes. So, altogether Ψ__2∧Ψ__3∧Ψ__4 states that the set U forms a line graph of 4m+1 vertices. Combining all these with Ψ__1, we obtain thatevery model of Ψ__1∧⋯∧Ψ__4 contains a subgraph isomorphic to , as stated formally below.For every graph G=(V,E), the following are equivalent. (a) GΨ__1∧Ψ__2∧Ψ__3∧Ψ__4.(b) There is a subset V' ⊆ V such that G[V'] is isomorphic to . Moreover, if a vertex v∈ V is either of degree 1 or such that Ψ_U(v) holds, then v ∈ V'.Thedirection that (b) implies (a) is straightforward. So we prove that (a) implies (b). Assume that G=(V,E)Ψ__1 ∧ Ψ__2 ∧ Ψ__3 ∧ Ψ__4.Let U' be the set {u∈ V |Ψ_U(u)holds in G}. 
The sentence Ψ__3 implies that G[U'] is a tree of diameter ≤ 4m, whereas the sentence Ψ__2 implies that G[U'] has only two leaf nodes. So, altogether, they imply that G[U'] is a line graph of at most 4m+1 vertices. The sentence Ψ__4 implies that it is a line graph with exactly 4m+1 vertices.Next, let W' be the set {w ∈ V |(w)=1}. Thus, if we pick V'=U'∪ W', it follows immediately that G[V'] is isomorphic to . By Ψ__1, it is trivial that if v∈ V is such that either (v)=1 or that Ψ_U(v) holds, then v ∈ V'.§.§ Step 2: Three variable definitions for F_2–F_5.The formulas Ψ_P(x), Ψ_Q(x), Ψ_S(x) and Ψ_R_l(x), for each R_l ∈{R_1,…,R_m},below defines the vertices i^P's, i^Q's, i^S's and i^R_l's, respectively,as stated in _2–_5.Ψ_P(x):=∃ y [Ψ_,U(y)∧∀ z [ Ψ_U(z) ⇒ [ E(x,z)y=z]] ]Ψ_Q(x):=∃ y [Ψ_,U(y)∧∀ z [ Ψ_U(z) ⇒ [ E(x,z)Ψ_U,1(y,z)]] ]Ψ_S(x):=∃ y [Ψ_,U(y)∧∀ z [ Ψ_U(z) ⇒ [ E(x,z)Ψ_U,2(y,z)]] ]Ψ_R_l(x):=∃ y [Ψ_,U(y)∧∀ z [ Ψ_U(z) ⇒ [ E(x,z)Ψ_U,2l-1(y,z)]] ] §.§ Step 3: Three variable definition for P_5. Intuitively, the sentence Ψ__5 that defines _5 states the following:For every vertex x such that Ψ_P(x) holds, there are vertices y,z,s_1,…,s_m such that the following is true. =0pt* x,y,z,s_1,…,s_m form a graph isomorphic to .* Ψ_Q(y), Ψ_S(z),Ψ_R_1(s_1),…,Ψ_R_m(s_m) all hold.Such sentence can be trivially written using m+3 variables. However, since each of the vertices x,y,z,s_1,…,s_m have distinguished definitions and the distance between them are all bounded by a fixed length, three variables are sufficient.Before we proceed to the details, we need the following auxiliary formula. For every α,β∈{P,Q,S,R_1,…,R_m}, we define the following formula:Ψ_α,β(x,y) :=Ψ_α(x)∧ Ψ_β(y)∧∃ z [ Ψ_γ(z)∧ E(x,z)∧ E(z,y)],where γ is defined according to α and β as follows. =0pt* γ=Q, when either (α,β)=(P,S) or (α,β)=(S,P).* γ=P, when either (α,β)=(R_l,Q) or (α,β)=(Q,R_l), for some R_l ∈{R_1,…,R_m}.* γ=P, for every α,β∈{R_1,…,R_m} and α≠β.We let γ undefined for all the other combinations of α and β. Intuitively, Ψ_α,β(x,y) indicates that x and y are the vertices inwhere Ψ_α and Ψ_β hold, respectively, and that Ψ_γ holds in their middle vertex.Now, the sentence Ψ__5 is the conjunction of the following sentences, which for readability, are written in plain English. =0pt* For every vertex x such that Ψ_P(x) holds, the following is true. =0pt* x is adjacent to exactly one vertex y where Ψ_Q(y) holds.* For every R_l ∈{R_1,…,R_m}, x is adjacent to exactly one vertex y where Ψ_R_l(y) holds.* There is exactly one vertex y such that Ψ_P,S(x,y) holds and moreover, E(x,y) does not hold.* For every R_l ∈{R_1,…,R_m}, if y and z are vertices adjacent to x such thatΨ_Q(y) andΨ_R_l(z)hold,then E(y,z) does not hold.* For every R_l,R_l'∈{R_1,…,R_m}, if y and z are vertices adjacent to x such thatΨ_R_l(y) andΨ_R_l'(z)hold,then E(y,z) does not hold.* For every vertex x such that Ψ_Q(x) holds, the following is true. =0pt* x is adjacent to exactly one vertex y where Ψ_P(y) holds. * x is adjacent to exactly one vertex y where Ψ_S(y) holds. * For every R_l ∈{R_1,…,R_m}, there is exactly one vertex y such that Ψ_Q,R_l(x,y) holds and moreover, E(x,y) does not hold.* For every vertex x such that Ψ_S(x) holds, the following is true. =0pt* x is adjacent to exactly one vertex y where Ψ_Q(y) holds. * There is exacly one vertex y such that Ψ_S,P(x,y) holds, and moreover, E(x,y) does not hold. 
* If y and z are vertices such thatΨ_Q(y), E(x,y),and Ψ_Q,R_l(y,z)hold, for some R_l ∈{R_1,…,R_m},then E(x,z) does not hold.* For every R_l ∈{R_1,…,R_m}, for every vertex x such that Ψ_R_l(x) holds, the following is true. =0pt* x is adjacent to exactly one vertex y where Ψ_P(y) holds.* There is exactly one vertex y such that Ψ_R_l,Q(x,y) holds and moreover, E(x,y) does not hold.* If y and z are vertices such thatΨ_P(y), E(x,y),and Ψ_P,S(y,z)hold,then E(x,z) does not hold.Now, consider the following sentence.Ψ_0 :=Ψ__1 ∧ Ψ__2 ∧ Ψ__3 ∧ Ψ__4 ∧ Ψ__5 ∧ ∀ x [ E(x,x)]∧∀ x [ Ψ_=1(x)∨ ⋁_α∈{U,P,Q,S,R_1,…,R_m}Ψ_α(x)].We have the following lemma. For every graph G=(V,E)Ψ_0, there is a partition V=V_0∪ V_1∪⋯∪ V_n such that the following holds. =0pt* G[V_0] is isomorphic to .* For each i=1,…,n, G[V_i] is isomorphic to , and for every α∈{P,Q,S,R_1,…,R_m}, there is exactly one node v ∈ V_i such that Ψ_α(v) holds.Let G=(V,E)Ψ_0. Obviously, it does not contain any self-loop. By Lemma <ref>, there is V_0 such that G[V_0] is isomorphic to . Let K = {v ∈ V|Ψ_P(v) holds}. By Ψ__5, for every v∈ K, there is a set of vertices V_v = {u_1^v,…,u_m+2^v} such that the following holds. =0pt* Ψ_R_1(u_1^v), …,Ψ_R_m(u_m^v), Ψ_Q(u_m+1^v), Ψ_S(u_m+2^v) hold. * G[{v}∪ V_v] is isomorphic to .Suppose K = {v_1,…,v_n}. By Ψ__5 again, we have that {v_i}∪ V_v_i and {v_j}∪ V_v_j are disjoint, whenever v_i ≠ v_j.Now, for every vertex v∈ V, either (v)=1 or there is a α∈{U,P,Q,S,R_1,…,R_m} such that Ψ_α(v) holds. Moreover, it is not possible thatΨ_α(v) and Ψ_β(v) hold, for different α,β∈{U,P,Q,S,R_1,…,R_m}. By Lemma <ref>, if v is of degree 1 or that Ψ_U(v) holds, then u∈ V_0. Otherwise, v ∈ V_i, for some i=1,…,n. Thus, V is partitioned into V_0∪ V_1∪⋯∪ V_n. This completes our proof.§.§ Step 4: Three variable definition for P_6. Before we define the sentencefor _6, we need the following terminology. Let G=(V,E)Ψ_0. We say that two vertices u,v∈ V are in the same -component, if there is V'⊆ V such that the following holds. =0pt* u,v ∈ V'.* G[V'] is isomorphic to .* For every α∈{P,Q,S,R_1,…,R_m}, there is exactly one w ∈ V' such that Ψ_α(w) holds.We can define a three-variable formula Ψ_(x,y) such that Ψ_(x,y) holds if and only if x and y are in the same -component. This can be done as follows. Suppose α = S and β=R_l, and that Ψ_α(x) and Ψ_β(y) hold. Then, x and y are in the same -component is equivalent to stating that there is z such that E(x,z), Ψ_Q(z) and Ψ_Q,R_l(z,y) hold. We can enumerate similar formulas for every possible α and β, and conjunct them all to obtain a formula Ψ_(x,y) that asserts whether x and y are in the same -component.Now, the sentence Ψ__6 that defines _6 states as follows. For every adjacent vertices x and y, if they are not in the same -component, then for some R_l ∈{R_1,…,R_m}, either one of the following holds. =0pt* Ψ_S(x) and Ψ_R_l(y) hold.* Ψ_R_l(x) and Ψ_S(y) hold. The following lemma is immediate from Lemma <ref> and the intended meaning of Ψ__6. For every graph G=(V,E), if GΨ_0 ∧Ψ__6, then V can be partitioned into V=V_0∪ V_1 ∪…∪ V_n such that the following holds. =0pt* G[V_0] is isomorphic to .* For each i=1,…,n, G[V_i] is isomorphic to , and for every α∈{P,Q,S,R_1,…,R_m}, there is exactly one node v ∈ V_i such that Ψ_α(v) holds.* If there is an edge (u,v) such that u ∈ V_i and v∈ V_j, for some 1≤ i ≠ j ≤ n, then either Ψ_S(v), Ψ_R_l(u) hold or Ψ_S(u), Ψ_R_l(v) hold, for some R_l ∈{R_1,…,R_m}.Note also that every graph G=(V,E) that satisfies Ψ_0∧Ψ__6 is indeed a bipartite graph. 
Using the same notation as in Lemma <ref>, weassume that G[V_0] isitself. Furthermore, we also denote by V_i = {i^P,i^Q,i^S,i^R_1,…,i^R_l}, where the mapping (i^P,i^Q,i^S,i^R_1,…,i^R_m)↦ (d^P,d^Q,d^S,d^R_1,…,d^R_m) is an isomorphism from G[V_i] to . Then, G is a bipartite graph with the partition V= V'∪ V”, whereV' ={u_1,u_3,…,u_4m+1}∪{w_2,w_4,…,w_4m}∪{i^Q,i^R_1,…,i^R_l| i = 1,…,n}V”={u_2,u_4,…,u_4m}∪{w_1,w_3,…,w_4m+1}∪{i^P,i^S| i = 1,…,n}§.§ Step 5: The construction of Phi'. First, for each formula φ() of Φ, where =(z_1,…,z_t) and t≥ 3, we construct φ() with the same free variablesinductively as follows. Base case φ() is an atomic formula R_l(x,y), i.e., =(x,y) and x,y ∈{z_1,…,z_t}. Then,φ(x,y) :=Ψ_P(x)∧ Ψ_P(y)∧ ∃ z=1.4pt1.6[ [ Ψ_Q(z) ∧ E(y,z) ∧; ∃ y [ [ Ψ_S(y)∧E(z,y) ∧∃ z[ [ Ψ_R_l(z) ∧E(x,z) ∧ E(y,z) ]] ]] ]]The variable z is such that z∈{z_1,…,z_t} and z≠ x,y. Note also that variables y and z are being reused.The intuitive meaning of φ(x,y) is as follows. Assuming that Ψ_P(x) and Ψ_P(y) hold, φ(x,y) states that there are three vertices v,v',v” such that the following holds. =0pt* Ψ_Q(v), Ψ_S(v'), Ψ_R_l(v”) hold.* (y,v),(v,v'),(x,v”) and (v”,v') are edges. In a similar way, when φ() is an atomic formula x=y, then,φ(x,y) :=Ψ_P(x)∧ Ψ_P(y)∧ x=y.Induction stepφ() :==1.4pt1.2{[ φ_1() ∧φ_2(),φ()φ_1()∧φ_2();φ_1(), φ() φ_1(); ∃ xΨ_P(x) ∧φ_1(x,), φ()∃ xφ_1(x,) ]. Note that Φ' uses the same number of variables as Φ.We have the following lemma which states that Φ and Φ' are equi-satisfiable. For every formula φ(z_1,…,z_t)∈^k[R_1,…,R_m], the following holds. =0pt* For every structure = ⟨ A, R_1^,…,R_m^⟩, for every i_1,…,i_t∈ A such that φ(i_1,…,i_t),there is a graph G=(V,E) and u_1,…,u_t∈ V such thatG Ψ_0∧ Ψ__6 ∧ φ(u_1,…,u_t).* Vice versa,for every graph G=(V,E) and for every u_1,…,u_t∈ V such thatG Ψ_0∧ Ψ__6 ∧ φ(u_1,…,u_t),there is a structure = ⟨ A, R_1^,…,R_m^⟩ and i_1,…,i_t∈ A such that φ(i_1,…,i_t). For a structure = ⟨ A, R_1^,…,R_m^⟩, where A = {1,…,n}, let G=(V,E) be the following graph. =0pt* V= U ∪ W ∪ V_1 ∪⋯∪ V_n, where each V_i = {i^P,i^Q,i^S,i^R_1,…,i^R_m} and U={u_1,…,u_4m+1} and W={w_1,…,w_4m+1}.* G[U ∪ W] is isomorphic toand G[V_i] is isomorphic to , for each i=1,…,n.* Every vertex u ∈{1^P,…,n^P} is adjacent to u_1, and not to any other vertex in U.* Every vertex u ∈{1^Q,…,n^Q} is adjacent to u_2, and not to any other vertex in U.* Every vertex u ∈{1^S,…,n^S} is adjacent to u_3, and not to any other vertex in U.* For each R_l ∈{R_1,…,R_m}, every vertex u ∈{1^R_l,…,n^R_l} is adjacent to u_2l-1, and not adjacent to any other vertex in U.* For each R_l ∈{R_1,…,R_m}, for each (i,j)∈ R_l^, we have an edge (i^R_l,j^S) in G.By straightforward induction on formula φ(z_1,…,z_t), we can establish the following. For every i_1,…,i_t ∈ A:φ(i_1,…,i_t)GΨ_0∧Ψ__6∧φ(i_1^P,…,i_t^P).Vice versa, let G=(V,E)Ψ_0 ∧Ψ__6. Let V_0∪ V_1∪⋯∪ V_n be the partition of V, where V_i={i^P,i^Q,i^S,i^R_1,…,i^R_m}, for each i=1,…,n, as in Lemma <ref>. We can define a structure = ⟨ A, R_1^,…,R_m^⟩ as follows. =0pt* A={1,…,n}.* For each R_l ∈{R_1,…,R_m}, for every edge (i^R_l,j^S) in G, we have (i,j)∈ R_l^.Again, by straightforward induction on formula φ(z_1,…,z_t), we can establish the following. For every i_1,…,i_t ∈ A:φ(i_1,…,i_t)GΨ_0∧Ψ__6∧φ(i_1^P,…,i_t^P).This completes our proof. 
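Since the whole construction rests on the degree-based definitions of U, a small sketch (ours) shows how those definitions evaluate on a finite graph given as an adjacency dictionary; the example graph is a hypothetical two-vertex backbone with pendants.

def psi_eq1(adj, x):
    return len(adj[x]) == 1                 # x has degree exactly 1

def psi_U(adj, x):
    # x in U iff deg(x) != 1 and x has a neighbour of degree 1.
    return not psi_eq1(adj, x) and any(psi_eq1(adj, y) for y in adj[x])

def property_P1(adj):
    # Every vertex of degree 1 must be adjacent to a vertex in U.
    return all(any(psi_U(adj, y) for y in adj[x])
               for x in adj if psi_eq1(adj, x))

# Tiny example: a path u1-u2 with pendants w1, w2 (so U = {u1, u2}).
adj = {"u1": {"w1", "u2"}, "u2": {"w2", "u1"}, "w1": {"u1"}, "w2": {"u2"}}
assert psi_U(adj, "u1") and not psi_U(adj, "w1") and property_P1(adj)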
To complete our proof of Lemma <ref>, we set Φ' as follows.Φ' :=Ψ_0∧ Ψ__6 ∧ ΨThat Φ' is the desired sentence follows immediately from Lemmas <ref> and <ref>.Note also that for GΨ_0∧Ψ__6∧Φ, the additional edge needed to represent the relation R_l^(i,j) in G is between i^R_l and j^S, thus the partition V'∪ V” as defined in Equations (<ref>) and (<ref>) still preserves the bipartite-ness of G. § CONCLUDING REMARKSIn this paper we have shown that the spectrum of a sentenceusing at least three variables and binary relation symbols is linearly proportional to the spectrum of a sentence using the same amount of variables and only one symmetric binary relation symbol E, whose models are all bipartite graphs (Theorem <ref>). Building from our previous work <cit.>, we obtain that to settle Asser's conjecture, it is sufficient to consider only sentences using only three variables on bipartite graphs (Corollary <ref>), i.e., the following two sentences are equivalent. =0pt* The class of first-order spectra is closed under complement.* For every three-variable sentence φwhose models are all undirected bipartite graphs, the complement of (φ) is also a spectrum. The proof of Corollary <ref> follows closely the one in <cit.>. The direction from the first to the second bullet is trivial. The other direction is as follows. Define the following class .:={[(ϕ) [ ϕ; ] ]} :={[ - S S ∈ ]}Suppose that the second bullet holds, i.e., ⊆. Let S be a set of integers such that A∈2^n, where the input number is written in binary form. In <cit.>, we have already shown that S is the spectrum of a three-variable sentence using only binary relations. By Theorem <ref>, there is p and q such that the set S' ={px+q | x ∈ S}∈. By the assumption that ⊆, we have that -S' ∈=. Since addition/subtraction/multiplication/division by constant can be performed in linear time, we have -S ∈. By padding argument, this implies that for every set S ∈, the complement - S also belongs to . Then, Corollary <ref> follows immediately from =.Note that Corollary <ref> reduces Asser's conjecture in two directions: First, it reduces the number of variables to three, and second, it reduces to sentences whose models are all undirected bipartite graphs. It should be remarked that bipartite-ness is not first-order definable, thus, it will be interesting to obtain a characterization of sentences whose models are all bipartite graphs. We leave this as future work.It will also be interesting to show whether the linear blowup in Theorem <ref> is necessary. As pointed out in the introduction, Durand, et. al. showed that there is a strong evidence thatcollapsing the class of spectra involving arbitrary number of unary functions to a fixed number of unary functions is likely to be difficult <cit.>. Similar evidence for Theorem <ref> will be interesting.§.§ AcknowledgementThe authors would like to thank the anonymous referees for their excellent comments. They are also grateful to Arnaud Durand and Etienne Grandjean for their helpful comments that greatly improve the earlier version of this paper. The proof in Remark <ref>, which is simpler than our original proof, is due to them. The second author acknowledges the generous financial support ofTaiwan Ministry of Science and Technology under the grant no. 105-2221-E-002-145-MY2.plainurl 10Asser55 G. Asser. Das repräsentenproblem in prädikatenkalkül der ersten stufe mit identität. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 1:252–263, 1955. Durand96 A. Durand. 
Hiérarchies de Définissabilité Logique au Second Ordre,Ph.D. thesis, Université de Caen, 1996.DurandFL97 A. Durand, R. Fagin, and B. Loescher. Spectra with only unary function symbols. In Proceedings of CSL, pages 189–202, 1997.DJMM12 A. Durand, N. D. Jones, J. A. Makowsky, and M. More. Fifty years of the spectrum problem: survey and new results. Bulletin of Symbolic Logic, 18(4):505–553, 2012.DurandR96 A. Durand and S. Ranaivoson. First-order spectra with one binary predicate. Theoretical Computer Science, 160(1&2):305–320, 1996.Fagin73 R. Fagin. Contributions to the model theory of finite structures. PhD thesis, University of California, Berkeley, 1973.Fagin R. Fagin. Generalized first-order spectra and polynomial-time recognizable sets. In Proceedings of SIAM-AMS Complexity of Computation, volume 7, 1974.Fagin75 R. Fagin. A spectrum hierarchy. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 21:123–134, 1975.Grandjean84 E. Grandjean. The spectra of first-order sentences and computational complexity. SIAM Journal on Computing, 13(2):356–373, 1984.Grandjean85 E. Grandjean. Universal quantifiers and time complexity of random access machines. Mathematical Systems Theory, 18(2):171–187, 1985.Grandjean90 E. Grandjean. First-order spectra with one variable. Journal of Computer and System Sciences, 40(2):136–153, 1990.GrandjeanO04 E. Grandjean and F. Olive. Graph properties checkable in linear time in the number of vertices. Journal of Computer and System Sciences, 68(3):546–597, 2004.JS74 N. Jones and A. Selman. Turing machines and the spectra of first-order formulas. The Journal of Symbolic Logic, 39:139–150, 1974.KT15tocl E. Kopczyński and T. Tan. On the variable hierarchy of first-order spectra. ACM Transactions on Computational Logic, 16(2):17:1–17:12, 2015.KT15sicomp E. Kopczyński and T. Tan. Regular graphs and the spectra of two-variable logic with counting. SIAM Journal on Computing, 44(3):786–818, 2015.Lynch82 J. Lynch. Complexity classes and theories of finite models. Mathematical Systems Theory, 15(2):127–144, 1982.Pudlak75 P. Pudlák. The observational predicate calculus and complexity of computations. Commentationes Mathematicae Universitatis Carolinae, 16:395–398, 1975.Scholz52 H. Scholz. Ein ungelöstes problem in der symbolischen logic. The Journal of Symbolic Logic, 17:160, 1952.SHL65 R. Stearns, J. Hartmanis, and P. Lewis II. Hierarchies of memory limited computations. In FOCS, pages 179–190, 1965.
http://arxiv.org/abs/1706.08691v5
{ "authors": [ "Eryk Kopczynski", "Tony Tan" ], "categories": [ "cs.LO", "F.4.1; F.1.3" ], "primary_category": "cs.LO", "published": "20170627070735", "title": "A note on first-order spectra with binary relations" }
Handling PDDL3.0 State Trajectory Constraints with Temporal Landmarks

Eliseo Marzal, Mohannad Babli, Eva Onaindia, Laura Sebastia
Universitat Politècnica de València, Camino de Vera s/n, E46022-Valencia (Spain)
{emarzal, mobab, onaindia, lstarin}@dsic.upv.es

Temporal landmarks have been proved to be a helpful mechanism to deal with temporal planning problems, specifically to improve planners' performance and handle problems with deadline constraints. In this paper, we show the strength of using temporal landmarks to handle the state trajectory constraints of PDDL3.0. We analyze the formalism of TempLM, a temporal planner particularly aimed at solving planning problems with deadlines, and we present a detailed study that exploits the underlying temporal landmark-based mechanism of TempLM for representing and reasoning with trajectory constraints.

§ INTRODUCTION

In planning, a landmark is a fact that must be true in any solution plan. Since the influential work presented in <cit.> on the use of landmarks in planning, there have been multiple investigations that exploit landmarks for cost-optimal planning <cit.>, satisficing planning <cit.> or, more recently, goal recognition <cit.>.

Although the use of landmarks in temporal contexts has been less explored, there are two works that address the exploitation of temporal landmarks in planning. In <cit.>, the authors define temporal fact landmarks (facts that must hold between two time points) and temporal action landmarks, which state that some event (the start or end of an action) must occur at some time point. The temporal information is captured in a Simple Temporal Network <cit.> over the symbolic time points associated with each landmark. This approach is aimed at deriving temporal landmarks and constraints from planning problems and integrating them into domain-independent temporal planners in order to improve their performance. Overall, the results reveal that there is some benefit from using temporal landmarks in concurrent temporal planning problems like the TMS domain and in some non-temporally expressive domains in terms of solution quality.

In contrast, the approach presented in <cit.>, called TempLM, studies the use of temporal landmarks for solving planning problems with deadline constraints. In this approach, a temporal landmark is defined as a fact that must be achieved in a solution plan to satisfy the problem deadline constraints. TempLM discovers the causal (non-temporal) landmarks of the problem and then exploits deadlines to infer new (temporal) landmarks that must be accomplished to meet the deadlines. The limitation of TempLM is that it relies on the specification of deadlines for the goal propositions of the problem and it requires an upper time bound for the plan, which is automatically derived from the problem deadlines. Hence, when there are no deadlines in the problem, a sufficiently large estimated value must be provided as upper bound of the plan. TempLM shows an excellent performance in overconstrained problems with tight deadlines, which would clearly degrade with larger plan upper bounds.

Nonetheless, the key contribution of TempLM is its internal representation of temporal landmarks, which can be easily used to handle all the state trajectory constraints defined in PDDL3.0 <cit.>.
In this paper, we show that the interval representation of temporal landmarks of TempLM, along with the constraints defined between landmarks and their intervals, constitutes a very suitable framework for representing and reasoning with trajectory constraints. The next section summarizes the main features of TempLM, highlighting the representation of a landmark and the propagation of temporal constraints. The following section presents the 10 modal operators that specify the state trajectory constraints in PDDL3.0; for each operator, we show the corresponding landmark representation and the set of constraints that would be needed in TempLM in order to account for the constraint. Subsequently, we present an illustrative example that shows the powerful inference engine of TempLM when handling trajectory constraints. In the last section, we discuss the advantages and limitations of TempLM and we stress the utilization of temporal landmarks for representing other constraints in temporal planning.

§ OVERVIEW OF TEMPLM

TempLM is a framework specifically aimed at solving temporal planning problems with deadline constraints <cit.>. It assumes a subset of the semantics of the temporal model of PDDL2.1 <cit.>, the Timed Initial Literals (TILs) defined in PDDL2.2 <cit.>, as well as the within constraint introduced in PDDL3.0 <cit.>.

A temporal planning problem with deadline constraints is a tuple 𝒫=<P,O,I,G,D>, where P is a set of propositions, O is a set of durative actions in PDDL2.1, I is the initial state, G is a goal description and D is a set of deadline constraints of the form (p,t), denoting that proposition p must be achieved within t time units. A durative action a ∈ O in PDDL2.1 (<cit.>) is defined as a tuple ⟨ dur(a),Cond(a), Eff(a) ⟩ where dur(a) ∈ℛ^+ is the duration of the action; Cond(a)=SCond(a) ∪ ECond(a) ∪ Inv(a) (conditions to hold at the start, at the end or over all the duration of a); Eff(a)=SEff(a) ∪ EEff(a) (effects produced at the start or at the end of the execution of a).

A temporal plan Π is a set of pairs (a,t), where a ∈ O and t is the start execution time of a. For a given proposition p, start(p) and end(p) denote the time points when p is asserted and deleted, respectively, by any action a in Π. The duration (makespan) of a temporal plan Π is dur(Π)=max_∀ (a, t) ∈Π(t+dur(a)). Additionally, the upper bound of the temporal plan Π is set as T_Π = max{t | (p,t) ∈ D}.

TempLM first extracts the non-temporal landmarks of a problem 𝒫 <cit.> and then discovers a new set of (temporal) landmarks through the deadline constraints in D. A temporal landmark of a problem 𝒫 is a proposition of P that must hold in every plan that solves 𝒫 in order to satisfy D. In this paper, we focus exclusively on describing the internal representation of temporal landmarks in TempLM. We refer the reader to the works in <cit.> for details of the landmark extraction process.
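A minimal data model mirroring these definitions can be sketched as follows (our own sketch, not TempLM's implementation; names are illustrative only).

from dataclasses import dataclass, field

@dataclass
class DurativeAction:
    name: str
    duration: float
    s_cond: frozenset = frozenset()   # SCond(a): conditions at start
    e_cond: frozenset = frozenset()   # ECond(a): conditions at end
    inv: frozenset = frozenset()      # Inv(a): overall conditions
    s_eff: frozenset = frozenset()    # SEff(a): effects at start
    e_eff: frozenset = frozenset()    # EEff(a): effects at end

@dataclass
class Problem:
    propositions: set
    actions: list
    init: set
    goals: set
    deadlines: dict = field(default_factory=dict)  # p -> t, i.e., (p, t) in D

    def plan_upper_bound(self):
        # T_Pi = max t over all (p, t) in D
        return max(self.deadlines.values())

def makespan(plan, actions_by_name):
    # plan: list of (action_name, start_time) pairs
    return max(t + actions_by_name[a].duration for a, t in plan)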
For simplicity, the hoists of the original domain have been eliminated: crates can be loaded into the truck as long as they are clear (have nothing on top) and can be unloaded from the truck to be put on top of another crate or onto a pallet.Landmarks are characterized by their temporal intervals and relationships between them. Landmarks form a Temporal Landmarks Graph (TLG), a directed graph G=(V,E) where V are landmarks and E is a set of ordering constraints of the form l_i ≺_n l_j or l_i ≺_d l_j that denote a necessary or dependency ordering, respectively, meaning that l_imust happen before landmark l_j in every solution plan. Let's assume the goal of the problem is , which will be specified as. Then,is a temporal landmark and likewise all the propositions of the initial state and goal state. In this situation, a dependency ordering ≺_d is established, which denotes that even though there are two possible ways of reaching the goal - through distributoror -must always be satisfied beforein any solution plan. However, if the goal werethen the TLG would contain the necessary ordering ≺_n because in this case it is mandatory that trucktravels throughto reach the goal in time. Another interesting aspect is that since the surface whichmust be stacked on is not known, the propositionsandare not landmarks.Landmarks are also annotated with various temporal intervals that represent the validity of the corresponding temporal proposition <cit.>. Three types of intervals are identified:* The generation interval of a landmark l is denoted by [min_g(l), max_g(l)]. min_g(l) represents the earliest time point when landmark l can start in the plan. This value is determined by the time of the first proposition layer when l appears in a Temporal Relaxed Planning Graph (TRPG). max_g(l) represents the latest time point when l must start in order to satisfy the deadlines D of a problem 𝒫 and it is initialized as max_g(l)=T_Π.* The validity interval of a landmark l is denoted by [min_v(l), max_v(l)] and it represents the longest time that l can hold in the plan. Initially, this interval is set as min_v(l)=min_g(l) and max_v(l)=T_Π.* The necessity interval of a landmark l is denoted by [min_n(l), max_n(l)] and it represents the set of time points when l is required as a condition for an action to achieve other landmarks. Initially, min_n(l)=min_g(l) and max_n(l)=T_Π. Let us assume theandactions in the example of Figure <ref> have a duration of two time units each; and that the problem goal is g=, being this the only deadline constraint of the problem. Figure <ref> shows the initial TLG for this goal. Thus, T_Π=25 and the generation interval of g is:* max_g(g)=25 because the latest time at which g must be generated in order to satisfy the deadline is 25* min_g(g)=22 because the first appearance ofin the TRPG is at level 22: 20 (shortest route) + 2 () (note that the first TRPG layer that contains the effects ofis at level 2). For landmark l_1= we have that min_g(l_1)=0 and max_g(l_1)=25; for landmark l_2=, min_g(l_2)=10 and max_g(l_2)=25; and for l_3=, min_g(l_3)=20 and max_g(l_3)=25. Likewise, the validity intervals would initially take on the same values as for the generation intervals.§.§ Propagation of temporal constraints Once the intervals of the temporal landmarks are initialized in the TLG, constraints are propagated and the landmark intervals are updated accordingly. Causal relationships. 
§.§ Propagation of temporal constraints

Once the intervals of the temporal landmarks are initialized in the TLG, constraints are propagated and the landmark intervals are updated accordingly.

Causal relationships. The ordering constraints l_i ≺_n l_j or l_i ≺_d l_j represent causal relationships, where l_i ∈ Cond(a) and l_j ∈ Eff(a') for two actions a, a' ∈ O. If a=a' then it is a direct causal relationship represented by l_i ≺_n l_j. In any other case, l_i ≺_d l_j represents an indirect causal relationship that involves more than one action. The necessary and dependency orderings are transitively propagated across the TLG, creating further constraints. Particularly, for two landmarks involved in a causal relationship, a separating temporal distance between the time point when l_i is required and the time point when l_j is needed is defined according to the duration of the action(s) involved in the causal transition. Restricting our attention to the simple case when a=a', we have that: (a) dist(l_i,l_j)=dur(a) if l_i ∈ SCond(a) and l_j ∈ EEff(a); (b) dist(l_i,l_j)=ϵ if l_i ∈ ECond(a) and l_j ∈ EEff(a); and (c) dist(l_i,l_j)=−dur(a) if l_i ∈ ECond(a) and l_j ∈ SEff(a).[The definition of dist(l_i,l_j) is also dependent on the first or the last time when l_i is required. For the sake of simplicity, we define a general concept of distance. More details on this can be found in <cit.>.]

In the initial TLG for the goal g shown in Figure <ref>, we can observe that nodes are landmarks labeled with the three temporal intervals and edges are labeled with a necessary or dependency ordering constraint plus a temporal distance. For example, ≺_n(2) means it is necessary to have the crate in the truck at least 2 time units (the duration of the unload action) before having the crate at its destination; the former proposition is an SCond and the latter an EEff of the same unload action, respectively. Figure <ref> does not picture any dependency ordering because the deadline, 25, compels the truck to take the shortest route. However, the TLG for the goal with deadline 40 shown in Figure <ref> pictures a dependency ordering ≺_d(20). The distance 20 denotes that the truck must leave its origin 20 time units before reaching the destination, which is the minimal travel distance between them. Given that the deadline for the goal is 40 in this case, the truck can reach the destination through either distributor, but this information is not known yet. This is the reason for the dependency ordering in Figure <ref>, which means that at least two drive actions are involved in this causal relationship.

We apply an interval constraint propagation that restricts the domain of the temporal intervals according to the type of interval and the distance of the causal relationship. The min endpoints of the intervals are propagated forward in time and the max endpoints are propagated backward along time. A causal relationship of the form l_i ≺_{n,d} l_j between two landmarks l_i and l_j (l_i is required to generate l_j) implicitly defines the following interval constraints:

min_v(l_j) = max(min_v(l_j), min_v(l_i) + dist(l_i, l_j))
max_g(l_i) = min(max_g(l_i), max_g(l_j) − dist(l_i, l_j))

Thus, the min_v of l_j is subject to the minimum validity of l_i plus the duration of the action(s) that separates both landmarks. Likewise, the latest time when l_i must start in the plan depends on the latest time when l_j is required minus the temporal distance determined by the duration of the action(s) that are needed to generate l_j from l_i.

Back to the example of Figure <ref> with goal g, TempLM will update the generation intervals of the landmarks as shown in Table <ref>. Note that the max_g propagation goes backwards from the destination through the intermediate landmarks to finally reach the truck's initial location. The final TLG after propagation is shown in Figure <ref>.
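The causal updates lend themselves to a simple fixpoint loop over the TLG edges, as the following sketch of ours shows (landmark records only need min_g, max_g, min_v, max_v fields here).

from types import SimpleNamespace

def propagate_causal(li, lj, d):
    """Apply the two causal updates for l_i -> l_j with distance d;
    report whether any endpoint changed."""
    changed = False
    new_min_v = max(lj.min_v, li.min_v + d)     # min_v(l_j) pushed forward
    if new_min_v != lj.min_v:
        lj.min_v, changed = new_min_v, True
    new_max_g = min(li.max_g, lj.max_g - d)     # max_g(l_i) pushed backward
    if new_max_g != li.max_g:
        li.max_g, changed = new_max_g, True
    return changed

def propagate_tlg(edges):
    # edges: iterable of (l_i, l_j, dist); iterate all edges to a fixpoint.
    changed = True
    while changed:
        changed = False
        for li, lj, d in edges:
            changed |= propagate_causal(li, lj, d)

lm = lambda g0, g1: SimpleNamespace(min_g=g0, max_g=g1, min_v=g0, max_v=g1)
a, b = lm(2, 25), lm(22, 25)       # e.g. crate-in-truck, crate-at-destination
propagate_tlg([(a, b, 2)])          # unload separates them by 2 time units
assert a.max_g == 23 and b.min_v == 22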
Mutex relationships. Given l_i≺_{d,n} l_j, if l_i and l_j are mutex <cit.> then l_i and l_j cannot overlap in any way. The propagation of the mutex relationships updates max_v(l_i) to ensure l_i does not overlap with l_j. Thus, max_v(l_i) is updated to the minimum value among the current validity endpoints of l_i and the latest time when l_j must start in the plan minus the temporal distance between both landmarks. Particularly:

max_v(l_i) = min(max_v(l_i), min_v(l_j), max_g(l_j) − dist(l_i, l_j))
max_g(l_i) = min(max_v(l_i), max_g(l_i))

§.§ Search

TempLM searches in the space of partial plans. Nodes are represented by a pair (Π,S_t), where Π is a conflict-free partial plan and S_t is the state reached at time t=dur(Π) after executing Π in I. Nodes are expanded by finding the earliest start time of the set of applicable actions in S_t. Each node is associated to a TLG. A newly inserted action may cause the appearance of new temporal landmarks in the TLG of a node, and the propagation of the temporal constraints will update the landmarks intervals. As a result of the propagation, if for a given deadline (p,t) it turns out that max_g(p)>t or some inconsistency is found in the endpoints of the landmarks intervals, the node is pruned.

§ STATE TRAJECTORY CONSTRAINTS IN PDDL3.0

PDDL3.0 is the language used at the Fifth International Planning Competition (IPC-2006)[http://www.icaps-conference.org/index.php/Main/Competitions]. This extended language introduces new expressive functionalities such as strong and soft constraints on plan trajectories and soft problem goals or preferences <cit.>. In this work, we will exclusively focus on the strong state trajectory constraints and we will analyze how these are represented, interpreted and handled when using temporal landmarks.

State trajectory constraints are used to express conditions that must be met by the entire sequence of states visited during the execution of a plan. They are expressed through temporal modal operators over first order formulae involving state predicates. Actually, all the constraints expressed with the temporal modal operators of PDDL3.0 specify a temporal interval at which the state predicate must hold along the sequence of states of the plan execution. In the following, we analyze the semantics of the ten modalities of constraints introduced in PDDL3.0 as well as how they would be encoded in a framework based on temporal landmarks.
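Before examining each modality, the following sketch of ours fixes a common representation used by the operator checkers discussed next: a timed state trajectory <(S_0,0),(S_1,t_1),…,(S_n,t_n)> is a list of (state, time) pairs, with each state a set of ground propositions (richer goal descriptors would need a proper evaluator).

def holds(state, phi):
    # phi is a ground proposition in this sketch.
    return phi in state

def at_end(traj, phi):
    states = [s for s, _ in traj]
    return holds(states[-1], phi)

def always(traj, phi):
    return all(holds(s, phi) for s, _ in traj)

# Example trajectory: three states at times 0, 5, 10.
traj = [({"p", "q"}, 0), ({"q"}, 5), ({"q", "r"}, 10)]
assert always(traj, "q") and at_end(traj, "r")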
§.§ The at end operator

The syntax of this constraint is (at end ϕ), where ϕ is a goal descriptor (a first-order logic formula). It is used to identify conditions that must hold in the final state when the plan has been executed, making them equivalent to traditional goal conditions. Whenever a goal condition with no temporal modal operator is specified in a problem file, it is assumed to be an at end condition, thus preserving the standard meaning for existing goal specifications. The semantics of this operator is shown in formula <ref>.

<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (at end ϕ) iff S_n ⊨ ϕ

For example, (at end ϕ), with ϕ stating the location of a crate, indicates that the crate must be at that location in the goal state. The at end operator defines an interval [t_i,t_j] for the occurrence of the goal condition ϕ such that 0 ≤ i ≤ n and j=n. Note that the fulfillment of ϕ throughout [t_i,t_j] does not necessarily imply that there must be a single occurrence of ϕ. Particularly, the expression (at end ϕ) refers to the last appearance of ϕ in the plan, so that the constraint will be met for such occurrence of ϕ as long as j=n.

Taking into account the above considerations, a constraint of the form (at end ϕ) allows us to make the following implications regarding the information of temporal landmarks:
* ϕ is a temporal landmark since it is a mandatory condition to be satisfied in a particular time interval;
* given that ϕ is needed at the end of the plan, max_n(ϕ) = t_n, which implies that max_v(ϕ) = t_n because max_n(ϕ) ≤ max_v(ϕ);
* the non-compliance of the constraint in the TLG of a node cannot be used to prune partial plans during search. Only when the plan is complete will the non-compliance of this constraint be used to discard a plan as a valid solution.

§.§ The always operator

The semantics of a constraint (always ϕ) is shown in formula <ref>, which expresses that the goal condition ϕ must hold in every state in order for the modal formula to hold over the trajectory.

<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (always ϕ) iff ∀ i: 0 ≤ i ≤ n · S_i ⊨ ϕ

A constraint (always ϕ) expresses that the goal condition ϕ must be true throughout the plan. For instance, if the problem requires to have pallet1 clear all the time, we will use the constraint (always (clear pallet1)). Thereby, the expression (always ϕ) defines a temporal interval [t_i,t_j] for ϕ such that i=0 and j=n. In this case, it is clear that there must be a single occurrence of ϕ that holds over [t_i,t_j]. The temporal landmark information that can be inferred through a constraint (always ϕ) is the following:
* ϕ is a temporal landmark, as the constraint denotes a proposition that must be true in every solution plan over the interval [0,t_n];
* ϕ is needed throughout the interval [0,t_n], so min_n(ϕ) = 0 and max_n(ϕ) = t_n, which in turn implies that min_v(ϕ) = 0 and max_v(ϕ) = t_n because [min_n(ϕ), max_n(ϕ)] ⊆ [min_v(ϕ), max_v(ϕ)];
* it allows pruning a search node when adding a new action to its TLG entails a modification of the necessity or validity interval of ϕ.

For example, if the TLG of a node contains (always (clear pallet1)) and an action α that deletes (clear pallet1) is added to the node, then the node will be pruned because the value of max_v((clear pallet1)) is modified by the deletion. Additionally, any partial plan of the tree which does not contain ϕ will be also pruned.

§.§ The at-most-once operator

The syntax of this operator is (at-most-once ϕ) and the semantics is expressed in formula <ref>.

<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (at-most-once ϕ) iff ∀ i: 0 ≤ i ≤ n · if S_i ⊨ ϕ then ∃ j: j ≥ i · ∀ k: i ≤ k ≤ j · S_k ⊨ ϕ and ∀ k: k > j · S_k ⊭ ϕ

A constraint (at-most-once ϕ) obviously denotes that ϕ must occur at most once in the plan, if at all. That is, this constraint does not impose a mandatory occurrence of ϕ, but if it happens then only a single occurrence of ϕ must appear in the plan. Consequently, the single occurrence of ϕ will be valid over an interval [t_i,t_j], where 0 ≤ i ≤ n and j ≥ i.

Regarding a temporal landmark representation, a constraint of the form (at-most-once ϕ) leads to the following derivations:
* ϕ cannot be labeled as a temporal landmark since a mandatory occurrence is not imposed;
* if ϕ is a landmark then we know that max_g(ϕ) ≤ t_n, which indicates that ϕ must be obtained before completion of the plan;
* it prevents having more than one occurrence of ϕ, so any node that violates this condition will be pruned. This has some implications when solving conflicts that involve adding a new occurrence of ϕ. For example, let's assume that max_v(ϕ)=d and max_n(ϕ)=d' such that d' > d. In this case, a new occurrence of ϕ is needed to satisfy the necessity interval. This conflict is solvable in TempLM by introducing another instance of ϕ (ϕ') as long as ϕ' is consistent with the intervals of the rest of the landmarks in the TLG of the node <cit.>. However, such a conflict would be unsolvable if a constraint (at-most-once ϕ) exists in the planning problem specification.

§.§ The sometime operator

The semantics of a constraint (sometime ϕ) is presented in formula <ref>. As the name and semantics express, a constraint (sometime ϕ) indicates that ϕ must occur at least once in the plan. Every single occurrence of ϕ must hold over an interval [t_i,t_j], where 0 ≤ i ≤ n and j ≥ i.

<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (sometime ϕ) iff ∃ i: 0 ≤ i ≤ n · S_i ⊨ ϕ

A constraint of the form (sometime ϕ) allows us to derive the following information related to temporal landmarks:
* ϕ is a temporal landmark as it must necessarily occur in the plan at least once;
* it must hold that max_g(ϕ) ≤ t_n to ensure that ϕ occurs no later than the last state of the plan trajectory;
* similarly to the at end constraint, the non-compliance of this constraint in the TLG of a node cannot be used to prune nodes during search. Once the plan construction is finished, we will be able to discard it as a valid solution in case ϕ never holds in the plan.

§.§ The within operator

The within operator is used to express deadlines. The syntax of this operator is (within t ϕ), where t is any numeric literal (in STRIPS domains it will be restricted to integer values) and ϕ has the same meaning as in all the previous operators. The semantics associated to this operator is shown in formula <ref>.

<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (within t ϕ) iff ∃ i: 0 ≤ i ≤ n · S_i ⊨ ϕ ∧ t_i ≤ t

For example, (within 10 ϕ), with ϕ stating that a truck is at a given depot, specifies that the truck must be at that depot by time 10 at the latest. The semantics of the within operator does not state the specific occurrence of the goal to which the constraint is applied in case ϕ is achieved more than once in the plan. More specifically, the definition states that, if a goal is achieved more than once in the plan, it suffices that one appearance of ϕ fulfills the within constraint. On the other hand, there is no indication in the semantics that the goal condition must persist until the goal state; that is, the above constraint is satisfied as long as ϕ is met in time in the plan, irrespective of the final location of the truck. The information of temporal landmarks that can be derived from a constraint (within t ϕ) (for a particular occurrence of ϕ) is:
* ϕ is a temporal landmark as it must necessarily occur in the plan at least once;
* it must always be true that max_g(ϕ) ≤ t;
* given a partial plan (Π,S_t') such that t' ≥ t, the node will be pruned if ϕ does not hold in Π.
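In the trajectory representation above, a within checker and the corresponding node-pruning test look as follows (our own sketch; the pruning condition restates the last bullet).

def within(traj, t, phi):
    # (within t phi): some state at time <= t satisfies phi.
    return any(phi in state and ti <= t for state, ti in traj)

def within_prunes(partial_plan_props, t_reached, t, phi):
    # A node (Pi, S_t') with t' >= t is pruned if phi was never achieved in Pi.
    return t_reached >= t and phi not in partial_plan_props

traj = [({"p"}, 0), ({"p", "goal"}, 8), ({"goal"}, 12)]
assert within(traj, 10, "goal") and not within(traj, 5, "goal")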
However, such a conflict would be unsolvable if a constraint (at-most-once ϕ) exists in the planning problem specification. §.§ The sometime operator The semantics of a constraint (sometime ϕ) is presented in formula <ref>. As the name and semantics express, a constraint (sometime ϕ) indicates that ϕ must occur at least once in the plan. Every single occurrence of ϕ must hold over an interval [t_i,t_j], where 0 ≤ i ≤ n and j ≥ i. <(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (sometime ϕ) iff ∃ i: 0 ≤ i ≤ n · S_i ⊨ ϕ A constraint of the form (sometime ϕ) allows us to derive the following information related to temporal landmarks:* ϕ is a temporal landmark as it must necessarily occur in the plan at least once* it must hold that max_g(ϕ) ≤ t_n, to ensure that ϕ occurs no later than the last state of the plan trajectory* similarly to the at end constraint, the non-compliance of this constraint in the TLG of a node cannot be used to prune nodes during search. Once the plan construction is finished, we will be able to discard it as a valid solution in case ϕ never holds in the plan.§.§ The within operator The operator within is used to express deadlines. The syntax of this operator is (within t ϕ), where t is any numeric literal (in STRIPS domains it will be restricted to integer values) and ϕ has the same meaning as in all the previous operators. The semantics associated to this operator is shown in formula <ref>.<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (within t ϕ) iff ∃ i: 0 ≤ i ≤ n · S_i ⊨ ϕ ∧ t_i ≤ t For example, a constraint (within 10 ϕ), with ϕ stating that a given truck is at a given depot, specifies that the truck must be in the depot by time 10 at the latest. The semantics of the within operator does not state the specific occurrence of the goal to which the constraint is applied in case ϕ is achieved more than once in the plan. More specifically, the definition states that, if a goal is achieved more than once in the plan, a single appearance of ϕ suffices to fulfill the within constraint. On the other hand, there is no indication in the semantics that the goal condition must persist until the goal state; that is, the above constraint is satisfied as long as ϕ is met in the plan, irrespective of the final location of the truck. The information of temporal landmarks that can be derived from a constraint (within t ϕ) (for a particular occurrence of ϕ) is:* ϕ is a temporal landmark as it must necessarily occur in the plan at least once* it must always be true that max_g(ϕ) ≤ t* given a partial plan (Π,S_t') such that t' ≥ t, the node will be pruned if ϕ does not hold in Π §.§ Operators always-within, sometime-after and sometime-before These three operators share a similar syntax and semantics as they all involve two goal conditions in the constraint. The syntax is as follows: (always-within t ϕ ψ), (sometime-after ϕ ψ) and (sometime-before ϕ ψ). The constraints only differ in the temporal interval specified for the occurrence of the second goal condition. The semantics of the three operators are shown in formulas <ref>, <ref> and <ref>.
<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (always-within t ϕ ψ) iff ∀ i: 0 ≤ i ≤ n · if S_i ⊨ ϕ then ∃ j: i ≤ j ≤ n · S_j ⊨ ψ and t_j - t_i ≤ t<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (sometime-after ϕ ψ) iff ∀ i: 0 ≤ i ≤ n · if S_i ⊨ ϕ then ∃ j: i ≤ j ≤ n · S_j ⊨ ψ<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (sometime-before ϕ ψ) iff ∀ i: 0 ≤ i ≤ n · if S_i ⊨ ϕ then ∃ j: 0 ≤ j < i · S_j ⊨ ψ The semantics of the three operators express the following characteristics:* the constraints are not restricted to a single occurrence of ϕ and ψ* the constraints apply only if ϕ occurs in the plan* the constraints imply that for every occurrence of ϕ there must exist at least one occurrence of ψ that satisfies the corresponding temporal requirement* it is not mandatory that every occurrence of ψ meets the constraint as long as there exists at least one occurrence of ψ that does meet the constraint for every ϕ Specifically, a constraint of the form (always-within t ϕ ψ) indicates that ψ must hold within t time units from the occurrence of ϕ. A constraint (sometime-before ϕ ψ) is met if ψ holds before ϕ, and a constraint (sometime-after ϕ ψ) is satisfied if ψ holds after ϕ. Regarding the information of temporal landmarks, we can infer the following derivations:* for the three operators: if ϕ is a temporal landmark, then ψ is a temporal landmark too as it must necessarily occur in the plan at least once* for the operator always-within: it must hold that ∀ϕ ∃ψ : max_g(ψ) ≤ max_g(ϕ) + t. Thus, assuming that ϕ_1 is the first occurrence of ϕ, for the remaining occurrences ϕ_i: i > 1, if max_g(ϕ_i) ≤ max_g(ϕ_1) + t then the same occurrence of ψ will satisfy all ϕ_i; otherwise, for occurrences ϕ_j: j > 1 such that max_g(ϕ_j) > max_g(ϕ_1) + t, a different occurrence of ψ, say ψ', will be needed to satisfy the constraint of ϕ_j.* for the operator sometime-after, it must hold that ∀ϕ ∃ψ : max_v(ψ) ≥ max_g(ϕ)* for the operator sometime-before, it must hold that ∀ϕ ∃ψ: max_g(ψ) ≤ max_g(ϕ)* the existence of a constraint (always-within t ϕ ψ) allows discarding a node (Π,S_t') whose time t' exceeds the occurrence time of ϕ by more than t, if Π contains ϕ but not a matching ψ* a constraint (sometime-before ϕ ψ) will allow to immediately prune a node which contains ϕ but not a preceding ψ* a constraint (sometime-after ϕ ψ) can only be used to prune nodes that contain finished plans in which ϕ holds and ψ does not. §.§ The hold-during operator The semantics of a constraint (hold-during u_1 u_2 ϕ) is expressed in formula <ref>, indicating that ϕ must hold during the interval [u_1,u_2). More particularly, formula <ref> covers three cases: when [u_1,u_2) falls entirely within the plan trajectory (first case); when [u_1,u_2) falls partially within the plan trajectory (second case); and when [u_1,u_2) falls outside the plan trajectory (third case).
<(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (hold-during u_1 u_2 ϕ) iff: if t_n > u_1 then ∀ i: 0 ≤ i ≤ n · if u_1 ≤ t_i < u_2 then S_i ⊨ ϕ, and ∀ j: 0 ≤ j < n · if t_j ≤ u_1 < t_{j+1} then S_j ⊨ ϕ; if t_n ≤ u_1 then S_n ⊨ ϕ A constraint of the form (hold-during u_1 u_2 ϕ) allows us to derive the following information related to temporal landmarks:* ϕ is a temporal landmark since it must necessarily occur in the plan at least once* for the first case, given that it is mandatory for ϕ to hold between u_1 and u_2, we have that min_n(ϕ) ≤ u_1 and u_2 < max_n(ϕ); that is, hold-during determines that ϕ is needed at least between [u_1,u_2)* for the second case, given that it is mandatory for ϕ to hold between u_1 and t_n, we have that min_n(ϕ) ≤ u_1 and max_n(ϕ)=t_n; that is, hold-during determines that ϕ is needed at least between [u_1,t_n)* for the third case, given that it is mandatory for ϕ to hold at t_n, we have that min_n(ϕ)=max_n(ϕ)=t_n; that is, hold-during determines that ϕ is needed at t_n* the search process will prune any node in which some restriction modifies the necessity interval [u_1,u_2) of ϕ (for the first case) or modifies the necessity interval [u_1,t_n) of ϕ (for the second case), or it will discard finished plans that do not contain ϕ (for the last case) §.§ The hold-after operator The semantics of (hold-after t ϕ) imposes that the goal condition ϕ must hold in a state after t time units have elapsed from the initial state at time 0 (see formula <ref>). Note that the semantics does not say that ϕ must exclusively hold after time t, so it could be the case that ϕ also holds before t. On the other hand, if t is a time later than the finish time of the plan at t_n, then ϕ must just hold in the last state (second if in formula <ref>). <(S_0, 0),(S_1, t_1), …, (S_n, t_n)> ⊨ (hold-after t ϕ) iff: if t_n > t then ∃ i: 0 ≤ i ≤ n · S_i ⊨ ϕ and t_i > t; if t_n ≤ t then S_n ⊨ ϕ A constraint of the form (hold-after t ϕ) allows us to derive the following information related to temporal landmarks:* ϕ is a temporal landmark as it must necessarily occur in the plan at least once* the constraint max_v(ϕ) ≥ t must be met* this constraint can only be used to discard finished plans that do not contain ϕTables <ref> and <ref> summarize the landmarks derived from the PDDL3.0 modal operators and the updates applied on the endpoints of the landmark intervals, respectively.§ APPLICATION EXAMPLES In this section we present some practical examples that show the behaviour of TempLM when handling several PDDL3.0 state trajectory constraints on the scenario introduced in Figure <ref>. The actions of this domain are:The drive action allows a truck to move between two locations, each of which is a depot or a distributor. The truck can move without carrying any crates. The duration of this action is given by the time to drive between the two locations.The load action is used to load a crate, which lies on a surface at some location, into the truck. As a side effect, the surface where the crate was found becomes clear.The unload action puts a crate carried by a truck onto a surface located at the same place as the truck.We will now show the temporal information that can be extracted when applying several state trajectory constraints on this problem. Example with a within constraint. Let's assume the problem goal is a delivery goal with deadline 20. In this case, TempLM finds that min_g = 22 for the goal (see the calculation of this value on page 2) and max_g = 20, so the TLG will not be generated and TempLM will return 'unsolvable problem'.Let's now suppose that the problem contains two delivery goals, each with its own deadline. The initial TLG before propagation for this problem is shown in Figure <ref>[Only the most relevant landmarks are displayed].
As explained in section Overview of TempLM, the first goal proposition is a landmark because it is the only way to satisfy the corresponding goal. On the other hand, the presence of the truck at the intermediate distributor is a landmark too, because the truck must go by that distributor to load the crate. Then, the min endpoint of the validity interval of the goal landmark is updated to min_v = max(min_v, min_v + 10, min_v + 15) = 30. After propagating this interval modification, we have that min_v = 32 for the second goal, which obviously entails an interval inconsistency because its max_g = 25 < min_v = 32. This is an indication that the only way of achieving this goal at time 25 is traveling through the other distributor. In turn, this means that one delivery is ordered after the other (i.e., one crate is transported after the other), thus causing another inconsistency in the remaining goal landmark because its crate will not be delivered in time. This is also an indication that the problem is unsolvable, and TempLM will detect this situation before even starting the search process.Example with an always constraint. In this case, we will assume that the only goal is delivering a single crate. Figure <ref> shows the initial TLG for this goal[The landmarks of the initial state are not shown for the sake of simplicity]. In the figure, we can observe a dependency ordering between the two truck-location landmarks, representing that the truck will reach the goal distributor after an intermediate location (in this case, since the deadline is at 40, we don't know yet through which of the two possible locations the route of the truck must go). Another observation is that the pallet on which the crate will be stacked is unknown, which is why the corresponding propositions are not landmarks.If we define a constraint (always ϕ), ϕ becomes a landmark with validity interval [0,40]. During the search process, two actions that achieve the effect of ϕ are found, one unloading the crate at each of the two candidate locations. Given that the application of the first one modifies the value of max_v(ϕ) when the crate is unloaded there, and that an always constraint compels max_v(ϕ)=t_n throughout the plan, the only viable option is to use the second action, and TempLM would discard the node that unloads the crate at the first location. Example with an always-within constraint. Continuing with a problem that contains the single goal above, let's suppose that we add a constraint (always-within t ϕ ψ). Since the deadline for the problem goal is not very tight (max_g = 40), the new constraint does not affect the max endpoint of the generation interval of the goal. However, new information could be inferred during the search process. For instance, if ϕ is achieved at t=5 then max_g(ψ)=27, which would allow us to infer that ψ must now be a landmark.Example with a hold-during constraint. Assuming we have the same goal as above, let's suppose the truck must go through some maintenance repair in its depot before starting the delivery. We define the restriction (hold-during 0 10 ϕ), with ϕ stating that the truck is at the depot, to denote that the truck must stay there for 10 time units for the maintenance work. This restriction does not alter the initial necessity interval of ϕ, which is [0,18] as can be seen in Figure <ref> (18 is the latest time that the truck can stay in the depot in order to achieve the goal at 40). Nodes that comprise partial plans in which the truck is not in the depot up to time 10 will be eliminated during the search process; that is, nodes that include a drive action leaving the depot between 0 and 10.Example with an at end constraint. Assuming we have the same goal as above, this example shows a situation in which, besides satisfying the goal, the truck must end the transportation at a given distributor. This implies defining also the constraint (at end ψ), with ψ stating that the truck is at that distributor, which makes ψ become a landmark with validity interval [0,t_n] and max_n(ψ) = t_n, and introduces the ordering ϕ' ≺_d ψ with respect to the other location ϕ' of the truck. This ordering is motivated because ϕ' and ψ are mutex and ψ must happen at the end due to the at end constraint.During the plan construction, given that the goal deadline is at time 40 and hence the truck can reach the final distributor by either of two routes, the search tree will comprise two branches that follow these two alternatives. Let's analyze the impact of the at end constraint in the second branch, the one that traverses the distributor where the truck must end.
In this case, the truck must go through that distributor to reach the goal location, which implies the ordering ψ ≺_n ϕ_g and max_v(ψ) < t_n. Then, a conflict arises because the TLG contains the landmark ψ with max_n(ψ) = t_n. As explained in the section of the at-most-once modal operator, when a landmark is found to be needed beyond its maximum validity, TempLM solves this conflict by introducing a new occurrence of the landmark, ψ', and this new occurrence is the one that will be ordered before the goal. Thus, the final TLG will contain ψ' ≺_n ϕ_g ≺_d ψ.Example with an at-most-once constraint. In this example, the goal is to switch the location of two packages between two distributors. Figure <ref> shows the initial TLG for this problem. We can observe there is an inconsistency between the validity intervals of the two truck-location landmarks, because these two landmarks are mutex and cannot co-exist (the truck cannot be simultaneously in both distributors). TempLM is not able to decide the order of these two landmarks with the current deadlines, both orderings thus being possible.Let's suppose the problem now includes the constraint (at-most-once ϕ), with ϕ stating that the truck is at the first distributor, and that a node n=(Π,S_t) whose Π orders the first distributor before the second is found during the search process. Π embodies a plan where the truck drops first by the first distributor to load its crate, which in turn implies that the truck will need to get back to it to unload the other crate, thus violating the constraint (at-most-once ϕ). Therefore, the node n will be discarded. In this example, the only feasible solution is a plan that contains the opposite ordering, meaning that by the time the truck reaches the first distributor to load its crate, the truck already contains the crate to be unloaded there. § DISCUSSION: BEYOND PDDL3.0 The exposition presented in the two previous sections reveals that the temporal landmarks formalism of TempLM is a very appropriate mechanism to deal with state trajectory constraints. It is certainly true that the functioning of TempLM is conditioned by the upper time bound of the plan T_Π, which can be set as the maximum value of all the deadline constraints defined in the problem or as any particular value, and that the less restrictive T_Π is, the less information will be extracted from the trajectory constraints. Nevertheless, considering that the constraints at end, sometime, sometime-after and hold-after are only applicable over finished plans, and that the constraints always, at-most-once and sometime-before are easily checkable in any partial plan regardless of the deadlines of the problem, we can conclude that the constraints that mostly affect the behaviour and performance of a temporal planner are within, hold-during and always-within, which all define a deadline constraint. Interestingly, adapting makespan-minimization heuristics to account for state trajectory constraints is still a challenging and unexplored line of investigation.Besides the potential of temporal landmarks to handle trajectory constraints, we envision some further functionalities. For instance, one is not allowed to express in PDDL3.0 that a proposition ψ must hold within t time units from the end of another proposition ϕ. The specification of state trajectory constraints that involve two propositions ϕ and ψ is always related to the occurrence time of the first proposition ϕ, irrespective of whether ϕ is true or not when ψ holds. Handling a constraint of the form "ψ must hold within t time units from the end of ϕ" would be easily encoded with the temporal constraint max_g(ψ) ≤ max_v(ϕ) + t.Another interesting issue is to be able to specify persistence of facts. Persistence would be expressed by nesting the modalities (an always subformula embedded within another modal operator), and this nesting is not allowed in standard PDDL3.0 syntax[Personal communication with Derek Long].
If PDDL3.0 were extended to include, for example, a modal operator expressing that ϕ must persist for t time units once achieved, this would be easily encoded in TempLM as max_n(ϕ) ≥ max_g(ϕ) + t.Last but not least, TempLM can also be adapted to the particular features of any temporal model; e.g., Allen's interval algebra <cit.>. Intervals of the algebra would be represented by means of the landmark intervals, and the 13 base relations would be captured by setting the appropriate temporal constraints between the max_g and max_v of the temporal landmarks. For instance, ϕ overlaps ψ would be encoded as max_v(ϕ) ≥ max_g(ψ); and ϕ during ψ as max_g(ϕ) ≥ max_g(ψ) and max_v(ϕ) ≤ max_v(ψ).A practical application of state trajectory constraints is the delivery of perishable goods such as fish or seafood. Companies must not only meet the delivery deadlines but also consider the best transport means for each product. Hence, depending on the type of product (fresh, frozen or long-term preserved fish products) and the temperature of the refrigerated transport (ice-cooled or machine-cooled wagons), the amount of time goods are exposed to particular temperatures must not exceed a time limit so as to ensure the freshness, nutritional value and preservation of the fishing goods.All in all, we can conclude that the temporal formalism of TempLM offers great flexibility to express any kind of temporal constraint in temporal planning problems.§ ACKNOWLEDGEMENTS This work has been partly supported by the Spanish MINECO under project TIN2014-55637-C2-2-R and the Valencian project PROMETEO II/2013/019.
http://arxiv.org/abs/1706.08317v1
{ "authors": [ "Eliseo Marzal", "Mohannad Babli", "Eva Onaindia", "Laura Sebastia" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20170626105657", "title": "Handling PDDL3.0 State Trajectory Constraints with Temporal Landmarks" }
Samir Choudhuri^1,2 (corresponding author: [email protected]), Nirupam Roy^3, Somnath Bharadwaj^4, Sk. Saiyad Ali^5, Abhik Ghosh^6,7, Prasun Dutta^8
^1 Department of Physics & Centre for Theoretical Studies, IIT Kharagpur, Pin: 721302, India
^2 National Centre For Radio Astrophysics, Post Bag 3, Ganeshkhind, Pune 411007, India
^3 Department of Physics, Indian Institute of Science, Bangalore 560012, India
^4 Department of Physics and Meteorology & Centre for Theoretical Studies, IIT Kharagpur, 721302, India
^5 Department of Physics, Jadavpur University, Kolkata 700032, India
^6 Dept of Physics and Astronomy, University of the Western Cape, Robert Sobukwe Road, Bellville 7535, South Africa
^7 SKA SA, The Park, Park Road, Pinelands 7405, South Africa
^8 Department of Physics, IIT (BHU), Varanasi 221005, India
The "Tapered Gridded Estimator" (TGE) is a novel way to directly estimate the angular power spectrum from radio-interferometric visibility data that reduces the computation by efficiently gridding the data, consistently removes the noise bias, and suppresses the foreground contamination to a large extent by tapering the primary beam response through an appropriate convolution in the visibility domain. Here we demonstrate the effectiveness of TGE in recovering the diffuse emission power spectrum through numerical simulations. We present details of the simulation used to generate low frequency visibility data for a sky model with extragalactic compact radio sources and diffuse Galactic synchrotron emission. We then use different imaging strategies to identify the most effective option for point source subtraction and to study the underlying diffuse emission. Finally, we apply TGE to the residual data to measure the angular power spectrum, and assess the impact of incomplete point source subtraction on recovering the input power spectrum C_ℓ of the synchrotron emission. This estimator is found to successfully recover the C_ℓ of the input model from the residual visibility data. These results are relevant for measuring diffuse emission such as the Galactic synchrotron emission. It is also an important step towards characterizing and removing both diffuse and compact foreground emission in order to detect the redshifted 21cm signal from the Epoch of Reionization. Keywords: methods: statistical; methods: data analysis; techniques: interferometric; (cosmology:) diffuse radiation§ INTRODUCTION A detailed investigation and analysis of the Galactic diffuse synchrotron emission power spectrum can be used to study the distribution of cosmic ray electrons and the magnetic fields in the interstellar medium (ISM) of the Milky Way, and is very interesting in its own right <cit.>. On the other hand, at a very different scale, observations of redshifted 21 cm radiation from neutral hydrogen (HI) hold the potential of tracing the large scale structure of the Universe over a large redshift range of 200 ≥ z ≥ 0. Accurate cosmological HI tomography and power spectrum measurement, particularly from the Epoch of Reionization (EoR), by ongoing or future low-frequency experiments will provide us with a significant amount of information about various astrophysical and cosmological phenomena to enhance our present understanding of the Universe. Interestingly, since one of the main challenges in the statistical detection of the redshifted 21cm signal arises from the contamination by Galactic and extragalactic "foregrounds" <cit.>, these two aspects are also quite related. The two major foreground components for cosmological HI studies are (1) the bright compact ("point") sources and (2) the diffuse Galactic synchrotron emission <cit.>.
Detection of the weak cosmological HI signal will require a proper characterization and removal of point sources as well as the diffuse foreground. Naturally, a significant amount of effort has gone into addressing the problem of foreground removal for detecting the 21cm power spectrum from the EoR <cit.>. In contrast, foreground avoidance <cit.> is an alternative approach based on the idea that contamination from any foreground with smooth spectral behaviour is confined only to a wedge in cylindrical (k_⊥, k_∥) space due to the chromatic coupling of an interferometer with the foregrounds. The HI power spectrum can be estimated from the uncontaminated modes outside the wedge region, termed the EoR window, where the HI signal is dominant over the foregrounds. With their merits and demerits, these two approaches are considered complementary <cit.>. Here we have considered the issue of estimating the angular power spectrum directly from the radio-interferometric "visibility" data. In this endeavor, we have developed a novel and fast estimator of the angular power spectrum that consistently avoids the noise bias, and tested it with simulated diffuse Galactic synchrotron emission <cit.>. Here, we have further developed the simulations to include the point sources in the sky model (as well as instrumental noise) to investigate the effectiveness of the estimator in recovering the diffuse emission power spectrum in the presence of the point sources. This paper describes the details of the simulations and analysis, including the adopted point source modeling and subtraction strategies, and their effects on the residual diffuse emission. We demonstrate that, by using this newly developed Tapered Gridded Estimator (hereafter TGE), we can avoid some of the complications of wide-field low frequency imaging by suitably tapering the primary beam during power spectrum estimation. A companion paper has reported the usefulness of the new estimator in recovering the diffuse emission power spectrum from the residual data in such a situation <cit.>. A further generalization of the estimator to deal with spherical and cylindrical power spectra is presented in <cit.>. Please note that this is part of a coherent effort of end-to-end simulation of realistic EoR signal and foreground components, and finally using a suitable power spectrum estimator to recover the signal. However, even though these exercises are in the context of EoR experiments, for the sake of simplicity, we have so far not included the weak cosmological signal in the model. Here we establish the ability of the developed estimator to recover the diffuse emission power spectrum accurately after point source subtraction. Thus, apart from EoR experiments, these results are also relevant in more general situations, e.g. the detailed study of Galactic synchrotron emission <cit.>.The current paper is organized as follows. In Section 2, we discuss the details of the point source and diffuse emission simulation. Sections 3 and 4 present the analysis using different CLEANing options for point source subtraction and the results of power spectrum estimation. Finally, we present the summary and conclusions in Section 5. § MULTI-FREQUENCY FOREGROUND SIMULATION In this section we describe the details of the foreground simulation to produce the sky model for generating visibilities for a low radio frequency observation with an interferometer.
Although the simulation described in this paper is carried out specifically for a 150 MHz observation with the Giant Metrewave Radio Telescope (GMRT), it is generic and can easily be extended to other frequencies and other similar telescopes, including the Square Kilometre Array (SKA). Earlier studies <cit.> have found that, for 150 MHz GMRT small field observations, the bright compact sources are the dominant foreground component for the EoR signal at angular scales ≤4^∘, the other major component being the Galactic diffuse synchrotron emission <cit.>. We build our foreground sky model keeping close to the existing observational findings. The sky model includes the two main foreground components: (i) discrete radio point sources and (ii) diffuse Galactic synchrotron emission. The contributions from these two foregrounds dominate in low frequency radio observations, and their strength is ∼ 4-5 orders of magnitude larger than the ∼ 20-30 mK cosmological 21-cm signal <cit.>. Galactic and extragalactic free-free diffuse emissions are not included in the model, though each of these is individually larger than the HI signal. §.§ Radio Point Sources Most of the earlier numerical simulation exercises have not included the bright point source foreground component in the multi-frequency model. In such analyses, it is generally assumed that the brightest point sources are perfectly subtracted from the data before the main analysis, and the simulated data contain only faint point sources and other diffuse foreground components, the HI signal and noise. We, however, simulate the point source distribution for the sky model using the following differential source counts obtained from the GMRT 150 MHz observation <cit.>: dN/dS = 10^3.75/ Jy.Sr (S/ Jy)^-1.6 . The full width at half maximum (FWHM) of the GMRT primary beam (PB) at 150 MHz is ≈3.1^∘. To understand and quantify how the bright point sources outside the FWHM of the PB affect our results, we consider here a larger region (7^∘× 7^∘) for the point source simulation. Initially, 2215 simulated point sources, with flux density in the range 9 mJy to 1 Jy following the above-mentioned source counts, are randomly distributed over this larger region. Out of those sources, 353 are within 95^' of the phase centre (where the PB response falls by a factor of e). We note that the antenna response falls sharply beyond this radius. For example, the primary beam response is ≲ 0.01 in the first sidelobe. Hence, outside this "inner" region, only sources with flux density greater than 100 mJy are retained for the next step of the simulation. In the outer region, any source fainter than this will be below the threshold of point source subtraction due to primary beam attenuation. With 343 sources from the "outer" region, we finally include a total of 696 sources in our simulation. Figure <ref> shows the angular positions of all 2215 sources over this region, as well as of the 696 sources after the flux density restriction. Note that we have assumed that all the sources are unresolved at the angular resolution of our simulation. In reality, there will also be extended sources in the field. Some of the extended sources can be modelled reasonably well as a collection of multiple unresolved sources. However, other complex structures will need more careful modelling or masking, and are not included in this simulation for simplicity.The flux density of point sources changes across the frequency band of observation.
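Before turning to that frequency scaling, the sampling step just described can be sketched as follows; this is an illustration with names and a random seed of our own choosing, not the actual simulation code. The flux densities follow dN/dS ∝ S^-1.6 via inverse-transform sampling, and the inner/outer flux cut reproduces the selection described above.

import numpy as np

rng = np.random.default_rng(seed=42)

def sample_fluxes(n, s_min=9e-3, s_max=1.0, gamma=1.6):
    """Inverse-CDF sampling of a power-law dN/dS ∝ S**-gamma (gamma != 1)."""
    u = rng.uniform(size=n)
    a = 1.0 - gamma
    return (s_min**a + u * (s_max**a - s_min**a))**(1.0 / a)

n_src = 2215
flux = sample_fluxes(n_src)                    # Jy, at 150 MHz
x, y = rng.uniform(-3.5, 3.5, (2, n_src))      # degrees from the phase centre
alpha = rng.uniform(0.7, 0.8, n_src)           # spectral indices (next step)

# Keep all sources within 95 arcmin of the centre; outside that radius,
# keep only sources brighter than 100 mJy (primary-beam attenuation cut).
r = np.hypot(x, y)
keep = (r <= 95.0 / 60.0) | (flux > 0.1)
x, y, flux, alpha = x[keep], y[keep], flux[keep], alpha[keep]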
We scale the flux density of the sources at different frequencies using the following relation, S_ν=S_ν_0(ν/ν_0)^-α_ ps where ν_0=150 MHz is the central frequency of the band, ν changes across the bandwidth of 16 MHz, and α_ ps is the spectral index of the point sources. The point sources are allocated spectral indices drawn uniformly at random from the range 0.7 to 0.8 <cit.>. Please note that the subsequent point source modeling and subtraction are carried out in such a way that the final outcomes do not depend on the exact distribution function of the spectral index. §.§ Diffuse Synchrotron Emission In this section, we first describe the simulation of the diffuse Galactic synchrotron emission which is used to generate the visibilities. The angular slope β of the angular power spectrum of the diffuse Galactic synchrotron emission is within the range 1.5 to 3 as found by all the previous measurements in the frequency range 0.15-94 GHz (e.g.,). For the purpose of this paper, we assume that the fluctuations in the diffuse Galactic synchrotron radiation come from a statistically homogeneous and isotropic Gaussian random field whose statistical properties are completely specified by the angular power spectrum. We construct our sky model of the diffuse Galactic synchrotron emission using the measured angular power spectrum at 150 MHz <cit.> C^M_ℓ(ν)=A_ 150×(1000/ℓ)^β×(ν/150 MHz)^-2α_ syn , where ν is the frequency in MHz, A_ 150=513 mK^2 and β=2.34 adopted from <cit.>, and α_ syn=2.8 from <cit.>. The diffuse emission is generated on a 1024 × 1024 grid with angular grid size ∼ 0.5^', covering a region of 8.7^∘× 8.7^∘. This axis dimension is ≈ 2.8 times larger than the FWHM of the GMRT primary beam.To simulate the diffuse emission, we mainly followed the same procedure as discussed in <cit.>. We first create the Fourier components of the temperature fluctuations on a grid using ΔT̃(,̆ν_0)=√(ΩC^M_ℓ(ν_0)/2)[x()̆+iy()̆], where Ω is the total solid angle of the simulated area, and x()̆ and y()̆ are independent Gaussian random variables with zero mean and unit variance. Then, we use the Fastest Fourier Transform in the West (hereafter FFTW) algorithm <cit.> to convert ΔT̃(,̆ν_0) to the brightness temperature fluctuations δ T(þ,ν_0) or, equivalently, the intensity fluctuations δ I(þ,ν_0) on the grid. The intensity fluctuations δ I(þ,ν) = (2 k_B/λ^2) δ T(þ,ν) can be calculated using the Rayleigh-Jeans approximation, which is valid at the frequencies of our interest.Finally, we generate the specific intensity fluctuations at any other frequency within the observing band from that at the reference frequency using the scaling relationδ I(þ,ν)= (2 k_B/λ^2) δ T(þ,ν_0)(ν/ν_0)^-α_ syn . In general, the spectral index α_ syn of the diffuse emission may have a spatial variation and the synchrotron power spectrum may be different at different frequencies. However, the effect of this on point source subtraction is expected to be negligible, and the final results do not depend on the constancy of the synchrotron power spectrum slope. Here, we assume that the value of α_ syn is fixed over the whole region and across the observation band in the multi-frequency simulation.§.§ GMRT Primary beam We model the PB of the GMRT assuming that the telescope has a uniformly illuminated circular aperture of 45 m diameter (D), whereby the primary beam pattern is given by, A(þ, ν) =[ (2 λ/πθ D) J_1(πθ D/λ) ]^2 where J_1 is the Bessel function of the first kind of order one. The primary beam pattern is normalized to unity at the pointing center [ A(0)=1].
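As a rough illustration — with grid conventions and an FFT normalization that are our own assumptions rather than the exact recipe of the simulation code — one realization of the diffuse map and the model beam above can be generated as follows.

import numpy as np
from scipy.special import j1

N, dtheta = 1024, np.radians(0.5 / 60.0)   # grid size, ~0.5 arcmin pixels
omega = (N * dtheta)**2                    # total solid angle (sr)

def cl_model(ell, a150=513.0, beta=2.34):
    """Model angular power spectrum at 150 MHz, in mK^2."""
    return a150 * (1000.0 / np.maximum(ell, 1.0))**beta

# Fourier modes: l = 2*pi*U, with U conjugate to theta
u = np.fft.fftfreq(N, d=dtheta)
ux, uy = np.meshgrid(u, u, indexing="ij")
ell = 2.0 * np.pi * np.hypot(ux, uy)

# Delta T~(U) = sqrt(Omega * C_l / 2) * (x + i y), with x, y ~ N(0, 1)
rng = np.random.default_rng(7)
amp = np.sqrt(omega * cl_model(ell) / 2.0)
dT_f = amp * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
dT_f[0, 0] = 0.0                           # remove the mean (DC) mode
dT_map = np.real(np.fft.ifft2(dT_f)) * N * N / omega   # mK (assumed norm)

def primary_beam(theta, freq_hz=150e6, d=45.0):
    """Model GMRT beam [2*lam/(pi*theta*d) * J1(pi*theta*d/lam)]^2, A(0)=1."""
    lam = 3e8 / freq_hz
    x = np.pi * theta * d / lam
    x = np.where(x == 0.0, 1e-12, x)       # avoid division by zero at centre
    return (2.0 * j1(x) / x)**2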
The central part of the model PB (eq. <ref>) is a reasonably good approximation to the actual PB of the GMRT antenna, whereas it may deviate in the outer region. In our analysis, we taper the outer region through a window function, hence the results are not significantly affected by the use of this approximate model PB. Figure <ref> shows one realization of the intensity fluctuation δ I(þ,ν_0) map at the central frequency ν_0=150 MHz with and without multiplication by the GMRT primary beam. The PB only affects the estimated angular power spectrum at large angular scales (≲ 45 λ), which is shown in Figure 3 of <cit.>. Using a large number of realizations of the diffuse emission map, we find that the recovered angular power spectrum is in good agreement with the input model power spectrum (eq. <ref>) at the scales of our interest (ℓ∼300-2×10^4).§.§ Simulated GMRT Observation The simulations are generated keeping realistic GMRT specifications in mind, though these parameters are quite general, and similar mock data for any other telescope can be generated easily. The GMRT has 30 antennas. The diameter of each antenna is 45 m. The projected shortest baseline at the GMRT can be 60 m, and the longest baseline is 26 km. The instantaneous bandwidth is 16 MHz, divided into 128 channels, centered at 150 MHz. We consider all antennas pointed to an arbitrary field located at R.A.=10 h 46 m 00 s Dec=59^∘ 00^' 59^” for a total of 8 hr of observation. The visibility integration time was chosen as 16 s. The mock observation produces 783000 samples per channel over the whole uv range. Figure <ref> shows the full uv coverage at the central frequency for the simulated GMRT observation. The angular power spectrum of the diffuse synchrotron emission (eq. <ref>) declines with increasing baseline U=||̆ (where ℓ = 2 π U), and drops significantly at the longest available baseline. Hence, for our simulation, the contributions of the diffuse emission have been computed only for baselines U ≤ 3,000 λ to reduce the computation time.To calculate the visibilities, we multiply the simulated intensity fluctuations δ I(þ,ν) with the PB (eq. <ref>), and we use a 2-D FFTW of the product on a grid. For each sampled baseline U ≤ 3,000 λ, we interpolate the gridded visibilities to the nearest baseline of the uv track in Figure <ref>. We notice that the w-term does not have a significant impact on the estimated angular power spectrum of the diffuse synchrotron emission <cit.>. But, to make the image properly and also to reduce the sidelobes of the point spread function (the synthesized beam), it is necessary to retain the w-term information. The w-term also improves the dynamic range of the image and enhances the precision of point source subtraction. We use the full baseline range to calculate the contribution from the point sources. The sky model for the point sources is multiplied with the PB A(þ, ν) before calculating the visibilities. Using the small field of view approximation, the visibilities for the point sources are computed at each baseline by incorporating the w-term: V(,̆ν) ≈∫ d^2 θ A(þ, ν)δ I(þ,ν) e^- 2 π i(uθ_x+vθ_y+w(√(1-θ_x^2-θ_y^2)-1)). The system noise of the interferometer is considered to be independent at different baselines and channels, and is modelled as a Gaussian random variable. We add independent Gaussian random noise to both the real and imaginary parts of each visibility.
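The point-source visibility computation of the equation above, together with the additive Gaussian noise (whose rms is derived next), can be sketched as below; this direct evaluation, with assumed array conventions and names, is illustrative rather than the actual simulation pipeline.

import numpy as np

def point_source_vis(u, v, w, lx, ly, flux_app):
    """V(u,v,w) = sum_k S_k,app * exp(-2*pi*i*(u lx + v ly + w (n - 1))).

    u, v, w: baseline coordinates in wavelengths; lx, ly: direction cosines
    of the sources (≈ angular offsets for small fields); flux_app:
    primary-beam-attenuated, frequency-scaled flux densities (Jy).
    """
    n_minus_1 = np.sqrt(1.0 - lx**2 - ly**2) - 1.0
    phase = -2.0j * np.pi * (u * lx + v * ly + w * n_minus_1)
    return np.sum(flux_app * np.exp(phase))

def add_noise(vis, sigma=1.03, rng=None):
    """Add independent Gaussian noise of rms sigma (Jy) to Re and Im parts."""
    if rng is None:
        rng = np.random.default_rng()
    return vis + rng.normal(0.0, sigma, np.shape(vis)) \
               + 1j * rng.normal(0.0, sigma, np.shape(vis))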
For a single polarization, the theoretical rms noise in the real or imaginary part of a measured visibility is σ = √(2) k_B T_sys / ( A_eff √(ΔνΔ t) ) where T_sys is the total system temperature, k_B is the Boltzmann constant, A_eff is the effective collecting area of each antenna, Δν is the channel width and Δ t is the correlator integration time <cit.>. For Δν=125 kHz and Δ t = 16 sec, the rms noise comes out to be σ_n=1.03 Jy per single-polarization visibility for the GMRT. The two polarizations are assumed to have identical sky signals but independent noise contributions.In summary, the simulated visibilities for the GMRT observation are the sum of two independent components, namely the sky signal and the system noise. As outlined above, the realistic sky signal includes the extragalactic point sources and the Galactic diffuse synchrotron emission. The visibility data do not contain any calibration errors, ionospheric effects or radio-frequency interference (RFI); a detailed investigation of these effects is left for future work. § DATA ANALYSIS Our next goal is to analyze the simulated data described above to recover the statistical properties of the diffuse emission, and compare those with the known input model parameters. As mentioned earlier, to estimate the power spectrum of the diffuse emission, our approach is to first remove the point source foreground accurately. This requires imaging and deconvolution to model the point sources, and then subtracting them from the data. In reality, there are many issues which make an accurate subtraction of point sources from radio interferometric wide-field synthesis images challenging. These include residual gain calibration errors <cit.>, direction dependence of the calibration due to instrumental or ionospheric/atmospheric conditions <cit.>, the effect of the spectral indices of the sources <cit.>, frequency dependence and asymmetry of the primary beam response, the varying point spread function (synthesized beam) of the telescope <cit.>, the high computational expense of imaging a large field of view, and CLEANing a large number of point sources <cit.>, etc.Note that these issues are more prominent at low radio frequencies due to a comparatively large field of view as well as a large number of strong point sources and bright Galactic synchrotron emission. Hence, foregrounds are one of the major problems, particularly in the context of EoR and post-EoR cosmological HI studies with the current and future telescopes (e.g. GMRT[Giant Metrewave Radio Telescope; http://www.gmrt.ncra.tifr.res.in], LOFAR[Low Frequency Array; http://www.lofar.org], MWA[Murchison Wide-field Array; http://www.mwatelescope.org], PAPER[Precision Array to Probe the Epoch of Reionization; http://astro.berkeley.edu/dbacker/eor], PaST[Primeval Structure Telescope; http://web.phys.cmu.edu/ past], HERA[Hydrogen Epoch of Reionization Array; http://reionization.org/], and SKA[Square Kilometer Array; http://www.skatelescope.org]).Earlier, <cit.> have studied the effect of calibration errors on bright point source subtraction. They have concluded that, to detect the EoR signal, sources brighter than 1 Jy should be subtracted with a positional accuracy better than 0.1 arcsec if calibration errors remain correlated for a minimum time of ∼ 6 hours of observation. On the other hand, <cit.> and <cit.> have reported that point sources should be subtracted down to a 10-100 mJy threshold in order to detect the 21cm signal from the EoR.
It has also been recently demonstrated, using both simulated and observed data from the MWA, that foregrounds (particularly the point sources) must be considered as a wide-field contaminant to measure the 21cm power spectrum <cit.>. The polarized Galactic synchrotron emission is expected to be Faraday-rotated along the path, and it may acquire additional spectral structure through polarization leakage at the telescope. This is a potential complication for detecting the HI signal <cit.>. To cope with the capabilities of current and forthcoming radio telescopes, there has recently been significant progress in developing calibration, imaging and deconvolution algorithms <cit.> which can now handle some of the above-mentioned complications.Setting aside calibration errors, the problem of subtracting point sources ultimately reduces to a problem of deconvolution of point sources, in the presence of diffuse (foreground and/or cosmological HI signal) emission, to fit their positions, flux densities and spectral properties as accurately as the instrumental noise permits. The optimum strategy for modeling and subtracting point sources in the presence of diffuse emission is an open question in the general context of interferometric radio frequency data analysis. In this paper, we take up a systematic analysis of the 150 MHz simulated data to quantify the effect of incomplete spectral modeling and of different deconvolution strategies to model and subtract point sources for recovering the diffuse emission power spectrum. In particular, we demonstrate the advantage of the power spectrum estimator that we have used (TGE), which allows us to avoid wide-field imaging in order to subtract the point sources from the outer part of the field of view. As a result, it also takes care of, at least to a large extent, issues like asymmetry of the primary beam, direction dependence of the calibration in the outer region of the field of view and the high computational expense of imaging and removing point sources from a large field of view, etc. Below we describe the details of the imaging and point source subtraction used to produce the residual visibility data for power spectrum estimation. §.§ Imaging and Power spectrum Estimation For our analysis, we use the Common Astronomy Software Applications (CASA)[http://casa.nrao.edu/] to produce sky images from the simulated visibility data. To make a CLEAN intensity image, we use the Cotton-Schwab CLEANing algorithm <cit.> with Briggs weighting and robust parameter 0.5, and with different CLEANing thresholds and CLEANing boxes around point sources. The CLEANing is also done with or without multi-frequency synthesis (MFS;). If MFS is used during deconvolution, it takes into account the spectral variation of the point sources using Taylor series coefficients as spectral basis functions. In a recent paper, <cit.> suggest that CASA's MS-MFS algorithm can be used for better spectral modelling of the point sources. The large field of view (θ_FWHM= 3.1^∘) of the GMRT at 150 MHz leads to a significant amount of error if the non-planar nature of the GMRT antenna distribution is not taken into account. For this purpose we use the w-projection algorithm <cit.> implemented in the CLEAN task within CASA. For different CLEANing strategies, we assess the impact of point source removal in recovering the input angular power spectrum C_ℓ of the diffuse Galactic synchrotron emission from the residual uv data. Effectively, by CLEANing with these different options, we identify the optimum approach to produce the best model for point source subtraction and C_ℓ estimation.
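For concreteness, a CASA (4.x-era) call corresponding to one such imaging run might look like the following sketch; the file names, image size, cell size, iteration cap and w-projection plane count are placeholders of ours, while the weighting, robust, threshold and nterms values follow the options quoted in the text.

# Illustrative CASA (4.x-style) imaging and point-source subtraction;
# file names and several parameter values are placeholders.
clean(vis='sim_150MHz.ms',            # simulated measurement set
      imagename='run_b',
      mode='mfs', nterms=2,           # MFS with a spectral (Taylor-term) model
      gridmode='widefield',
      wprojplanes=128,                # w-projection for the non-coplanar array
      weighting='briggs', robust=0.5,
      imsize=[4096, 4096], cell='5arcsec',
      niter=100000, threshold='1.0mJy')

# Subtract the CLEAN model from the corrected data column
uvsub(vis='sim_150MHz.ms')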
We investigate the CLEANing effects in the image domain by directly inspecting the "residual images" after the point source subtraction, and also in the Fourier domain by comparing the power spectrum of the residual data with the input power spectrum of the simulated diffuse emission. For a discussion of some of the relevant methods and an outline of the power spectrum estimation, please see <cit.> and references therein.The left panel of Figure <ref> shows the CLEANed image of the simulated sky of the target field with angular size 4.2^∘× 4.2^∘. The synthesized beam has a FWHM ∼ 20^”. The image contains both point sources and diffuse synchrotron emission, and the grey scale flux density range in Figure <ref> is saturated at 3 mJy to clearly show the diffuse emission. The inner part (≈1.0^∘× 1.0^∘) of the CLEANed image has rms noise ≈ 0.3 mJy/Beam, which drops to ≈ 0.15 mJy/Beam in the outer part due to the GMRT primary beam attenuation. In the right panel of Figure <ref>, we also show a small portion (marked as a square box in the left panel) of the image with an angular size of 42^'×42^'. We note that there is a strong point source at the centre of this small image with a flux density of 676 mJy/Beam and a spectral index of 0.77. The intensity fluctuations of the diffuse emission are also clearly visible in both panels of Figure <ref>.Figure <ref> shows the angular power spectrum C_ℓ estimated from the simulated visibilities before any point source subtraction. We find that the estimated power spectrum, as expected, is almost flat across all angular scales. This is the Poisson contribution from the randomly distributed point sources, which dominates C_ℓ at all angular multipoles ℓ in our simulation. In this paper, we do not include the clustering component of the point sources, which becomes dominant only at large angular scales (ℓ≤900) <cit.> where it introduces a power-law ℓ dependence in the angular power spectrum. We also note that the convolution with the primary beam affects the estimated angular power spectrum at small ℓ values (Figure 3,), and it will be difficult to individually distinguish the Poisson and clustered components of the point sources with the GMRT. The total simulated power spectrum C_ℓ (Figure <ref>) is consistent with the previous GMRT 150 MHz observations (). In Figure <ref> we also show the input model angular power spectrum of the diffuse emission along with the 1-σ error bars (shaded region) estimated from 100 realizations of the diffuse emission map. Note that the angular power spectrum of the diffuse emission is buried deep under the point source contribution, which dominates at all angular scales accessible to the GMRT. We emphasise that, in this paper, our aim is to recover this diffuse power spectrum from the residual visibility data after point source subtraction. §.§ Point Source Subtraction As shown in Figure <ref>, the 150 MHz radio sky is dominated by point sources at angular scales ≤ 4^∘ <cit.>. Therefore, it is very crucial to identify all point sources precisely from the image, and remove their contribution from the visibility data in order to estimate the power spectrum of the background diffuse emission. However, it is quite difficult to model and subtract out the point sources from the sidelobes and the outer parts of the main lobe of the primary beam. Our recent paper <cit.> contains a detailed discussion of the real-life problems in modelling and subtracting point sources from these regions.
In this paper we have restricted the point source subtraction to the central region of the primary beam. To estimate the angular power spectrum C_ℓ from the visibilities, we have used the TGE, which tapers the sky response to suppress the effect of the point sources outside the FWHM of the primary beam. This is achieved by convolving the visibilities with a window function. Note that the TGE is also an unbiased estimator of the angular power spectrum C_ℓ; it calculates and subtracts the noise bias self-consistently <cit.>. Below we discuss the point source modeling and the effect of different CLEANing strategies on the "residual" images created from the point-source-subtracted visibility data.We use the standard CASA tasks CLEAN and UVSUB for deconvolution and removal of point sources from the visibility data, respectively. CLEAN identifies pixels with flux density over the specified threshold, does the deconvolution and creates the corresponding model visibilities, while UVSUB produces the residual visibilities by subtracting the model. This should remove the point source contribution from the data to a large extent. We further use the residual visibilities after point source subtraction to make residual "dirty" images (without deconvolution) of size 4.2^∘×4.2^∘. This is done using various CLEANing thresholds (0.5, 1.0 and 2.0 mJy, where 1 mJy ≈ 3σ_im), CLEAN box, and spectral modelling options for comparison. For the CLEAN box, we tried CLEANing the whole image down to the threshold, or using circular boxes of radius 50^” around all point sources. As expected, the former is more computationally expensive and also removes some positive and negative peaks of the diffuse signal. On the other hand, the latter requires a pre-existing deep point source catalogue with accurate positions of the sources. Note that while such low frequency catalogues for EoR experiments may be available from deep continuum surveys in the near future, at present this is not a realistic strategy. We also used a hybrid method by first CLEANing the whole image down to a conservative flux density threshold, and then placing rectangular CLEAN boxes of size 1.6^'× 1.6^' around all residual point sources identified visually. These selected regions are then CLEANed down to a deeper flux density limit. The effect of spectral modelling is checked by changing the parameter "nterms" in the CASA task CLEAN, where nterms=1 does not include any spectral correction, while nterms=2 builds the point source model by including the spectral index during multi-frequency CLEANing <cit.>.Table <ref> lists the parameters for the set of CLEANing and point source subtraction runs we used for comparison. Figure <ref> shows a representative region of angular size 42^'×42^' from the dirty images of the residual data, to illustrate the effect of different CLEANing schemes. The different residual images (Image(a) to Image(f)) in Figure <ref> correspond to the different CLEANing strategies in Table <ref> (Run(a) to Run(f)). The residual images are mostly dominated by the diffuse emission. As expected, correct spectral modelling of the point sources significantly improves the residual image, as shown clearly in the top row of Figure <ref> (left and right panels for nterms=1 and 2, respectively). Also, CLEANing the whole image to a deeper flux density threshold removes part of the diffuse structure. A shallow threshold, on the other hand, retains the diffuse emission but also a significant residual point source contribution (see Figure <ref>, middle row).
Finally, deep CLEANing (∼ 1.5 σ_im) in combination with carefully selected CLEANing regions results in the optimum residual images shown in the bottom row of Figure <ref>. In the next section, we assess the impact of the different CLEANing strategies on statistics such as the distribution of visibilities and the angular power spectrum estimated from these different residual data sets. § RESULTS We use the different CLEANing options mentioned above for point source subtraction from a 4.2^∘× 4.2^∘ region of the sky in the simulated visibility data. To compare the outcomes of these strategies, we check the statistics of the residual visibilities as well as of the residual dirty images. In Figure <ref> we show the normalized histograms from the images (top row) and from the visibility data (bottom row). The top-left panel of Figure <ref> shows the distribution of the pixel values from the initial CLEANed map (Figure <ref>), dominated by the diffuse emission (pixels with ≤ 5.0 mJy) and only a small number of pixels with high flux density (due to the bright point sources). The top middle and right panels show the histograms of the residual images from different CLEANing runs. A Gaussian with σ=0.228 mJy is a fairly good fit to the distribution of the residuals up to a flux density limit of ± 0.5 mJy. However, as evident from the top central panel, "blind" CLEANing with a lower threshold (see Table <ref>) makes the residual images more non-Gaussian. On the other hand, for deep CLEANing using different CLEANing box options, there is no difference in the distribution of the residual images. The corresponding visibility distribution functions are shown in the bottom row of Figure <ref>. We use the real part of the complex visibilities for these plots, but the imaginary parts also have a similar distribution. We find that both the initial and residual visibilities have a Gaussian distribution, but with different standard deviations (σ=1.61 Jy before point source subtraction and 0.76 Jy up to | Re(V)| <3 Jy for the residual visibilities). The counts significantly deviate from a Gaussian distribution at large visibility values, most likely due to incomplete CLEANing.Next we use the residual visibilities from the different runs to estimate the angular power spectrum C_ℓ using the TGE. Here, we have used logarithmic intervals of ℓ after averaging over all the frequency channels. We have also used a Gaussian window function to taper the sky response. The tapering is introduced through a parameter f, where f is preferably ≤ 1 so that the modified window function cuts off the sky response well before the first null of the primary beam (see Figure 1 of, for details). The reduced field of view results in a larger cosmic variance for the angular modes which are within the tapered field of view. So, the tapering parameter f will possibly be determined by a trade-off between the reduced field of view and the cosmic variance. In this work we use f=0.8. Figure <ref> shows the estimated C_ℓ from the residual visibilities for Run(a) and Run(b), that is, for a fixed CLEANing threshold of 1.0 mJy but nterms=1 and 2, respectively. CLEANing with nterms=2 reduces the residual sidelobes around bright sources after point source subtraction (see Figure <ref>a,b).
Hence, as shown in Figure <ref>, the estimated C_ℓ recovers the input power spectrum better even at large ℓ (≥ 6× 10^3), clearly demonstrating the need for correct spectral modelling of the point sources.The left panel of Figure <ref> shows the angular power spectra C_ℓ estimated using the residual visibility data obtained from Run(b), (c) and (d), for nterms=2 but different CLEANing thresholds. Run(b) with a ∼ 3 σ_im CLEANing threshold recovers C_ℓ over the entire range of ℓ, but Run(d) with shallow CLEANing retains some extra residual power at large ℓ (≥ 7× 10^3). The estimated C_ℓ from Run(c), on the other hand, falls off by a factor of ∼ 5 compared to the input model due to blind deep CLEANing that removes part of the underlying diffuse signal. The effect of using different CLEANing box options in recovering C_ℓ is shown in the right panel of Figure <ref>. Here we keep the other two parameters fixed at nterms=2 and a threshold of 0.5 mJy. It is clear from this figure that there is no significant change in the estimated power spectra for the two different CLEANing box strategies used in Run(e) and (f). In both cases, the estimated C_ℓ agrees well with the input power spectrum over the full range of ℓ probed here. § SUMMARY AND CONCLUSIONS Precise subtraction of point sources from wide-field interferometric data is one of the primary challenges in studying the diffuse foreground emission as well as the weak redshifted HI 21-cm signal. In this paper, we demonstrate a method of studying and characterizing the Galactic synchrotron emission using a simulated 150 MHz GMRT observation in the presence of point sources. The angular power spectrum C_ℓ of the diffuse emission is estimated from the residual visibility data using the TGE after subtracting the point sources from only the inner part of the field of view. The estimated C_ℓ due to faint point sources is much lower than that of the diffuse synchrotron emission. We assess the impact of imperfect point source removal for different CLEANing strategies in recovering the input C_ℓ of the diffuse emission over the angular scale range probed by the GMRT.The simulations are carried out for a GMRT 150 MHz observation with a sky model consisting of point sources and diffuse synchrotron emission. The sky model is multiplied by the model PB A(þ, ν) before computing the visibilities for the frequency and uv coverage of the simulated GMRT observation. We use various CLEANing strategies with different CLEANing boxes, threshold fluxes and spectral correction options to make images and to subtract the point source model from the simulated visibilities. The residual data were then used for estimating C_ℓ of the diffuse component. We check the effect of point source subtraction by comparing image histograms, visibility distribution functions as well as C_ℓ from the residual data. We find that all the different CLEANing strategies introduce some degree of non-Gaussianity in the residual data, both in the image and in the visibility domain. The less precise the point source subtraction, the more non-Gaussianity is generated in the distribution of image pixels beyond the CLEANing threshold. Equivalently, the visibility distributions also deviate significantly from a Gaussian. Comparing the recovered and input power spectra, we find that both shallow CLEANing and incorrect spectral modelling of the point sources result in excess power at large angular multipoles. On the other hand, very deep "blind" CLEANing removes part of the diffuse structure and reduces the amplitude of the power spectrum at all angular scales.
Carefully choosing CLEAN boxes for deep CLEANing (with threshold ∼ 1.5σ_im) and correct spectral modelling of the point sources demonstrate that the TGE can recover the input power spectrum of the diffuse emission properly. Note that this analysis also demonstrates that the effect of the point sources from the outer region of the field is insignificant due to the tapering. Hence, while using the TGE for power spectrum estimation, many of the complications discussed earlier related to low frequency wide-field imaging become irrelevant. Finally, the accurate removal of all the point sources from a wide-field image is a complicated and difficult task in the presence of instrumental systematics, calibration errors, RFI and ionospheric effects, etc. Using simulated data, we have established here the effectiveness of the TGE in estimating the angular power spectrum C_ℓ of diffuse emission at the angular scales probed by the GMRT. This gives us the confidence to apply it to real data in order to study the Galactic synchrotron power spectrum <cit.>. With the broad goal of applying it in future to EoR and post-EoR HI studies, we plan to next incorporate some of the above-mentioned "real world" issues in this simulation, and also to extend this study to the SKA. § ACKNOWLEDGEMENTS SC would like to acknowledge the University Grants Commission (UGC), India for providing financial support through a Senior Research Fellowship. SSA would like to acknowledge CTS, IIT Kharagpur for the use of its facilities. SSA would also like to thank the authorities of IUCAA, Pune, India for providing the Visiting Associateship programme. AG would like to acknowledge a Postdoctoral Fellowship from the South African Square Kilometre Array Project for financial support. PD would like to acknowledge the DST-INSPIRE faculty fellowship by the Department of Science and Technology, India for providing financial support.
An Efficient Finite Difference Scheme for the 2D Sine-Gordon Equation

Xiaorong Kang, School of Science, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China ([email protected])
Wenqiang Feng, Department of Mathematics, The University of Tennessee, Knoxville, TN 37996 ([email protected])
Kelong Cheng, School of Science, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China ([email protected])
Chunxiang Guo, School of Business, Sichuan University, Chengdu, Sichuan 610064, China ([email protected])

December 30, 2023

We present an efficient second-order finite difference scheme for solving the 2D sine-Gordon equation, which inherits the discrete energy conservation of the undamped model at the theoretical level. Due to the semi-implicit treatment of the nonlinear term, the scheme leads to a sequence of nonlinear coupled equations. We use a linear iteration algorithm, which solves them efficiently, and the contraction mapping property is also proven. Based on the truncation errors of the numerical scheme, the convergence analysis in the discrete l^2-norm is investigated in detail. Moreover, we carry out various numerical simulations, such as verifications of the second order accuracy, tests of energy conservation and circular ring solitons, to demonstrate the efficiency and the robustness of the proposed scheme.

Keywords: 2D sine-Gordon equation, conservative, difference scheme, linear iteration, convergence.

§ INTRODUCTION

In this paper, we consider the following 2D sine-Gordon equation, u_tt+β u_t-αΔ u=-ϕ(x,y)sin u+F(x,y,t), (x,y)∈Ω,t≥ 0, with initial conditions u(x,y,0)=φ_1(x,y), u_t(x,y,0)=φ_2(x,y), (x,y)∈Ω, and the boundary condition u|_∂Ω=G(t), t≥ 0, where Ω=[0,L]^2.
This equation has attracted much attention due to the presence of soliton solutions and has a great deal of applications in the propagation of fluxons in Josephson junctions between two superconductors <cit.>, the motion of a rigid pendulum attached to a stretched wire <cit.>, dislocations in crystals and the stability of fluid motions. Nowadays, it has become one of the paradigms of nonlinear dynamical systems for describing many different physical phenomena <cit.>. In (<ref>), ϕ(x, y) is a nonnegative function with finite bound ϕ_0 and may be interpreted as the Josephson current density, while φ_1(x, y) and φ_2(x, y) represent wave modes or kinks and velocity, respectively. In particular, when β=0, (<ref>) reduces to the undamped sine-Gordon equation, u_tt-αΔ u=-ϕ(x,y)sin u+F(x,y,t). If F=0 and G is periodic or homogeneous, one of the main properties of the undamped sine-Gordon equation (<ref>) is the conservation of the energy defined as follows <cit.>, E(t) = 1/2∫_Ω [|u_t|^2+|∇ u|^2+2ϕ (1-cos u) ]dxdy = 1/2 (u_t^2_L^2(Ω)+∇ u^2_L^2(Ω) )+∫_Ωϕ (1-cos u)dxdy, which does not hold for the damped system (<ref>).

Recently, various analytical and numerical methods have been proposed for the numerical solution of partial differential equations, for example, integral transform <cit.> and traveling-wave techniques <cit.>. Analytical solutions to the unperturbed sine-Gordon equation with zero damping have been obtained by Lamb's method <cit.> and Bäcklund transformations. Many efforts have been made to develop numerical methods, such as the finite difference method <cit.>, the time-splitting pseudospectral and spectral method <cit.>, the finite element method <cit.>, the mesh-free reproducing kernel particle Ritz method <cit.>, the local weak meshless method <cit.>, the boundary element method <cit.>, the differential quadrature method <cit.> and the radial basis functions method <cit.> for the 2D sine-Gordon equation. However, there exist few error estimate results in the above-mentioned works for the 2D case. The main reason is that the techniques used for the 1D case cannot be extended trivially to high dimensions because of the difficulty in obtaining the a priori uniform estimate of the numerical solution.

Since the undamped sine-Gordon equation is a conservative system, it should be pointed out that a conservative numerical scheme performs better than a nonconservative one. The key is that it can preserve some invariant properties of the differential equation and capture physical processes in more detail <cit.>. Moreover, there has been growing interest in conservative numerical methods for solving partial differential equations, for example, for the Klein-Gordon equation <cit.>, high frequency wave phenomena <cit.>, the phase field crystal model <cit.>, and so forth. As for the sine-Gordon equation, to our knowledge, only a few results have considered this vital property. Although many verification results of the discrete energy for various numerical methods have been reported <cit.>, the analysis at a theoretical level has hardly been shown.

The main purpose of this paper is to present a second-order semi-implicit finite difference scheme for numerical solutions of the 2D sine-Gordon equation (<ref>)-(<ref>). There are three main features to this work. The first is that the proposed scheme admits the discrete energy conservation for the undamped case (<ref>) at a theoretical level, which has not yet been reported in the existing literature.
The second feature of this work is the linear iteration algorithm introduced in <cit.>, which efficiently solves the nonlinear system arising at each time step from the implicit treatment of the nonlinear term. Meanwhile, a careful analysis shows a contraction mapping property of this iteration under the given constraint on the time step. Finally, we provide a detailed convergence analysis for the second-order scheme in the l^2-norm.

The remainder of the paper is organized as follows. In Section <ref>, the second-order finite difference scheme is proposed and the energy conservation property for the undamped system is proven. The linear iteration algorithm and the corresponding theoretical analysis of the contraction mapping are given in Section <ref>. Truncation errors and the convergence analysis are discussed in Section <ref>. Some numerical simulation results are given to demonstrate the efficiency of the linear iteration solver and the convergence of the scheme in Section <ref>. Finally, some conclusions are drawn in Section <ref>.

§ NUMERICAL SCHEME AND ENERGY CONSERVATION

§.§ Second order finite difference scheme

Let v=u_t. Then (<ref>) can be rewritten as v_t+β u_t -αΔ u=-ϕ(x,y)sin u+F(x,y,t), v=u_t. For the given 2D domain Ω, define the uniform numerical grid (x_i,y_j) with Δ x=Δ y= h for simplicity of presentation. Let M_x=M_y=M and M· h=L such that x_i=ih,i=0,1,⋯,M, y_j=jh,j=0,1,⋯,M. For a fixed time T, let Δ t be the step size in the temporal direction, t^n=nΔ t, n=0,1,2,⋯,N, N=[T/Δ t], u_i,j^n≈ u(x_i,y_j, t^n). Denote by Δ_h=D_xx+D_yy the standard second order difference operator with D_x u=u_i+1,j-u_i,j/h, D_y u=u_i,j+1-u_i,j/h, D_xx u=u_i+1,j-2u_i,j+u_i-1,j/h^2, D_yy u=u_i,j+1-2u_i,j+u_i,j-1/h^2.

The second order finite difference scheme is presented at a point-wise level as follows, v^n+1-v^n/Δ t+βu^n+1-u^n/Δ t-α/2Δ_h (u^n+1+u^n)=ϕcos( u^n+1)-cos (u^n)/u^n+1-u^n+F^n+1/2, u^n+1-u^n/Δ t=v^n+1+v^n/2, with discrete initial conditions u^0_i,j=φ_1(x_i,y_j), v^0_i,j=φ_2(x_i,y_j), and the boundary condition u^n_i,j|_∂Ω=G(x_i,y_j,t^n), (x_i,y_j)∈∂Ω, where F^n+1/2=F(x_i,y_j,t^n+1/2).

Obviously, (<ref>) can be reformulated as v^n+1=2(u^n+1-u^n)/Δ t-v^n. Substituting (<ref>) into (<ref>) yields that 2u^n+1/Δ t^2+β/Δ tu^n+1-α/2Δ_h u^n+1=ϕcos( u^n+1) -cos (u^n)/u^n+1-u^n+β/Δ tu^n+α/2Δ_h u^n+κ (u^n,v^n)+F^n+1/2, where κ (u^n,v^n)=2u^n/Δ t^2+2v^n/Δ t. Equation (<ref>) is nonlinear and can be solved implicitly by a linear iteration algorithm introduced in the next section. Once u^n+1 is solved, v^n+1 can be computed explicitly by (<ref>). The main idea of dealing with the sine nonlinearity was first introduced by W. A. Strauss and L. Vázquez <cit.> to compute numerical solutions of a nonlinear Klein-Gordon equation in which a polynomial nonlinear term is involved. In fact, this subtle technique achieves excellent numerical solutions for conservative models and has been extensively studied for some nonlinear problems, such as the Cahn-Hilliard type equation <cit.>.

§.§ Discrete energy conservation for the undamped equation

As mentioned above, the undamped sine-Gordon equation with certain boundary conditions admits the property of energy conservation. Here, as a special case of the numerical scheme (<ref>) and (<ref>), the difference scheme for the undamped equation is conservative for the discrete energy. Letting F(x,y,t)=0 and G(t)=0, from (<ref>), we have u_tt-αΔ u=-ϕ(x,y)sin u, which satisfies the conservation law (<ref>).
Accordingly, the corresponding numerical finite difference scheme simplifies to v^n+1-v^n/Δ t-α/2Δ_h (u^n+1+u^n)=ϕcos( u^n+1)-cos (u^n)/u^n+1-u^n, u^n+1-u^n/Δ t=v^n+1+v^n/2.

Next, we introduce the l^2-norm and the l^2 inner product. For any two homogeneous (or periodic) grid functions f and g, define the discrete l^2 inner product and the discrete l^2-norm, respectively, as ⟨ f,g ⟩ =h^2∑_i,j=0^M f_i,jg_i,j, f_2=√(⟨ f,f ⟩), and the following summation by parts is also straightforward, ⟨Δ_h f,g ⟩ =-⟨∇_h f,∇_h g⟩, with ∇_h f^2_2=D_x f^2_2+D_y f^2_2, D_x f^2_2=h^2∑_i,j=0^M (f_i+1,j-f_i,j)^2/h^2, D_y f^2_2 =h^2∑_i,j=0^M (f_i,j+1-f_i,j)^2/h^2.

The scheme (<ref>)-(<ref>) is conservative for the discrete energy, namely, E^n=1/2v^n^2_2+α/2∇_h u^n^2_2+h^2∑_i,j=0^M (ϕ (1-cos(u^n)))_ij= E^0, for n=1,2,⋯,N.

The main purpose of taking this discrete energy form is to be in accordance with its continuous definition (<ref>). Alternatively, if we delete the constant in the term ϕ(1-cos(u^n)), the new discrete energy is also conservative. In <cit.>, the authors simulated the discrete energy, which appears to be conservative. However, this might be due to the high accuracy of the proposed pseudospectral method; as we know, an explicit numerical scheme cannot ensure the conservation in general.

Proof. Taking the inner product of (<ref>) with u^n+1-u^n yields ⟨v^n+1-v^n/Δ t,u^n+1-u^n ⟩ -α/2⟨Δ_h (u^n+1+u^n),u^n+1-u^n ⟩ - ⟨ϕcos( u^n+1)-cos (u^n)/u^n+1-u^n, u^n+1-u^n⟩ =0. For the first term, we have ⟨v^n+1-v^n/Δ t,u^n+1-u^n ⟩ = ⟨v^n+1-v^n,u^n+1-u^n/Δ t⟩ =1/2⟨v^n+1-v^n,v^n+1+v^n⟩= 1/2(v^n+1_2^2-v^n_2^2), where (<ref>) is applied in the second step. According to (<ref>), the second term can be analyzed as -α/2⟨Δ_h (u^n+1+u^n),u^n+1-u^n ⟩=α/2(∇_h u^n+1^2_2-∇_h u^n^2_2). Moreover, for the nonlinear term, we obtain the following result, - ⟨ϕcos( u^n+1)-cos (u^n)/u^n+1-u^n, u^n+1-u^n⟩ =h^2∑_i,j=0^M ϕ_i,j((1-cos u^n+1)-(1-cos u^n))_i,j. By the definition of E^n, (<ref>) is obtained from (<ref>)-(<ref>).

§ LINEAR ITERATION ALGORITHM

Since the nonlinear term cos(u^n+1) is treated implicitly in (<ref>), it leads to a sequence of nonlinear coupled equations. In order to solve the nonlinear system (<ref>) arising from the implicit treatment, we propose the following linear iteration algorithm: 2u^n+1,(m+1)/Δ t^2+β/Δ tu^n+1,(m+1)-α/2Δ_h u^n+1,(m+1)=ϕcos( u^n+1,(m))-cos (u^n)/u^n+1,(m)-u^n+β/Δ tu^n+α/2Δ_h u^n+κ (u^n,v^n)+F^n+1/2, where u^n+1,(m) denotes the approximate solution at the m-th iteration.

The linear iteration scheme (<ref>) is a contraction mapping, provided that Δ t < β+√(β^2+8ϕ_0)/2ϕ_0.
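Before proving the theorem, we record how the iteration (<ref>) can be realized in practice. The following Python fragment is a minimal sketch under our own simplifying assumptions (homogeneous Dirichlet boundary data, ϕ and F^n+1/2 supplied as arrays on the interior grid, and an ad hoc stopping tolerance); it is an illustration, not part of the analysis. Since the coefficient matrix is the same for every iteration and every time step, it is factorized once, and the time step should respect the bound Δ t < (β+√(β^2+8ϕ_0))/(2ϕ_0) established below.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_2d(m, h):
    """5-point Laplacian on an m-by-m interior grid, homogeneous Dirichlet."""
    d1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2
    eye = sp.identity(m)
    return sp.kron(eye, d1) + sp.kron(d1, eye)

def dcos(w, u):
    """(cos w - cos u)/(w - u), with the limiting value -sin(u) where w = u."""
    diff = w - u
    safe = np.where(diff == 0.0, 1.0, diff)
    return np.where(np.abs(diff) > 1e-12,
                    (np.cos(w) - np.cos(u)) / safe, -np.sin(u))

def sg_step(u, v, phi, F_half, dt, h, alpha, beta, m, tol=1e-11, maxit=100):
    """One step of the semi-implicit scheme, solved by the linear iteration."""
    L = laplacian_2d(m, h)
    A = (2.0/dt**2 + beta/dt) * sp.identity(m*m) - 0.5*alpha*L
    solve = spla.factorized(A.tocsc())          # factorize once, reuse
    rhs0 = beta*u/dt + 0.5*alpha*(L @ u) + 2.0*u/dt**2 + 2.0*v/dt + F_half
    w = u.copy()                                # initial guess u^{n+1,(0)} = u^n
    for _ in range(maxit):
        w_new = solve(phi*dcos(w, u) + rhs0)
        if np.max(np.abs(w_new - w)) < tol:     # contraction => rapid decay
            w = w_new
            break
        w = w_new
    return w, 2.0*(w - u)/dt - v                # (u^{n+1}, v^{n+1}) via (2.x)
```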
Proof. Define the iteration error of each stage via e^(m)=u^n+1,(m)-u^n+1, where u^n+1,(m) is the m-th iteration result generated by the linear iteration scheme (<ref>). Subtracting (<ref>) from (<ref>) leads to (2/Δ t^2+β/Δ t-α/2Δ_h)e^(m+1)=ϕ(cos( u^n+1,(m))-cos (u^n)/u^n+1,(m)-u^n-cos( u^n+1)-cos (u^n)/u^n+1-u^n ). Taking the inner product of (<ref>) with e^(m+1), we have ⟨ (2/Δ t^2+β/Δ t-α/2Δ_h)e^(m+1),e^(m+1)⟩=(2/Δ t^2+β/Δ t)e^(m+1)^2_2+α/2∇_h e^(m+1)^2_2 =⟨ϕ(cos( u^n+1,(m))-cos (u^n)/u^n+1,(m)-u^n-cos( u^n+1) -cos (u^n)/u^n+1-u^n ),e^(m+1)⟩. Now, we analyze the right-hand side of (<ref>) in detail. For convenience, let h(x)=cos x-cos a/x-a. Using the Lagrange theorem, we obtain h(x)=-sinξ, where ξ is between x and a. We also compute the derivative of h(x), h'(x)=-(x-a)sin x-(cos x-cos a)/(x-a)^2=-sin x-(cos x-cos a)/x-a/x-a=-sin x+sinξ/x-a. Applying the Lagrange theorem again to (-sin x+sinξ) yields |h'(x)|=|cosξ_1|·|x-ξ|/|x-a|<1, where ξ_1 is between x and ξ, and the fact that |x-ξ|< |x-a| is used. Going back to (<ref>) and setting a=u^n, we have |cos( u^n+1,(m))-cos (u^n)/u^n+1,(m)-u^n-cos( u^n+1)-cos (u^n)/u^n+1-u^n|=|h(u^n+1,(m))-h(u^n+1)|=|h'(ξ_2)|· |u^n+1,(m)-u^n+1|<|e^(m)|, with ξ_2 between u^n+1,(m) and u^n+1. In turn, one can get ⟨ϕ(cos( u^n+1,(m))-cos (u^n)/u^n+1,(m)-u^n-cos( u^n+1)-cos (u^n)/u^n+1-u^n ),e^(m+1)⟩ ≤ϕ_0 |⟨ e^(m), e^(m+1)⟩| ≤ϕ_0/2(e^(m+1)^2_2+e^(m)^2_2), in which ϕ_0 is the upper bound of ϕ. As a result, it follows from the combination of (<ref>) and (<ref>) that (2/Δ t^2+β/Δ t-ϕ_0/2)e^(m+1)^2_2+α/2∇_h e^(m+1)^2_2 ≤ϕ_0/2e^(m)^2_2. Therefore, the contraction mapping property is assured if (2/Δ t^2+β/Δ t-ϕ_0/2) > ϕ_0/2, which shows that the result is proven.

For a high dimensional problem, either the ADI scheme or the predictor-corrector scheme is often used to implement an implicit finite difference scheme. Both schemes are two-step or multi-step methods that reduce the dimensional complexity; for the 2D sine-Gordon equation, see <cit.>. Without any decomposition of the 2D sine-Gordon equation, the implicit scheme can be solved efficiently by the linear iteration. Certainly, this iteration method is also applied in many numerical methods, but the contraction condition for the iterations is seldom investigated. On the other hand, we can also derive higher-order finite difference schemes if more complicated operators are involved, and solve them by the same iteration method.

§ TRUNCATION ERRORS AND THE CONVERGENCE ANALYSIS

§.§ Truncation errors

Let u_e and v_e=∂_t u_e be exact solutions of the problem (<ref>)-(<ref>); then the truncation errors of the scheme (<ref>) are obtained at the discrete grid points as follows, v_e^k+1-v_e^k/Δ t+β/Δ t(u_e^k+1-u_e^k)-α/2Δ_h (u_e^k+1+u_e^k)-ϕcos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-F^k+1/2=ρ^k, u_e^k+1-u_e^k/Δ t-v_e^k+1+v_e^k/2=s^k.

In fact, for a given function f(x) ∈ C^5, one can get f(ξ)-f(η)/ξ -η=f'(ξ +η/2)+1/24f'''(ξ +η/2)(ξ-η)^2+O((ξ-η)^4). Taking f(x)=cos(x), ξ=u_e^k+1 and η=u_e^k, we have cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k=-sin(u_e^k+1+u_e^k/2)+O(Δ t^2)=-sin(u_e^k+1/2)+O(Δ t^2). Similarly, the other terms can be analyzed and the details are omitted. Hence, we obtain the following lemma.

Suppose u_e and v_e are smooth enough; then |ρ^k_ij|+|s^k_ij|=O(Δ t^2+h^2) holds as Δ t, h→ 0.

§.§ Convergence analysis

Define discrete error functions as follows, u^k=u_e^k-u^k, v^k=v_e^k-v^k. Now, we present the following convergence result.

Assume that u_e and v_e=∂_t u_e are the exact solutions of the problem (<ref>)-(<ref>), and denote by (u,v) the numerical solution given by the finite difference scheme (<ref>) and (<ref>). Then, we have v^n_2+ u^n_2+α∇_hu^n_2 ≤ C · O(Δ t^2+ h^2), where the constant C depends on the final time T and is independent of Δ t and h.

Proof.
Subtracting (<ref>)-(<ref>) from (<ref>)-(<ref>), respectively, we get v^k+1- v^k/Δ t+β/Δ t( u^k+1- u^k)-α/2Δ_h ( u^k+1+ u^k)-ϕ (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k )=ρ^k, u^k+1- u^k/Δ t- v^k+1+ v^k/2=s^k. Taking the inner product of (<ref>) with u^k+1- u^k yields 1/Δ t⟨ v^k+1- v^k, u^k+1- u^k⟩+β/Δ t⟨ u^k+1- u^k, u^k+1- u^k ⟩ -α/2⟨Δ_h ( u^k+1+ u^k),u^k+1- u^k⟩ =⟨ϕ (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k ), u^k+1- u^k⟩ +⟨ρ^k,u^k+1- u^k⟩.

Next, we analyze the nonlinear term on the right-hand side of (<ref>). Noting that | (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k )| ≤ | (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u_e^k)/u^k+1-u_e^k )| + | (cos( u^k+1)-cos (u_e^k)/u^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k )| and recalling the definition (<ref>) of h(x), we have the following results: | (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u_e^k)/u^k+1-u_e^k )| ≤ |h(u_e^k+1)-h(u^k+1)|=|h'(η_1)|· |u_e^k+1-u^k+1|≤ | u^k+1|, in which we choose a=u_e^k, and | (cos( u^k+1)-cos (u_e^k)/u^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k )| ≤ |h(u_e^k)-h(u^k)|=|h'(η_2)|· |u_e^k-u^k|≤ | u^k|, in which a=u^k+1, respectively. Substituting (<ref>) and (<ref>) into (<ref>) gives | (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k )| ≤ | u^k+1|+| u^k|. Also, it follows from (<ref>) that u^k+1- u^k=Δ t/2( v^k+1+ v^k)+Δ t s^k. Therefore, we can obtain the estimate as follows, ⟨ϕ (cos( u_e^k+1)-cos (u_e^k)/u_e^k+1-u_e^k-cos( u^k+1)-cos (u^k)/u^k+1-u^k ), u^k+1- u^k⟩ ≤ϕ_0 ⟨ | u^k+1|+ | u^k|, | u^k+1- u^k|⟩ ≤ϕ_0/2Δ t ⟨ | u^k+1|+ | u^k|, | v^k+1|+| v^k|⟩ +ϕ_0 Δ t ⟨ | u^k+1|+ | u^k|, |s^k|⟩ ≤ϕ_0/4Δ t( (| u^k+1|+ | u^k|)_2^2+(| v^k+1|+ | v^k|)_2^2)+ϕ_0/2Δ t (| u^k+1|+ | u^k|)_2^2 +ϕ_0 Δ t s^k_2^2≤ϕ_0/2Δ t(u^k+1_2^2+u^k_2^2+ v^k+1_2^2+v^k_2^2)+ϕ_0 Δ t(u^k+1_2^2+u^k_2^2) +ϕ_0 Δ t s^k_2^2≤3ϕ_0/2Δ t(u^k+1_2^2+u^k_2^2 )+ϕ_0/2Δ t( v^k+1_2^2+v^k_2^2) +ϕ_0 Δ t s^k_2^2, where the inequality (a+b)^2 ≤ 2(a^2+b^2) is used.

For the first term in (<ref>), using (<ref>), we can arrive at 1/Δ t⟨ v^k+1- v^k, u^k+1- u^k⟩=1/2⟨ v^k+1- v^k, v^k+1+ v^k ⟩ +Δ t ⟨ v^k+1- v^k,s^k ⟩ =1/2 ( v^k+1_2^2-v^k_2^2 )+ Δ t ⟨ v^k+1- v^k,s^k ⟩ ≥1/2 ( v^k+1_2^2-v^k_2^2 )- 1/2Δ t( v^k+1- v^k_2^2+s^k_2^2)≥1/2 ( v^k+1_2^2-v^k_2^2 )-Δ t( v^k+1^2_2+ v^k_2^2 )- 1/2Δ t s^k_2^2. And it follows from the second term in (<ref>) that β/Δ t⟨ u^k+1- u^k, u^k+1- u^k ⟩=β/Δ t u^k+1- u^k _2^2.

As for the diffusion term, the following result is also straightforward from (<ref>), -⟨Δ_h ( u^k+1+ u^k), u^k+1- u^k⟩ = ⟨∇_h ( u^k+1+ u^k),∇_h ( u^k+1- u^k)⟩ = ∇_hu^k+1_2^2-∇_hu^k_2^2. In turn, considering the local truncation error term in (<ref>), we have ⟨ρ^k, u^k+1- u^k⟩ = Δ t ⟨ρ^k, u^k+1- u^k/Δ t⟩ =1/2Δ t ⟨ρ^k, v^k+1+ v^k⟩ + Δ t ⟨ρ^k,s^k ⟩≤ 1/4Δ t (ρ^k_2^2+ v^k+1+ v^k_2^2)+1/2Δ t (ρ^k_2^2+s^k_2^2) ≤ 3/4Δ t ρ^k_2^2 +1/2Δ t s^k_2^2+1/2Δ t ( v^k+1_2^2+ v^k_2^2). Consequently, it follows from (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) that 1/2 ( v^k+1_2^2-v^k_2^2 )+α/2 ( ∇_hu^k+1_2^2-∇_hu^k_2^2)+β/Δ t u^k+1- u^k _2^2 ≤3/2ϕ_0 Δ t( u^k+1_2^2+ u^k_2^2)+(3/2+ϕ_0/2)Δ t( v^k+1_2^2+ v^k_2^2)+3/4Δ t ρ^k_2^2+(ϕ_0 +1)Δ ts^k_2^2.

Unfortunately, we cannot yet obtain the estimate for u^k and v^k by the discrete Gronwall inequality, since the term u^k does not appear on the left-hand side of (<ref>). To remedy this, we need to analyze the estimate for u^k.
Going back to (<ref>), it can be rewritten as u^k- u^k-1=1/2Δ t ( v^k+ v^k-1)+ Δ t s^k-1. Summing up (<ref>) from 1 to k, we get u^k= u^0+1/2Δ t ∑_l=1^k( v^l+ v^l-1)+ Δ t ∑_l=1^k-1 s^l =1/2Δ t ∑_l=1^k( v^l+ v^l-1)+ Δ t ∑_l=1^k-1 s^l, where we apply the fact that u^0=0. In turn, an application of the Cauchy inequality implies that | u^k|^2≤2 [1/4Δ t^2 (∑_l=1^k( v^l+ v^l-1))^2+ Δ t^2 (∑_l=1^k-1 s^l)^2]≤ 1/2Δ t^2 ∑_l=1^k 2(| v^l|^2+| v^l-1|^2)+ Δ t^2(∑_l=1^k-1 O(Δ t^2))^2≤2k Δ t^2∑_l=1^k | v^l|^2 + k^2 Δ t^2( O(Δ t^2))^2 ≤2T Δ t ∑_l=1^k | v^l|^2 + T^2 O(Δ t^4), in which kΔ t≤ T is used in the last step. This shows | u^k|^2 ≤ 2T Δ t ∑_l=1^k | v^l|^2 + C· O(Δ t^4), | u^k+1|^2 ≤ 2T Δ t ∑_l=1^k+1 | v^l|^2 + C · O(Δ t^4).

Substituting the result above into (<ref>) leads to 1/2 ( v^k+1_2^2-v^k_2^2 )+α/2 ( ∇_hu^k+1_2^2-∇_hu^k_2^2)+β/Δ t u^k+1- u^k _2^2≤ 6ϕ_0 T Δ t^2∑_l=0^k+1 v^l_2^2+(3/2+ϕ_0/2)Δ t( v^k+1_2^2 + v^k_2^2)+CΔ t · O(Δ t^4+h^4). Summing over k from 0 to n-1, and using u^0=0, v^0=0, we have 1/2 v^n_2^2+ α/2∇_hu^n_2^2 ≤ 6ϕ_0 T Δ t^2∑_k=0^n-1∑_l=0^k+1 v^l_2^2+(3+ϕ_0)Δ t ∑_k=0^n-1 v^k_2^2 +CT · O(Δ t^4+h^4). Then, according to ∑_l=0^k+1 v^l_2^2 ≤∑_l=0^n v^l_2^2 for all 0≤ k+1 ≤ n, the inequality above can be rewritten as 1/2 v^n_2^2+ α/2∇_hu^n_2^2 ≤ (6ϕ_0 T^2+3+ϕ_0) Δ t∑_k=0^n v^k_2^2+CT · O(Δ t^4+h^4). Let E^n= v^n_2^2+α∇_hu^n_2^2. Then we get E^n ≤ (12ϕ_0 T^2+6+2ϕ_0) Δ t∑_l=0^n-1 E^l+CT · O(Δ t^4+h^4).

By the discrete Gronwall inequality, we derive that v^n_2^2+α∇_hu^n_2^2≤ C · O(Δ t^4+h^4), that is, v^n_2+ α∇_hu^n_2≤ C · O(Δ t^2+h^2), where C is independent of Δ t and h. Moreover, from (<ref>), we have | u^n|^2 ≤ 2T Δ t ∑_l=1^k | v^l|^2 + C· O(Δ t^4)≤ C · O(Δ t^4+h^4). Finally, the combination of (<ref>) and (<ref>) gives v^n_2 + u^n_2 + α∇_hu^n_2≤ C · O(Δ t^2+h^2), which shows that the unconditional convergence in the sense of the l^2-norm is obtained. This completes the proof of Theorem <ref>.

For the one-dimensional sine-Gordon equation, by the discrete version of the Sobolev embedding inequality, the ·_∞ estimate can be obtained from the conservative property and the l^2-norm. For the two-dimensional case, however, the ·_∞ error bounds of the numerical scheme (<ref>) and (<ref>) are slightly more complicated owing to the difficulty in obtaining the a priori uniform estimate of the numerical solution. They can still be obtained if we perform a higher-order consistency analysis by a careful Taylor expansion. The details are skipped for simplicity of presentation; an analogous technique can be seen in <cit.>.

§ NUMERICAL SIMULATIONS

We now perform a couple of numerical experiments that support the theoretical results and error estimates for the scheme given by (<ref>)-(<ref>).

§.§ Verification of the second order accuracy

In the first test, we consider the equation (<ref>) in the domain Ω=[-1/2,1/2]× [-1/2,1/2] with β=0, ϕ(x, y) = 1, α=1/2π^2 and F(x,y,t)=sin(cos(π x)cos(π y)cos(t)). The exact solution of (<ref>) is given by u_e(x,y,t)=cos(π x)cos(π y)cos(t), v_e(x,y,t)=-cos(π x)cos(π y)sin(t). The initial conditions and the boundary condition can be obtained from the exact solution. Fig. <ref> and Fig. <ref> show the profile of the exact solution u at t=0 on Ω, and the numerical solutions u^n at time T =0.8, 1.5, 3 with Δ t=0.1 and h=0.025, respectively. The errors in the sense of the l^2-norm of the numerical solutions for different mesh steps h and Δ t at time t =1, 2, 3, 4 and 5 can be found in Table <ref>, and the corresponding numerical orders of convergence are listed in Table <ref>. Clearly, this verifies the second order accuracy in Theorem <ref>.
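The convergence orders in Table <ref> follow from successive errors via the standard ratio formula; a small sketch (the error values below are placeholders for illustration, not the actual entries of Table <ref>):

```python
import numpy as np

def convergence_orders(hs, errors):
    """Observed order p from e(h) ~ C h^p on successive grids:
    p = ln(e_i / e_{i+1}) / ln(h_i / h_{i+1})."""
    hs, errors = np.asarray(hs, float), np.asarray(errors, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(hs[:-1] / hs[1:])

# hypothetical l^2 errors on grids h = 0.1, 0.05, 0.025 (dt refined with h)
print(convergence_orders([0.1, 0.05, 0.025], [3.2e-3, 8.1e-4, 2.0e-4]))
# ~[1.98, 2.02], consistent with second-order accuracy
```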
§.§ Energy conservation for the undamped equation

In the second test, we consider the homogeneous boundary condition u|_∂Ω=0 for the equation (<ref>) on Ω=[0,1]×[0,1] with ϕ=1, φ_1(x,y)=sin(2π x)sin(2π y), φ_2(x,y)=0. We take T=1, and the discrete energy E^n at different times for Δ t=0.002, h=0.05 and Δ t=0.001, h=0.025 can be found in Table <ref>. Obviously, the values of E^n remain nearly constant as time increases.

§.§ Circular ring soliton

The behavior of circular ring quasi-solitons arising from the sine-Gordon equation is named pulsons because of their pulsating behavior. In this test, we consider the equation (<ref>) on Ω= [-4, 4] × [-4, 4] with ϕ(x, y) = 1. The initial conditions are given by φ_1(x,y)=2arctan(exp(3-5√(x^2+y^2))), φ_2(x,y)=0, and the boundary condition is periodic. Similar to <cit.>, in order to study the evolution of the ring solitons, we plot both the surfaces and the corresponding contours in terms of sin(u/2) with h=0.1 and Δ t=0.1. As seen from Fig. <ref>, Fig. <ref>, Fig. <ref> and Fig. <ref>, the ring soliton shrinks in the initial stage (t = 0), but as time goes on, oscillations and radiations begin to form and continue to form up to t = 4. At t = 6, the graph shows that a ring soliton is nearly formed again. These graphs are consistent with earlier work on this topic in <cit.>. Furthermore, with the implicit treatment and the linear iteration algorithm, it becomes possible to simulate the long-time behavior of such an equation. In Fig. <ref>, the profile of the numerical solution u with h=0.1 and Δ t=0.1 at T=50 is presented.

§ CONCLUSION

In this paper, we discussed a second-order semi-implicit finite difference scheme for the 2D sine-Gordon equation, which admits the discrete energy conservation for the undamped problem. We also proposed an efficient linear iteration algorithm for approximating the nonlinear system arising from the implicit treatment of the nonlinear term. Moreover, the iteration algorithm was proven to be a contraction mapping. In turn, based on the truncation errors, the convergence analysis of the numerical scheme was also shown. Furthermore, the results of numerical experiments demonstrated the efficiency and the accuracy of the proposed scheme.

§ ACKNOWLEDGMENT

The authors are very grateful to the reviewers for carefully reading this paper and for their comments. We appreciate the support provided for this paper by the Science and Technology Department of Sichuan Province in China (No. 2017GZ0316), the funds of the Sichuan Center for Education Development Research of the Education Department (No. CJF15014), the National Natural Science Funds of China (No. 71471123) and the Fundamental Research Funds for the Central Universities of China (No. skqy201621).
References

[Arg1991] J. Argyris, M. Haase, J. C. Heinrich, Finite element approximation to two dimensional sine-Gordon solitons, Comput. Methods Appl. Mech. Eng., 86(1) (1991), 1-26.
[Asg2013] Z. Asgari, S. M. Hosseini, Numerical solution of two-dimensional sine-Gordon and MBE models using Fourier spectral and high order explicit time stepping methods, Comput. Phys. Commun., 184(3) (2013), 565-572.
[Bratsos2006] A. G. Bratsos, A modified predictor-corrector scheme for the two-dimensional sine-Gordon equation, Numer. Algorithms, 43(4) (2006), 295-308.
[Bratsos2007] A. G. Bratsos, The solution of the two-dimensional sine-Gordon equation using the method of lines, J. Comput. Appl. Math., 206(1) (2007), 251-277.
[Chang1991] Q. Chang, G. Wang, B. Guo, Conservative scheme for a model of nonlinear dispersive waves and its solitary waves induced by boundary motion, J. Comput. Phys., 93(2) (1991), 360-375.
[Chen2016] W. Chen, W. Feng, C. Wang, S. Wise, A second order energy stable scheme for the Cahn-Hilliard-Hele-Shaw equations, https://arxiv.org/pdf/1611.02967v1.pdf, (2016).
[Cheng2015] K. Cheng, W. Feng, S. Gottlieb, C. Wang, A Fourier pseudospectral method for the "good" Boussinesq equation with second-order temporal accuracy, Numer. Meth. Partial Diff. Eq., 31(1) (2015), 202-224.
[Cheng2012] R. J. Cheng, K. M. Liew, Analyzing two-dimensional sine-Gordon equation with the mesh-free reproducing kernel particle Ritz method, Comput. Methods Appl. Mech. Engrg., 245 (2012), 132-143.
[Cheng2016] K. Cheng, C. Wang, S. M. Wise, X. Yue, A second-order, weakly energy-stable pseudo-spectral scheme for the Cahn-Hilliard equation and its solution by the homogeneous linear iteration method, J. Sci. Comput., 69(3) (2016), 1083-1114.
[Cui2010] M. R. Cui, High order compact alternating direction implicit method for the generalized sine-Gordon equation, J. Comput. Appl. Math., 235(3) (2010), 837-849.
[Deh2010] M. Dehghan, A. Ghesmati, Numerical simulation of two-dimensional sine-Gordon solitons via a local weak meshless technique based on the radial point interpolation method (RPIM), Comput. Phys. Commun., 181(4) (2010), 772-786.
[Deh2008-1] M. Dehghan, D. Mirzaei, The dual reciprocity boundary element method (DRBEM) for two-dimensional sine-Gordon equation, Comput. Methods Appl. Mech. Engrg., 197(6) (2008), 476-486.
[Deh2008-2] M. Dehghan, A. A. Shokri, Numerical method for solution of the two-dimensional sine-Gordon equation using the radial basis functions, Math. Comput. Simulat., 79(3) (2008), 700-715.
[Hu2014] J. Hu, K. Zheng, M. Zheng, Numerical simulation and convergence analysis of a high-order conservative difference scheme for SRLW equation, Appl. Math. Model., 38(23) (2014), 5573-5581.
[Jiw2012] R. Jiwari, S. Pandit, R. C. Mittal, Numerical simulation of two-dimensional sine-Gordon solitons by differential quadrature method, Comput. Phys. Commun., 183(3) (2012), 600-616.
[Li1995] S. Li, L. Vu-Quoc, Finite difference calculus invariant structure of a class of algorithms for the nonlinear Klein-Gordon equation, SIAM J. Numer. Anal., 32(6) (1995), 1839-1875.
[Liang2017] X. Liang, F. Gao, Y.-N. Gao, X.-J. Yang, Applications of a novel integral transform to partial differential equations, J. Nonlinear Sci. Appl., 10 (2017), 528-534.
[Perring1962] J. K. Perring, T. H. Skyrme, A model unified field equation, Nucl. Phys., 31 (1962), 550-555.
[Stra1978] W. A. Strauss, L. Vázquez, Numerical solution of a nonlinear Klein-Gordon equation, J. Comput. Phys., 28 (1978), 271-278.
[Wang2015] L. Wang, W. Chen, C. Wang, An energy-conserving second order numerical scheme for nonlinear hyperbolic equation with an exponential nonlinear term, J. Comput. Appl. Math., 280 (2015), 347-366.
[Wang2011] C. Wang, S. M. Wise, An energy stable and convergent finite-difference scheme for the modified phase field crystal equation, SIAM J. Numer. Anal., 49(3) (2011), 945-969.
[Whitham1999] G. B. Whitham, Linear and Nonlinear Waves, Wiley-Interscience, New York, NY, (1999).
[Xin2000] J. X. Xin, Modeling light bullets with the two-dimensional sine-Gordon equation, Physica D, 135(3) (2000), 345-368.
[Yang20161] X. J. Yang, A new integral transform method for solving steady heat-transfer problem, Therm. Sci., 20(3) (2016), S639-S642.
[Yang20162] X. J. Yang, A new integral transform with an application in heat-transfer problem, Therm. Sci., 20(3) (2016), S677-S681.
[Yang2017] X. J. Yang, A new integral transform operator for solving the heat-diffusion problem, Appl. Math. Lett., 64 (2017), 193-197.
[Yang20172] X. J. Yang, F. Gao, H. M. Srivastava, Exact travelling wave equations for the local fractional two-dimensional Burgers-type equations, Comput. Math. Appl., 73 (2017), 203-210.
[Yang20163] X. J. Yang, J. A. Tenreiro Machado, D. Baleanu, et al., On exact traveling-wave solutions for local fractional Korteweg-de Vries equation, Chaos, 26(8) (2016), 110-118.
[Zag1979] J. Zagrodzinsky, Particular solutions of the sine-Gordon equation in 2+1 dimensions, Phys. Lett., 72(4) (1979), 284-286.
[Zheng2013] K. Zheng, J. Hu, High-order conservative Crank-Nicolson scheme for regularized long wave equation, Adv. Differ. Equ., 2013 (2013), 11 pages.
2010 Mathematics Subject Classification: Primary 37A35, 37B10, 92B20.

[Jung-Chao Ban] Department of Applied Mathematics, National Dong Hwa University, Hualien 97401, Taiwan, ROC. [email protected]
[Chih-Hung Chang] Chih-Hung Chang*. *Author to whom any correspondence should be addressed.
[Chih-Hung Chang and Nai-Zhu Huang] Department of Applied Mathematics, National University of Kaohsiung, Kaohsiung 81148, Taiwan, ROC. [email protected]; [email protected]

Since it has been demonstrated that excitable media with a tree structure perform better than other network topologies, it is natural to consider neural networks defined on Cayley trees. The investigation of a symbolic space called tree-shift of finite type is important when it comes to the discussion of the equilibrium solutions of neural networks on Cayley trees. Entropy is a frequently used invariant for measuring the complexity of a system, and constant entropy for an open set of coupling weights between neurons means that the specific network is stable. This paper gives a complete characterization of the entropy spectrum of neural networks on Cayley trees and reveals whether the entropy bifurcates when the coupling weights change.

Entropy bifurcation of neural networks on Cayley trees
Jung-Chao Ban, Chih-Hung Chang, and Nai-Zhu Huang
January 15, 2018

§ INTRODUCTION

The human brain has recently been revealed as a system exhibiting traces of criticality; the corresponding spatiotemporal patterns are fractal-like. Gollo et al. <cit.> infer that criticality may arise from balanced dynamics within individual neurons. Neural networks have been developed to mimic brain behavior for the past few decades; they are widely applied in many disciplines such as signal propagation between neurons, deep learning, image processing, and information technology <cit.>. Chernihovskyi et al. <cit.> implement cellular neural networks for simulating nonlinear excitable media and develop a relevant device to predict epileptic seizures.

The overwhelming majority of neural network models adopts an n-dimensional lattice as the network's topology. Gollo et al. <cit.> propose a neural network with a tree structure; excitable media with a tree structure performed better than other network topologies since they attain a larger dynamic range (cf. <cit.>). It is of interest to ask the following problem.

Problem 1. How to measure the complexity of a tree structure neural network?

Alternatively, it is important to know how much information the neural network can store. On the other hand, it is of interest to know whether a neural network “avalanches”, which means such a network is sensitive. More precisely, some small modification of the parameters could lead to tremendously different dynamics, such as an exponential decay of the storage of information.

One of the most frequently studied neural networks is the Hopfield neural network consisting of locally coupled neurons, in which the behavior of each neuron is represented by a differential equation. Beyond being essential for understanding the dynamics of differential equations, the investigation of equilibrium solutions is related to elucidating the long-term memory of the brain. Whenever there are only finitely many equilibrium solutions, the investigation of equilibrium solutions is equivalent to studying shift spaces in symbolic dynamical systems.

A one-dimensional shift space is a set consisting of right-infinite or bi-infinite words which avoid words in a so-called forbidden set ℱ and is denoted by 𝖷_ℱ. A shift space 𝖷_ℱ is called a shift of finite type (SFT) if ℱ is a finite set. A significant invariant of shift spaces is the topological entropy, which reflects how much information a network can store. While there is an explicit formula for the entropy of one-dimensional SFTs, there is no algorithm for the computation of the topological entropy of multidimensional SFTs so far (cf. <cit.>).
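To make the one-dimensional formula concrete: for an SFT presented by a 0-1 transition matrix A, the entropy is the logarithm of the Perron eigenvalue of A. A minimal sketch in Python (the golden-mean shift below, which forbids the word 11, is our own illustrative choice):

```python
import numpy as np

def sft_entropy(A):
    """Entropy of a one-dimensional SFT with 0-1 transition matrix A:
    h = ln(spectral radius of A)."""
    return float(np.log(max(abs(np.linalg.eigvals(np.asarray(A, dtype=float))))))

# golden-mean shift: alphabet {0, 1}, the word 11 forbidden
print(sft_entropy([[1, 1], [1, 0]]))   # ln((1 + sqrt(5))/2) ~ 0.4812
```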
Aubrun and Béal <cit.> introduce the notion of tree-shifts, which are shift spaces defined on Cayley trees, and then study the classification theory up to conjugacy, languages, and the application to automaton theory. It is noteworthy that such tree-shifts constitute an intermediate class in between one-sided and multidimensional shifts. Ban and Chang <cit.> propose an algorithm for computing the entropy of a tree-shift of finite type (TSFT). The computation of the rigorous value of entropy is tricky due to the double exponential growth rate of the patterns of a TSFT (see Section 2 for more details).

For the case where TSFTs come from the equilibrium solutions of neural networks (on Cayley trees), the forbidden sets are constrained by the so-called separation property; this makes the entropy spectrum discrete (Theorem <ref>). Elucidating the phenomenon of “neural avalanches” is related to the study of entropy bifurcation or entropy minimality problems. It is known that an irreducible ℤ^1 SFT is entropy minimal; that is, any proper subshift Y ⊂ X has smaller entropy than that of an irreducible SFT X. For r ≥ 2, every ℤ^r SFT having the mixing property called the uniform filling property is entropy minimal, while there is a non-trivial block gluing ℤ^r SFT which is not entropy minimal. Readers are referred to <cit.> for more details. Proposition <ref> gives an explicit formula for the coupling weights between neurons which make CTNNs entropy minimal, and the entropy bifurcation diagram is revealed (Figure <ref>). A remarkably novel phenomenon is that the entropy of a CTNN with the nearest neighborhood is either 0 or ln d, where d is the number of children of each node.

The structure of this paper is as follows. Section 2 introduces the notion of tree-shifts and the algorithm for the computation of entropy of TSFTs. Section 3, aside from demonstrating how the investigation of the equilibrium solutions of CTNNs relates to the discussion of TSFTs, studies the learning problem of CTNNs; the necessary and sufficient condition on the forbidden sets of TSFTs corresponding to CTNNs is revealed. After demonstrating the discreteness of the entropy spectrum of CTNNs, the entropy minimality problem is affirmatively solved in Section 3. Conclusion and discussion are given in Section 4.

§ SYMBOLIC DYNAMICS ON CAYLEY TREES

This section recalls some definitions and results of symbolic dynamics on Cayley trees. A novel phenomenon about the entropy spectrum of tree-shifts of finite type is demonstrated herein.

§.§ Definitions and Notations

A Cayley tree, roughly speaking, is a graph without cycles. Two kinds of Cayley trees are mostly discussed: rooted Cayley trees and bi-rooted Cayley trees. A rooted d-ary Cayley tree (Figure <ref>) can be seen as a directed graph such that the outdegree of each vertex is d, while a bi-rooted d-ary tree (also known as a Bethe lattice; see <cit.> for more details) is an undirected graph such that the degree of each vertex is d+1. In this paper, we focus on the rooted Cayley tree for clarity; the discussion extends to the Bethe lattice.
In the rest of this elaboration, we refer to a rooted Cayley tree simply as a Cayley tree unless otherwise stated. Alternatively, a Cayley tree of order d is a free semigroup Σ^* generated by Σ = {g_1, g_2, …, g_d}, where d ∈ℕ. A labeled tree t over a finite alphabet 𝒜 is a function from Σ^* to 𝒜; a node of a labeled tree is an element of Σ^*, and the identity element relates to the root of the tree. Suppose x = x_1 x_2 … x_i, y = y_1 y_2 … y_j ∈Σ^* are nodes of a tree. We say that x is a prefix of y if and only if i ≤ j and x_k = y_k for 1 ≤ k ≤ i, and xy = x_1 ⋯ x_i y_1 ⋯ y_j means the concatenation of x and y. A subset L ⊂Σ^* is called prefix-closed if the prefix of every element of L belongs to L. A pattern is a function u: L →𝒜 with support L and is called an (n+1)-block if its support L = x Δ_n for some x ∈Σ^*, where Δ_n = {y = y_1 y_2 ⋯ y_n: y_i ∈Σ⋃{e}}.

Suppose that u is a pattern and t is a labeled tree. Let s(u) denote the support of u. We say that u is accepted by t if there exists x ∈Σ^* such that u_y = t_xy for every node y ∈ s(u). In this case, we say that u is a pattern of t rooted at the node x. A tree t is said to avoid u if u is not accepted by t; otherwise, u is called an allowed pattern of t (see Figure <ref> for instance).

We denote by 𝒯 (or 𝒜^Σ^*) the set of all labeled trees on 𝒜. The shift transformation σ: Σ^* ×𝒯→𝒯 is defined by (σ_w t)_x = t_wx for all w, x ∈Σ^*. Given a collection of patterns ℱ, let 𝖷_ℱ denote the set of trees avoiding any element of ℱ. A subset X ⊆𝒯 is called a tree-shift if X = 𝖷_ℱ for some ℱ. We say that ℱ is a set of forbidden patterns (or a forbidden set) of X. A tree-shift X = 𝖷_ℱ is called a tree-shift of finite type (TSFT) if the forbidden set ℱ is finite; we say that 𝖷_ℱ is a Markov tree-shift if ℱ consists of two-blocks. Ban and Chang <cit.> demonstrate that every TSFT can be treated as a Markov tree-shift after recoding, which extends a classical result in symbolic dynamical systems.

Every tree-shift of finite type is conjugate to a Markov tree-shift.

Proposition <ref> indicates that the investigation of Markov tree-shifts is essential for characterizing TSFTs. For the rest of this paper, a TSFT is referred to as a Markov tree-shift unless otherwise stated.

§.§ Entropy of tree-shifts

An important invariant of shift spaces is the topological entropy, which measures the growth rate of the number of admissible patterns. Such an invariant reflects the complexity in its own right; we refer readers to <cit.> for more details. The entropy of tree-shifts is defined as h(X)=lim sup_n →∞ln^2 |B_n(X)|/n, where B_n(X) is the collection of n-blocks of X, |B_n(X)| means the cardinality of B_n(X), and ln^2 = ln∘ln. Ban and Chang indicate that the limit h(X)=lim_n→∞ln^2 |B_n(X)|/n exists if X is a TSFT and that h(X) ∈{0, ln 2} for each TSFT X when d=2 <cit.>; furthermore, a sufficient condition for positive entropy is revealed <cit.>.

For the computation of entropy, Ban and Chang introduce the notion of a system of nonlinear recursive equations. Given k ∈ℕ, we say that a sequence {α_1;n, α_2;n, …, α_k;n}_n ∈ℕ forms a system of nonlinear recursive equations (SNRE) of degree (d, k) if α_i;n = F_i(n) for n ≥ 2, 1 ≤ i ≤ k, with initial condition α_i;1∈ℕ for 1 ≤ i ≤ k, where F_i(n) = ∑_c_1 + c_2 + ⋯ + c_k = d r_i; c_1, …, c_kα_1; n-1^c_1α_2; n-1^c_2⋯α_k; n-1^c_k with r_i; c_1, …, c_k∈ℤ^+. Let F = {F_1, F_2, …, F_k} be defined in Definition <ref>. We also say that the sequence {α_1;n, α_2;n, …, α_k;n}_n ∈ℕ is defined by F. For simplicity, F is called the SNRE corresponding to X.
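The double exponential growth behind the definition of h(X) is easy to observe by iterating an SNRE directly; the toy system below (degree (2, 2), our own illustrative choice) shows ln^2 α_{1;n}/n increasing toward ln d = ln 2:

```python
import math

# toy SNRE of degree (2, 2), chosen for illustration:
#   alpha_{1;n} = alpha_{1;n-1}^2 + alpha_{2;n-1}^2
#   alpha_{2;n} = 2 * alpha_{1;n-1} * alpha_{2;n-1}
a1, a2 = 1, 2                              # initial condition alpha_{i;1}
for n in range(2, 13):
    a1, a2 = a1*a1 + a2*a2, 2*a1*a2        # exact integer arithmetic
    print(n, math.log(math.log(a1)) / n)   # increases toward ln 2 ~ 0.6931
```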
Suppose that F is given. For 1 ≤ i ≤ k, we define the indicator vector v_F_i of F_i as v_F_i = (r_i; c_1, …, c_k). Note that the indicator vector v_F_i is unique up to permutation. For convenience, we represent the indicator vector with respect to the lexicographic order. The matrix I_F = [ v_F_1; v_F_2; ⋮; v_F_k ] is called the indicator matrix of F. For example, suppose that the sequence {α_1;n, α_2;n}_n ∈ℕ forms the SNRE {[ α_1;n = F_1 = α_1;n-1^2 +α_2;n-1^2,; α_2;n = F_2 = 2 α_1;n-1α_2;n-1,;α_1;1 = α_2;1 = 1. ] . Then the corresponding indicator matrix is I_F= [ 1 0 1; 0 2 0 ].

Suppose X is a TSFT over 𝒜 = {a_1, a_2, …, a_k}. Let X_i = {t ∈ X: t_ϵ = a_i} and γ_i;n = |B_n(X_i)| for 1 ≤ i ≤ k. It follows immediately that {γ_1;n, γ_2;n, …, γ_k;n}_n ∈ℕ forms an SNRE. Furthermore, every SNRE of degree (d, k) can be realized via a TSFT (cf. <cit.>). Let F = {F_1, …, F_k} be the representation of the SNRE of X. A subsystem called a reduced system of nonlinear recursive equations of F is defined as follows.

Suppose X is a TSFT, and F is the SNRE corresponding to X with indicator matrix I_F. We call E a reduced system of nonlinear recursive equations of F if E is an SNRE such that I_E is a binary matrix satisfying the following conditions. * I_E≤ I_F; * there is exactly one nonzero entry in each row of I_E; * the initial condition of the sequence defined by E is the same as the one defined by F. Herein, two matrices A, B ∈ℤ^m × n with A ≤ B means that A(i, j) ≤ B(i, j) for 1 ≤ i ≤ m, 1 ≤ j ≤ n.

Beyond the indicator matrix of an SNRE, a k× k nonnegative integral matrix M_E, called the weighted adjacency matrix of a reduced SNRE E, is defined as M_E (i, j) = max{m: α_j;n-1^m|E_i}, 1 ≤ i, j ≤ k. For example, consider the SNRE F={F_i}_i=1^2 defined in (<ref>) with indicator matrix I_F= [ 1 0 1; 0 2 0 ]. Then a reduced SNRE E of F with indicator matrix I_E = [ 1 0 0; 0 1 0 ] defines a sequence {β_1;n, β_2;n}_n ∈ℕ as follows. {[β_1;n = E_1 = β_1;n-1^2,; β_2;n = E_2 = β_1;n-1β_2;n-1,;β_1;1 = β_2;1 = 1. ] . Furthermore, the weighted adjacency matrix M_E of E is M_E = [ 2 0; 1 1 ].

A symbol a_i ∈𝒜 is called essential if γ_i;n≥ 2 for some n ∈ℕ; otherwise, we say that a_i is inessential. First suppose that, for a TSFT X over 𝒜, each symbol in 𝒜 is essential.

Let X be a TSFT and let F be the representation of the SNRE of X. If every symbol is essential, then h(X) = max{lnρ _M_E: E is a reduced SNRE of F}, where M_E is the weighted adjacency matrix of E and ρ _M_E is the spectral radius of M_E.

Suppose that, for a TSFT X, there are some inessential symbols, say, a_p_1, …, a_p_j. Ban and Chang demonstrate that Theorem <ref> still works provided that, in (<ref>), M_E is replaced by M_E', where M_E' is the matrix obtained by deleting all the rows and columns indexed by the inessential symbols. Readers are referred to <cit.> for more details.

Suppose that d = 3, k = 4. Let X be a TSFT corresponding to the SNRE {[ γ_1;n = γ_1;n-1γ_2;n-1γ_4;n-1 + γ_4;n-1^3,;γ_2;n = γ_3;n-1γ_4;n-1^2 + γ_4;n-1^3,;γ_3;n = γ_1;n-1^2γ_2;n-1 + γ_4;n-1^3,; γ_4;n = γ_4;n-1^3,;γ_i;1 = 1, 1 ≤ i ≤ 4. ] . It is easily seen that a_1, a_2, a_3 are essential symbols and a_4 is inessential. The weighted adjacency matrix M_E of the reduced SNRE E which attains the maximum in (<ref>) is M_E = [ 1 1 0 1; 0 0 1 2; 2 1 0 0; 0 0 0 3 ]. Since a_4 is inessential, we replace M_E with M_E' = [ 1 1 0; 0 0 1; 2 1 0 ]. Theorem <ref> shows that the entropy of X is h(X) = lnρ_M_E'≈ln 1.839, where ρ_M_E' is the maximal root of x^3 - x^2 - x - 1 = 0.
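The spectral radius in the example is immediate to check numerically; a quick sketch:

```python
import numpy as np

M = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [2., 1., 0.]])
rho = max(abs(np.linalg.eigvals(M)))
print(rho, np.log(rho))   # rho ~ 1.8393, the maximal root of x^3 - x^2 - x - 1
```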
Suppose X is a tree-shift of finite type and let F be the representation of the SNRE of X. If every symbol is essential, then h(X) = ln d.

It suffices to show that there exists a reduced SNRE E of F such that h(E) = ln d, since h(X) ≤ln d (cf. <cit.>). Let E be a reduced SNRE of F. Then the weighted adjacency matrix M_E satisfies ∑_j=1^k M_E(i, j) = d for 1 ≤ i ≤ k. Since every symbol is essential, Theorem <ref> implies that h(X) ≥lnρ_M_E, where ρ_M_E is the spectral radius of M_E. This completes the proof since ρ_M_E = d.

Proposition <ref> reflects the rigidity of entropy since it is a constant (ln d) whenever there is no inessential symbol. Let D = {M ∈ℳ_ℓ×ℓ(ℤ^+): ∑_q=1^ℓ M(p, q) ≤ d for 1 ≤ p ≤ℓ, ℓ≤ k} consist of nonnegative integral matrices whose dimension is less than or equal to k and the summation of each row is less than or equal to d. Theorem <ref> illustrates a complete characterization of the entropy of TSFTs.

Let H = {h(X): X is a TSFT} be the entropy spectrum of TSFTs and let D be defined as in (<ref>). Then H = {lnρ: ρ is the spectral radius of M ∈ D}. More specifically, H = {lnλ: λ∈ℕ, 1 ≤λ≤ d} if k = 2.

We start with demonstrating the case where k = 2 to clarify our idea. If a_1 and a_2 are both essential symbols, Proposition <ref> indicates that h(X) = ln d. On the other hand, it is easily seen that h(X) = 0 provided a_1 and a_2 are both inessential. It remains to consider the case where exactly one symbol is essential. Without loss of generality, we assume that a_1 is the essential symbol. It follows immediately from a_2 being inessential that every two-block u with u_ϵ = a_2 and u_x = a_1 for some x = 1, 2, …, d belongs to ℱ. The SNRE of X is then as follows. {γ_1;n = ∑_c=0^d ℓ_c γ_1;n-1^c γ_2;n-1^d-c,γ_2;n = γ_2;n-1^d, n ≥ 2,γ_i;1 = 1, 1 ≤ i ≤ 2. . Since a_1 is essential, there exists c < d such that ℓ_c > 0. Let c = max{c: ℓ_c > 0} and let E be the representation of the following reduced SNRE. {β_1;n = ℓ_cβ_1;n-1^cβ_2;n-1^d-c,β_2;n = β_2;n-1^d. . It follows from M_E = [ c d - c; 0 d ] and M_E' = (c) that h(X) = lnc. This shows that H ⊆{lnλ: λ∈ℕ, 1 ≤λ≤ d}. Conversely, for 1 ≤ c ≤ d, let X be a TSFT corresponding to the SNRE {γ_1;n = γ_1;n-1^c γ_2;n-1^d-c + γ_2;n-1^d,γ_2;n = γ_2;n-1^d,γ_i;1 = 1, 1 ≤ i ≤ 2. . Then h(X) = ln c. The proof of H = {lnλ: λ∈ℕ, 1 ≤λ≤ d} is thus complete.
This section ends with Corollary <ref>, which comes immediately from the proof of Theorem <ref> and is useful in the investigation of the entropy minimality problem of neural networks on Cayley trees. Suppose that X = 𝖷_ℱ is a TSFT over 𝒜 = {a_1, …, a_k}. Then h(X) = ln d if and only if X|_𝒜' is nontrivial for some essential set 𝒜' ⊆𝒜. More specifically, when k=2, h(X) = ln d if and only if 𝒜 is essential or the 2-block u ∈ℬ with u_x = a for x ∈Δ_1 and a is essential. Obviously, X|_𝒜' being nontrivial for some essential set 𝒜' ⊆𝒜 infers that h(X) = ln d. For the converse direction, it suffices to show the case where k = 2, the general cases can be derived analogously. If a_1 and a_2 are inessential, then h(X) = 0, which is a contradiction. Without loss of generality, we may assume that a_1 is essential.If a_2 is essential, Proposition <ref> demonstrates that h(X) = ln 2. Otherwise, h(X) = ln d infers that the system{γ_1;n = γ_1;n-1^d,γ_2;n = γ_2;n-1^d, .must be a reduced SNRE of the original system. This derives the desired result.§ NEURAL NETWORKS ON CAYLEY TREES The overwhelming majority of models of neural networks are defined on ℤ^n lattice. While it is known that the characteristic shape of neurons is tree <cit.>, this section considers neural networks defined on Cayley trees. A neural network on Cayley tree (CTNN) is represented asddt x_w(t) = - x_w(t) + z + ∑_v ∈𝒩 a_v f(x_wv(t)),w ∈Σ^*,for some finite set 𝒩⊂Σ^* known as the neighborhood, v ∈𝒩, and t≥ 0. Herein, x_w(t) ∈ℝ represents the internal status of neuron at w; the map f(s) is called the output function or activation function, and z ∈ℝ is called the threshold. The weighted parameters A = (a_v)_v ∈𝒩, a_v ∈ℝ, is called the feedback template, andFigure <ref> shows the connection of a binary CTNN with the nearest neighborhood. Equation (<ref>) is derived by adopting Hopfield's neural network (<cit.>) on the Cayley tree. Normatov and Rozikov <cit.> show that harmonic functions on Cayley trees, which is a discrete time version of (<ref>), are periodic with respect to normal subgroups of finite index. The present paper investigates the complexity of output solutions with respect to the output functionf(s)=12(|s+1|-|s-1|)which is proposed by Chua and Yang <cit.> and is widely applied to many disciplines such as signal propagation between neurons, pattern recognition, and self-organization.A mosaic solution x = (x_w)_w ∈Σ^* of (<ref>) is an equilibrium solution which satisfies |x_w| > 1 for all w ∈Σ^*; its corresponding pattern y = (y_w)_w ∈Σ^* = (f(x_w))_w ∈Σ^* is called a mosaic output pattern. Since the output function (<ref>) is piecewise linear with f(s)=1 (resp. -1) if s ≥ 1 (resp. s ≤ -1), the output of a mosaic solution x = (x_w)_w ∈Σ^* must be an element in { -1,+1}^Σ^*, which is why we call it a pattern. Given a CTNN, we refer to 𝐘 as the output solution space; namely,𝐘 = { (y_w)_w ∈Σ^*: y_w = f(x_w)and(x_w)_w ∈Σ^* is a mosaic solution of (<ref>)} . §.§ Learning problem of neural networks on Cayley trees Learning problems (also called the inverse problems) are some of the most investigated topics in a variety of disciplines. From a mathematical point of view, determining whether a given collection of output patterns can be exhibited by a CTNN is essential for the study of learning problems. This section reveals the necessary and sufficient conditions for the capability of exhibiting the output patterns of CTNNs. 
The discussion is similar to the investigation in <cit.>, thus we only sketch the key procedures of the learning problems of CTNNs with the nearest neighborhood, namely, 𝒩 = Σ⋃{e}, for the compactness and self-containedness of this paper. A CTNN with the nearest neighborhood is realized asddt x_w(t) = - x_w(t) + z + a f(x_w(t)) + ∑_i=1^d a_i f(x_wi(t)),where a, a_1, …, a_d ∈ℝ and w ∈Σ^*. Considering the mosaic solution x = (x_w)_w ∈Σ^*, the necessary and sufficient conditions for y_w = f(x_w) = 1 isa - 1 + z > - ∑_i=1^d a_i y_wi.Similarly, the necessary and sufficient conditions for y_w = f(x_w) = -1 isa - 1 - z > ∑_i=1^d a_i y_wi. LetV^n = { v ∈ℝ^n : v = (v_1, …, v_n),and|v_i| = 1, 1 ≤ i ≤ n }.Let α = (a_1, …, a_d) represent the feedback template without the self-feedback parameter a. The basic set of admissible local patterns with the “+" state in the parent neuron is denoted asℬ_+( A, z) = {v ∈ V^d: a - 1 + z > -α· v },where “·" is the inner product in Euclidean space. Similarly, the basic set of admissible local patterns with the “-" state in the parent neuron is denoted asℬ_-( A, z) = {v ∈ V^d: a - 1 - z > α· v }.Furthermore, the admissible local patterns induced by (A, z) can be denoted byℬ(A, z) = ℬ_+( A, z) ⋃ℬ_-( A, z),whereℬ_+( A, z)= {v: v_ϵ = 1and(v_1, …, v_d) ∈ℬ_+( A, z)},ℬ_-( A, z)= {v: v_ϵ = -1and(v_1, …, v_d) ∈ℬ_-( A, z)}.Note that ℬ(A, z) consists of two-blocks over 𝒜 = {1, -1}. For simplicity, we omit the parameters (A, z) and refer to ℬ as ℬ(A, z).Suppose U is a subset of V^n, where n ≥ 2 ∈ℕ. Let U^c = V^n ∖ U. We say that U satisfies the linear separation property if there exists a hyperplane H that separates U and U^c. More precisely, U satisfies the separation property if and only if there exists a linear functional g(z_1, z_2, …, z_n) = c_1 z_1 + c_2 z_2 + ⋯ + c_n z_n such thatg(v) > 0 for v ∈ U and g(v) < 0 for v ∈ U^c.Figure <ref> interprets those U ⊂ V^2 satisfying the linear separation property.Proposition <ref> elucidates the necessary and sufficient condition for the learning problems of CTNNs; such a property holds for arbitrary neighborhood 𝒩 provided 𝒩 is prefix-closed. The proof of Proposition <ref> is similar to the discussion in <cit.>, thus it is omitted. A collection of patterns ℬ = ℬ_+ ⋃ℬ_- can be realized in (<ref>) if and only if either of the following conditions is satisfied: * -ℬ_+ ⊆ℬ_- and ℬ_- satisfies linear separation property;* -ℬ_- ⊆ℬ_+ and ℬ_+ satisfies linear separation property.Letℝ^d+2 = { (A, z) | A ∈ℝ^d+1, z ∈ℝ}denote the parameter space. Theorem <ref> demonstrates that ℝ^d+2 can be partitioned into finitely equivalent sub-regions such that two sets of parameters induce identical basic sets of admissible local patterns if and only if they belong to the same partition in the parameter space. We skip the proof of Theorem <ref> for the compactness of this paper since the demonstration is similar to ths discussion in <cit.>. There exists a positive integer K = K(d) and a unique collection of open subsets {P_i}_i=1^K of the parameter space (<ref>) satisfying * ℝ^d+2 = ⋃_i=1^K P_k;* P_i ⋂ P_j = ∅ for all i ≠ j;* ℬ(A, z) = ℬ(A', z') if and only if (A, z), (A', z') ∈ P_i for some 1 ≤ i ≤ K.Herein, P indicates the closure of P in ℝ^d+2.A straightforward examination asserts that, whenever a set of parameters (A, z) is given, the output solution space 𝐘 is a Markov tree-shift since 𝐘 = 𝖷_ℱ, where ℱ = {-1, 1}^Δ_1∖ℬ(A, z). We consider the case where d = 2 as an example. 
Note that, whenever the parameters a_1 and a_2 are determined, (<ref>) and (<ref>) partition the a-z plane into 25 regions; the “order" (i.e., the relative position) of lines a - 1 +(-1)^ℓ z = (-1)^ℓ (a_1 y_w1 + a_2 y_w2), ℓ = 1, 2, can be uniquely determined by the following procedures: * The signs of a_1, a_2 (i.e., the parameters are positive or negative).* The magnitude of a_1, a_2 (i.e., |a_1| > |a_2| or |a_1| < |a_2|).This partitions a-z plane into 8 × 25 = 200 sub-regions. According to Theorem <ref>, the parameter space 𝒫^4 is partitioned into less than 200 equivalent sub-regions.§.§ Entropy bifurcation of neural networks on Cayley treesSuppose that, for each neuron of a neural network on Cayley tree, we substitute its output pattern 1 (resp. -1) with + (resp. -); then the output solution space 𝐘 of a CTNN is a Markov tree-shift over 𝒜 = {+, -}. We denote the TSFT 𝐘 by 𝐘_ℬ when we want to emphasize the basic set of admissible patterns ℬ. This subsection investigates the entropy and the entropy bifurcation diagram of 𝐘.We start with the following lemma, for which the proof can be done via straightforward elucidation, thus it is omitted. Suppose that 𝐘_ℬ is an output solution space such that u_ϵ = v_ϵ for all u, v ∈ℬ. Then h(𝐘_ℬ) = 0. Based on Lemma <ref>, we may assume that, for each basic set of admissible local patterns ℬ, there exist u, v ∈ℬ such that u_ϵ≠ v_ϵ. We call such a set of local patterns ℬ nontrivial; an output solution space 𝐘_ℬ is called nontrivial if its corresponding set of local patterns ℬ is nontrivial. Suppose that 𝐘 is an output solution spaces of (<ref>). Then h(𝐘) = 0 or ln d. Lemma <ref> suggests that we only need to consider nontrivial output solution spaces; that is, there exist u, v ∈ℬ such that u_ϵ = + and v_ϵ = -. Proposition <ref> demonstrates that h(𝐘) = ln d if both symbols + and - are essential. It remains to consider the case where exactly one symbol is inessential.Without loss of generality, we may assume that - is inessential. In other words, if u ∈ℬ satisfies u_ϵ = -, then u_i = - for 1 ≤ i ≤ d. Proposition <ref> shows that there exists v ∈ℬ such that v_ϵ = v_1 = ⋯ = v_d = +; Corollary <ref> indicates that h(𝐘_ℬ) = ln d. This completes the proof. The well-known entropy minimality problem investigates when the entropy of any proper subspace is strictly smaller than the entropy of the original shift space. For the case of CTNNs, the entropy minimality problem is equivalent to investigating under what condition h(𝐘_ℬ') < h(𝐘_ℬ), where ℬ' is obtained by deleting a pattern in ℬ. Furthermore, it follows from Theorem <ref> that the change of entropy is from ln d to 0; in other words, simply removing a pattern from the basic set of admissible local patterns ℬ makes significant influence to the original space. Equation (<ref>) characterizes those parameters which make such a tremendous influence.The discussion in the previous subsection shows that, once the parameters a_1, …, a_d are fixed, (<ref>) and (<ref>) partition the a-z plane into (2^d+1)^2 regions. We encode these regions by [p, q] for 0 ≤ p, q ≤ 2^d and denote the corresponding basic set of admissible local patterns as ℬ_[p, q]. More specifically, ℬ_[p, q] = ℬ_[p, q]; +⋃ℬ_[p, q]; - which satisfies |ℬ_[p, q]; +| = p and |ℬ_[p, q]; -| = q. For simplicity, we denote 𝐘_ℬ_[p, q] by 𝐘_[p, q]. The following proposition comes immediately. Suppose that the parameters a_1, …, a_d are given. Then 𝐘_[p, q]≅𝐘_[q, p] for 0 ≤ p, q ≤ 2^d. 
Since ℬ_[p, q]; + = - ℬ_[q, p]; - and ℬ_[p, q]; - = - ℬ_[q, p]; +, the desired result is then derived. Suppose that the parameters a_1, …, a_d are given. A pair of parameters (a, z) is called critical if, for each r > 0, there exist (a', z'), (a'', z'') ∈ B_r(a, z) such that h(𝐘_ℬ') = ln d and h(𝐘_ℬ'') = 0, where B_r(a, z) is the r-ball centered at (a, z) and ℬ' = ℬ(A', z'), ℬ'' = ℬ(A'', z''), A' = (a', a_1, …, a_d), and A'' = (a'', a_1, …, a_d). Suppose that the parameters a_1, …, a_d are given. Then h(𝐘_[p, q]) = 0 if and only if

min{p, q} = 0 or max{p, q} = 1,

where 0 ≤ p, q ≤ 2^d. Furthermore, let ℓ be the index such that |a_ℓ| = min{|a_i| : 1 ≤ i ≤ d}. Then (a, z) is critical if and only if

a - 1 = ||z| - |a_ℓ|| - ∑_i ≠ ℓ |a_i|.

Observe that the proof of Theorem <ref> demonstrates that h(𝐘_[p, q]) = 0 if and only if min{p, q} = 0 or max{p, q} = 1. It remains to show that (a, z) is critical if and only if (a, z) satisfies (<ref>). Let C = {∑_i=1^d s_i a_i : s_i ∈ {-1, 1} for all i}, and let

K_1 = max C and K_2 = max (C ∖ {K_1})

be the largest and the second largest elements in C, respectively. A careful but straightforward verification asserts that (a, z) is critical if and only if

a - 1 = ||z| - (K_1 - K_2)/2| - (K_1 + K_2)/2.

(See Figure <ref> for more information.) The desired result follows from the fact that

K_1 = ∑_i=1^d |a_i| and K_2 = ∑_i ≠ ℓ |a_i| - |a_ℓ|.

To close this section, we give the following example to illustrate the investigation of entropy bifurcation diagrams of neural networks on the binary Cayley tree. A neural network on the binary Cayley tree is represented as

d/dt x_w(t) = - x_w(t) + z + a f(x_w(t)) + a_1 f(x_w1(t)) + a_2 f(x_w2(t)),

where w ∈ Σ^* and Σ = {1, 2}. The necessary and sufficient conditions for y_w = 1 and y_w = -1 are

a - 1 + z > - (a_1 y_w1 + a_2 y_w2) and a - 1 - z > a_1 y_w1 + a_2 y_w2,

respectively. Suppose that a_1, a_2 satisfy 0 < -a_1 < a_2. It follows from a_1 - a_2 < -a_1 - a_2 < a_1 + a_2 < -a_1 + a_2 that, whenever a and z are fixed, the “ordered” basic set of admissible local patterns ℬ = ℬ_+ ∪ ℬ_- must obey

ℬ_+ ⊆ {(+, -, +), (+, +, +), (+, -, -), (+, +, -)} and ℬ_- ⊆ {(-, +, -), (-, -, -), (-, +, +), (-, -, +)}.

If the parameters a and z lie in the region [3, 2] (cf. Figure <ref>), then the basic set is

ℬ_[3, 2] = {(+, -, +), (+, +, +), (+, -, -), (-, +, -), (-, -, -)}.

Theorem <ref> and Proposition <ref> imply that h(𝐘_[3, 2]) = ln 2 and (a, z) is critical if and only if a - 1 = ||z| + a_1| - a_2.

§ CONCLUSION AND DISCUSSION

In this paper, motivated by Gollo et al.'s works (cf. <cit.>), we study the dynamical behaviors that tree-structured neural networks are capable of. More specifically, we focus on equilibrium solutions known as mosaic solutions, since they are related to the long-term memory of the brain and are applied in a wide range of disciplines. Entropy, a frequently used invariant, reveals the growth rate of the amount of information stored in a (tree-structured) neural network. Alternatively, positive entropy reflects the fact that adding one more neuron enlarges the amount of stored memory exponentially with respect to the original system. A small modification of the coupling weights that results in a different entropy means the neural network is sensitive, or in a critical state. We characterize the criticality of a neural network in terms of whether or not the neural network is entropy minimal. After demonstrating that the entropy spectrum of tree-structured neural networks is discrete, we give the necessary and sufficient condition for determining whether a neural network is sensitive.
Furthermore, the formula for the coupling weights of critical neural networks is provided. Since the activation function considered in this article is the piecewise linear transformation f(s) = (1/2)(|s+1| - |s-1|), the output patterns of mosaic solutions are binary patterns. That is, the coloring set 𝒜 consists of only two symbols. It is of interest what conclusions can be derived when 𝒜 consists of k symbols for some integer k ≥ 3. Furthermore, we focus on the rooted Cayley tree as the network's topology in the whole discussion; it is also of interest whether or not our results remain true for the Bethe lattice. Related work is in preparation.
http://arxiv.org/abs/1706.09283v2
{ "authors": [ "Jung-Chao Ban", "Chih-Hung Chang", "Nai-Zhu Huang" ], "categories": [ "math.DS", "37A35, 37B10, 92B20" ], "primary_category": "math.DS", "published": "20170627030137", "title": "Entropy bifurcation of neural networks on Cayley trees" }
Towards Modern Inclusive Factories: A Methodology for the Development of Smart Adaptive Human-Machine Interfaces Valeria Villani1, Lorenzo Sabattini1, Julia N. Czerniak2, Alexander Mertens2, Birgit Vogel-Heuser3 and Cesare Fantuzzi1 1Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Reggio Emilia, Italy. Email: {valeria.villani, lorenzo.sabattini, cesare.fantuzzi}@unimore.it 2Institute of Industrial Engineering and Ergonomics, RWTH Aachen University, Aachen, Germany. Email: {j.czerniak, a.mertens}@iaw.rwth-aachen.de 3Institute of Automation and Information Systems, Technical University of Munich, Munich, Germany. Email: [email protected]
============================================================

Modern industrial automatic machines and robotic cells are equipped with highly complex human-machine interfaces (HMIs) that often prevent human operators from an effective use of the automatic systems. In particular, this applies to vulnerable users, such as those with low experience or education level, the elderly and the disabled. To tackle this issue, it becomes necessary to design user-oriented HMIs, which adapt to the capabilities and skills of users, thus compensating their limitations and taking full advantage of their knowledge. In this paper, we propose a methodological approach to the design of complex adaptive human-machine systems that might be inclusive of all users, in particular the vulnerable ones. The proposed approach takes into account both the technical requirements and the requirements for ethical, legal and social implications (ELSI) for the design of automatic systems. The technical requirements derive from a thorough analysis of three use cases taken from the European project INCLUSIVE. To achieve the ELSI requirements, the MEESTAR approach is combined with the specific legal issues for occupational systems and requirements of the target users.

§ INTRODUCTION

Advances in technology in modern industrial settings have led to the introduction of extremely complex automatic machines and robotic cells. Despite such a massive introduction of advanced technological solutions, the role of human operators in this context is still focal, since they are responsible for controlling and supervising manufacturing activities and for achieving the desired flexible production. Nevertheless, this new technological scenario is not favorable to human operators themselves: indeed, the complexity of modern manufacturing plants is reflected in an increased complexity of the accompanying human-machine interfaces (HMIs), which allow the user to operate the machine, observe the system status and, if necessary, intervene in the process <cit.>. The increase in complexity of modern industrial HMIs can still be tackled by the most experienced human operators, who can interact efficiently with the machine only at the expense of an unsustainably increased mental workload and stress.
However, in the worst condition, vulnerable workers, such as those with low experience or education level, the elderly and the disabled, can barely sustain such an interaction in an effective manner. To tackle this issue, it is necessary to adopt an anthropocentric approach that reverses the paradigm from the current belief that "the human learns how the machine works" to the future scenario in which "the machine adapts to the human capability", accommodating her/his own time and features <cit.>. This is realized by adaptively simplifying the HMI based on the user's features and complementing her/his cognitive capabilities by advanced sensing and higher precision of machines. Following such an approach, it would be possible to create an inclusive <cit.> and flexible working environment for any kind of operator, taking into account multiple cultural backgrounds, skills, age and different abilities. This calls for developing a methodology for the design of adaptive human-centered HMIs for industrial machines and robots.

HMIs typically used for supervising industrial processes do not provide any possibility of controlling the amount of displayed information, or its form. Hence, while the human operator is flexible and adaptable, the system is not. In particular, the control systems applied to industrial processes typically respond in a specified way, without regard as to whether the flow of information is low or extremely high, or the level of expertise of the user is good or bad <cit.>. The human operator is then typically the only element that needs to adapt her/his behavior based on the situation. Namely, the operator needs to be sufficiently flexible, to be able to cope both with common activities and with unpredictable situations, such as in the presence of dangers. This can cause significant difficulties for the operators, in particular considering the fact that the amount of monitored data that comes from modern production processes is constantly increasing, and control systems are becoming increasingly complex <cit.>. To overcome this issue, the concept of context-dependent automation, also known as adaptive automation, has been introduced <cit.>. Generally speaking, context awareness is the ability for a system to sense, interpret, respond and act based on the context <cit.>. Based on this paradigm, the level of automation of a system is designed to be variable, depending on situational demands during operational use. Along similar lines, the idea of adaptive user interfaces has been developed, which consists in changing how the information is presented, in such a way that only the relevant pieces of information are provided to the operator, based on the context. Examples of adaptive user interfaces have been developed considering different application domains, such as automotive <cit.>, aeronautics <cit.> and smartphones and hand-held devices <cit.>. However, to the best of the authors' knowledge, only a few pioneering examples have been preliminarily presented regarding HMIs for complex industrial systems <cit.>. Specifically, <cit.> described a preliminary concept of architecture for an HMI that adapts the presentation of information based on the operator responsiveness. Profiling of the operators is considered in <cit.>, and the HMI selectively presents information based on the profile of the current user.
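To fix ideas on what such profile-based adaptation amounts to, the following toy Python sketch maps an operator profile to an information-load level of the HMI. It is purely illustrative: all names, fields and thresholds are our own assumptions rather than part of any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class OperatorProfile:
    experience: float   # 0 (novice) .. 1 (expert)
    workload: float     # 0 (idle)   .. 1 (overloaded), e.g., from sensing

def hmi_detail_level(profile: OperatorProfile) -> str:
    """Toy context-dependent rule: show fewer, more guided items to novice
    or overloaded operators, and the full interface to unburdened experts."""
    if profile.workload > 0.7 or profile.experience < 0.3:
        return "guided"      # step-by-step procedures, minimal parameters
    if profile.experience > 0.7 and profile.workload < 0.3:
        return "full"        # all parameters and diagnostics exposed
    return "standard"

print(hmi_detail_level(OperatorProfile(experience=0.2, workload=0.5)))  # guided
```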
Going beyond this state of the art, the European project INCLUSIVE aims at developing a smart interaction system that adapts the information load of the HMI and the automation capability of the machine to the physical, sensorial and cognitive capabilities of workers <cit.>. In particular, the final goal is to provide technological solutions for compensating workers' limitations (e.g. due to age or inexperience), while taking full advantage of their knowledge. Three groups of operators are considered, namely elderly, disabled, and inexperienced operators, since they are believed to be the most vulnerable ones in the interaction with complex automatic systems, as discussed in Sec. <ref>.

Three main pillars constitute the INCLUSIVE system <cit.>. The first pillar relates to the measurement of human capabilities: the system will measure the human capability of understanding the logical organization of information and the cognitive burden the operator can sustain (automatic human profiling). The second pillar consists in the adaptation of interfaces to human capabilities: the system will adapt the organization of the information, the means of interaction, and the automation tasks that are accessible by the user, depending on her/his measured capabilities. Finally, the third pillar is about teaching and training for unskilled users: the system will be able to teach the correct way to interact with the machine to unskilled users, exploiting also simulation in virtual and augmented environments.

In this paper, we propose a set of methodological recommendations for the design of an adaptive human-machine system that is inclusive for all users. In particular, we derive the technical requirements that a complex human-machine system, such as the one considered in INCLUSIVE, should fulfill in order to allow also vulnerable users to access it. Such requirements are defined starting from the analysis of the industrial use cases of INCLUSIVE, but have general validity. In particular, the main issues related to state-of-the-art solutions in terms of HMI are highlighted, referring explicitly to representative target scenarios. From the analysis of the use cases, a set of users' needs is defined. Specifically, users' needs describe the technical issues and difficulties that operators typically encounter with the currently available technological solutions. Users' needs are then abstracted, to define the technical system requirements. These are general technical methodological guidelines that should be considered in the design of any complex human-machine system, in order to make it accessible also to vulnerable users.

Moreover, we carry out an analysis of the different ethical, social and legal implications (ELSI) of such a system, to protect the user against harm and disadvantages. Based on the MEESTAR approach <cit.>, which is an instrument for identifying ethical problems, we develop an ELSI concept and test its appropriateness in a possible operative scenario. Then, we derive some design recommendations in terms of ELSI requirements for the development of smart interaction systems for automated production machines. The aim is to offer fair requirements, independent of individual skills and capabilities.

§ DESCRIPTION OF THE CONSIDERED USE CASES

To derive methodological considerations that have general validity, it is important to start from real use cases that depict the scenario of human-machine systems currently utilized in industrial environments.
To this end, we consider, as a case study, the industrial use cases addressed in the INCLUSIVE project, since they are representative of a wide area of interest for industry in Europe: * machinery for small companies, typically run by elderly owners; * automation solutions made for developing countries; * industrial plants made by a big company. Specifically, the first use case refers to machinery used for woodworking in artisans' shops. The second one considers a robotic solution to be applied in a company located in a developing country, where operations are mostly performed manually. In particular, the considered robotic solution is for panel bending. Finally, the third use case refers to a bottling company and, in particular, a labelling unit is considered. Such use cases have been chosen since they address the different categories of most vulnerable users, namely the elderly, the disabled and the low experienced. Specifically, by elderly we mean those people in the last years of their work life. Generally, these workers have large experience in traditional industrial processes, but are not familiar with modern computerized devices and thus have difficulties in utilizing modern automatic machines that come with complex HMIs. As regards people with physical impairments and limited cognitive abilities, such limitations likewise introduce difficulties in the use of complex automatic machines. Finally, by inexperienced we refer to people with a low level of education, limited expertise in the use of automatic machines and/or computerized HMIs, and lack of experience in industrial processes. For each use case, a specific working scenario is analyzed in order to derive the concrete limitations of currently implemented solutions. These activities were selected by the corresponding industrial partners of the INCLUSIVE consortium, since they require unavoidable interaction of the user with the machine and are representative of the most frequent operations with automatic machines. Specifically, for the first use case we focus on the activities related to the tuning of the machine, to make it ready for woodworking (tuning of the tools warehouse, tuning of the worktable area components), and on routine maintenance procedures. For the second use case, we consider the standard activities performed by a user for bending a part and replacing malfunctioning tools. The working scenario for the third use case refers to the fault recovery procedure, performed in jog mode, for misalignment of the neck ring label of bottles, and to the changeover of the printing format, required at the beginning of a working day or when a new bottle or label is produced on the line.

§ ANALYSIS OF THE PROBLEMS OF CURRENT HMIS

For each of the working scenarios, we analyzed how interaction is currently carried out, aiming at finding pitfalls which should be corrected in an inclusive system.

§.§ Use case 1

The first limitation in the current implementation of the human-machine interaction lies in the fact that there is a clear lack of guided procedures assisting the user. In fact, the user is currently barely supported by the interface: only simple alarms are displayed, which describe what the current problem is, but not how to solve it.
Moreover, as regards the setup of the tools change, there is a misalignment between the equipment in the physical store (i.e., the tools on board the machine) and that in the virtual one shown in the HMI (i.e., the tools that the HMI displays as on board the machine), since the virtual store does not update automatically when a change in the physical one is made. As a consequence, currently the operator must pay attention to avoid mistakes that could jeopardize the operation of the machine: clearly, this activity is time-consuming and prone to errors. This consideration applies also to the setup of the working area. Indeed, currently the interface supports the operator only by displaying, in a picture, the position of the components. It is up to the operator to manually move the various components into the correct position. This lack of intuitiveness and assistance results in an additional decrease of efficiency and raises problems related to the constant need to consult the operator's manual, thus stopping normal operations to solve routine issues. However, since the manual is typically not stored close to the machine and is not organized with a clear focus on troubleshooting, it is rarely used by the operators, who end up directly contacting the assistance service to solve routine issues. In some other cases, they perform some tasks following unofficial shortcuts rather than the official procedures recommended in the manual. Moreover, given the lack of guidance, errors of inexperienced operators often severely compromise the operation of the machine.

§.§ Use case 2

With respect to the second use case, the main problem of the current HMI lies in the fact that these robotic cells can be used only by highly skilled personnel. In particular, background education in mechanical or electrical fields is necessary, since operators need to have significant coding skills both to program the system, and to be able to recover from problems that could arise during normal operations, also in simple cases, such as a photocell malfunction. The use of the system by unskilled operators usually causes several problems, since they often choose the wrong tool to perform the bending operations, or the wrong material thickness, thus making bending not possible, or wrong settings in the definition of the air pressure, which thus lead to incorrect bending operations. Moreover, current HMIs are based on touch screens and standard computers, and they cannot be utilized effectively by people with disabilities of the upper limbs, or by blind people. Further, as in the previous use case, no guided procedure is available, besides the manual: hence, only operators with long experience are able to solve problems. Although several choices need to be made for setting up the system (e.g. the correct angle to be used for bending a certain part), commonly adopted solutions exist, but they are known only by expert operators. Also, the operator needs to decide what parameters need to be changed, and then see what the result will be: again, this operation is mainly based on the operator's experience.

§.§ Use case 3

As regards the last use case, one of the biggest issues is related, also in this case, to the fact that the use of the system by untrained users is impractical. Operators need a specific training phase before being able to interact with the machine. In particular, during the first uses, operators perceive the interaction with the system as uncomfortable.
In these conditions, it was reported that operators feel afraid of damaging the system, the machine or the product, especially if a trainer or a supervisor is close by: indeed, although these people are trying to help or prevent disasters, the employee is stressed by this situation even more. Moreover, another source of stress is the fact that operators do not receive any feedback or acknowledgement of performed activities, to help them understand if they are doing well. Also in this case, inexperienced operators need the manual to check for every possible fault cause and how to correct it. Despite this, it still happens that wrong operations are often performed, or operations are not correctly performed according to the manual, and, in particular, often the wrong operational mode is selected, e.g. semiautomatic or manual instead of jog mode. All these issues appear, in particular, for operators that are new to machinery or for low educated people.

§ DEFINITION OF USERS' NEEDS

The users' needs have been identified from the above analysis of the problems of current interaction systems. The first category of users' needs refers to the inclusion of all users in complex human-machine systems. The system should be effectively usable by inexperienced operators, by operators with different age, level of work experience (namely, novice users and expert operators) and education, and by those with physical impairment. Specifically, the presence of an easily accessible guidance, which might exploit augmented reality for step-by-step guided procedures, could be a substantial advantage for unskilled operators, in order to make problem solving tasks accessible also to them. In this regard, programming by code writing, which is currently required in the scenario of the second use case, should not be necessary. As regards physical impairments, different disabilities might be typical, depending on the application scenario: as an example, in the case of woodworking machines, missing fingers have been reported as a typical impairment. Thus, a second group of users' needs arises: the organization of information should be user-oriented. This implies that, on the one side, procedures should adapt to the operator's skills, thus being sufficiently clear for unskilled operators and not too long-winded for the skilled operators. On the other side, the system should guide the operator during ordinary operations, such as setup or maintenance. A teaching module could be implemented, to suggest to unskilled operators common practice solutions. As a consequence, specific prior training and studying the manual should not be necessary. Despite this, it should be possible to perform operations in the correct sequence, according to the manual, by means of proper suggestions suitably provided by the HMI. This should be possible also for tunable procedures, where the system should suggest to the operator what parameters need to be changed, based on the desired result. A solution for unskilled operators could be to provide suggestions on what parameters need to be changed, knowing how they influence the achieved result. These users' needs lead to the consideration that human factors must be prioritized. Indeed, the system should be perceived as comfortable for all the users, and the stress level during the use of the system should be low.
In order for this to be achieved, the intervention of supervisors for assisting the operators should be avoided, and operators should feel confident when using the system alone. As a consequence of such an anthropocentric approach, the operator's performance should be automatically enhanced, in the sense that the operators should be enabled to perform the correct actions and choices. The number of errors should be reduced, while the execution time should be improved. Specifically, the correct operational mode and the correct value for critical parameters should be automatically selected. Also, the choice of wrong options should be prevented, and the HMI should depict the actual current equipment and state of the machine. Finally, some advanced technological solutions should be implemented to allow a smoother interaction with the machine. Specifically, hands-free interaction, such as speech recognition and synthesis, should be possible, to enable the operators to interact with the machine when wearing gloves or protection equipment. Additionally, portable interfaces, such as wearable devices and augmented reality, should be available, to guide the operators in the working area.

§ TECHNICAL REQUIREMENTS

Based on the description of the use cases and of the identified user issues, the following system requirements are derived. They describe how an adaptive human-machine system should be implemented in order to be inclusive for all users, and in particular elderly, disabled and low experienced users: * The interface adapts to the level of skills of the operator. * The system can be used by low educated operators. * The system can be used by physically and cognitively impaired operators. * The system can be used by people with low computer skills. * The system enforces the correct procedures. * The operator feels satisfied by the interaction experience. * Interaction with the system generates a low level of stress for the operators.

§ ETHICAL, SOCIAL AND LEGAL ASPECTS

The introduction of a system that processes sensitive personal data to disclose barriers of human capabilities requires that ethical, legal and social requirements also be taken into account, to protect the user against harm and disadvantages. However, evaluating ethical, social and legal implications (ELSI) represents a specific challenge. In this paper we propose to deal with ELSI aspects by a dedicated approach, namely the MEESTAR model, which was originally developed for evaluating socio-technical arrangements in the field of age-appropriate assisting systems <cit.>. It is an analytical instrument which guides the process of reflecting on the use of technology. The model aims at identifying ethically problematic effects in a structured way and, on that basis, at developing appropriate solutions. The model focuses on negative effects, requiring that the system causes little or no harm to the user. The first step of the MEESTAR analysis is to identify relevant ethical dimensions for the particular scenario. Thus, the aim of this approach is to find a basis for ethical, social and legal aspects, according to the intention of implementing sensors for measuring human capabilities and tracking individual health data. Furthermore, legal requirements given by the European Union (EU) are considered[In this paper we consider only EU legislation.], and finally responsibility for the needs of vulnerable target users is taken into account. Working with MEESTAR involves the systematic consideration of three axes, as shown in Fig. <ref>.
The x-axis consists of seven ethical dimensions: care, autonomy, safety, justice, privacy, participation and self-conception. The y-axis describes stages of ethical evaluation, allocating problems among four levels of ethical sensitivity. The z-axis provides three points of view (individual, organizational, social). The legal issues regard mainly data protection, safety and health at work, and product requirements. The main directives in the context of production machines are the Machinery Directive 2006/42/EC, about the construction of safety-related products, and the Council Directive 89/391/EEC, on the introduction of measures to encourage improvements in the safety and health of workers at work. The MEESTAR dimensions show several intersections with the legal requirements and the target users, as shown in Fig. <ref>: * caring for users with different limitations in skills and capabilities, * giving these users the possibility of an autonomous interaction with automated production systems, * fulfilling standards for safety and justice, by addressing employers' corporate duties by law, * sensitively approaching the employees' right to privacy according to legal requirements, by treating personal data with dignity and respect. Thus, the following technical aspects that need to be taken into account can be derived: i) occupational health, ii) occupational safety, iii) data protection, iv) ergonomic workplace design, v) equal opportunities and vi) reintegration. Specifically, occupational safety and health is an interdisciplinary field, concerning the safety and health of a working person in an occupational system, protecting him/her from working hazards <cit.>, in accordance with the MEESTAR dimensions safety and justice. Also ergonomic workplace design, as a subtask of occupational safety and health promotion, belongs to this category. Under EU law, data can only be processed under strict conditions, because everybody has a right to the protection of personal data <cit.>, which corresponds to the MEESTAR dimension privacy. From the perspective of the target users, who have special characteristics and therefore differences in perception, cognition and motor skills, equal treatment and integration into working processes are required. Thus, care about their capabilities and the autonomous use of automated machines are the main topics in this case.

§ ASSESSMENT OF THE ELSI CONCEPT

To assess the discussed dimensions of the ELSI concept, a questionnaire was developed to investigate the appropriateness of the identified dimensions in the considered scenario, namely that of an inclusive complex human-machine system accessible to special user groups, with special needs and requirements. To make the participation in the questionnaire more effective, we considered a specific working context where affective computing is applied to an industrial human-machine system, thus measuring the operator's mental workload, stress and induced anxiety by recording some physiological signals. Specifically, the questionnaire included questions regarding the following scenario: “The working machines are equipped with sensors that are able to track the strain of a working person by real-time measurement of his/her physiological parameters, e.g. heart rate, blood pressure, etc. If the measured strain indicators are too high, the human-machine interface adapts to the situation, resulting in a lower stress level.” The questionnaire was distributed to all members of the INCLUSIVE consortium, to consider all relevant stakeholders that are affected.
Seven partners participated in the study, and participants were employed at companies in the following sectors: IT, technology transfer, industrial automation, white goods, packaging and bottling. Each participant in the study was asked whether he/she saw potential for improvement or risks in measuring the strain of a working person, according to each of the dimensions of the ELSI concept, namely occupational health, occupational safety, data protection, ergonomic workplace design, equal opportunities and reintegration. Table <ref> lists in detail all the potentials for improvement and the risks mentioned by the participants in the study. In particular, when designing the HMI, it has to be taken into account that the complexity resulting from the adaptive HMI behavior must not induce strain itself. In addition, the system must implement effective anonymization of personal user data; otherwise, there would be the risk that performance assessment, for instance, leads to a termination of employment. Moreover, the system should ensure that nobody is discriminated against. According to respondents' answers, the supporting system should also ensure that the users respect safety regulations. Here, the system should meet relevant safety criteria and, if a false impression of safety occurs, call the user's attention to it. The measuring system should also ensure that the user is not distracted while working and that there is no risk of stumbling. According to the doubts of some participants, the system should in no case cause injury to health by means of inductive measuring technology.

§.§ ELSI requirements

The findings reported in Table <ref> allow us to derive the ELSI requirements, which have general validity and thus apply to any user-centred human-machine system that relies on affective computing for including vulnerable users. Specifically, the derived design recommendations for ethical, social and legal aspects are the following: * The system prevents inducing strain itself. * The system considers anonymized personal data. * The system does not use collected data to the disadvantage of the employee. * The system depicts relevant user requirements and prevents discrimination. * The system meets all relevant safety criteria. * The system does not distract the operator. * The system does not cause injuries by means of inductive measuring technology.

§ CONCLUSION

In this paper we presented a methodological approach to the design of complex human-machine systems that adapt to the operator's skills and capabilities, complementing their limitations, while taking full advantage of their knowledge. Specifically, the proposed approach aims at guiding the design of HMIs that can be effectively used by vulnerable operators, such as those with low experience or education level, the elderly and the disabled. To this end, we defined a set of technical requirements and requirements related to ethical, legal and social implications. The technical requirements were derived from the analysis of the industrial use cases considered in the European project INCLUSIVE; they abstract what should be fulfilled in order to allow also vulnerable users to access a complex automatic machine or robotic cell. As regards the ethical, legal and social requirements, they were derived by combining the MEESTAR approach with the specific legal issues for occupational systems and requirements of the target users. Such requirements were then validated in the context of the INCLUSIVE project.
§ ACKNOWLEDGEMENT

The research is carried out within the “Smart and adaptive interfaces for INCLUSIVE work environment” project, funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement N°723373. The authors would like to thank the industrial partners responsible for the use cases for providing a description of the use cases and the selected working scenarios.
http://arxiv.org/abs/1706.08461v1
{ "authors": [ "Lorenzo Sabattini", "Valeria Villani", "Julia N. Czerniak", "Alexander Mertens", "Cesare Fantuzzi" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20170626163020", "title": "Methodological Approach for the Design of a Complex Inclusive Human-Machine System" }
Beamforming and Scheduling for mmWave Downlink Sparse Virtual Channels With Non-Orthogonal and Orthogonal Multiple Access Alessandro Brighente and Stefano Tomasin This work has been supported in part by Huawei Technologies, Milan, Italy. Department of Information Engineering, University of Padova, via Gradenigo 6/B, 35131 Padova, Italy. Email: [email protected], [email protected]
============================================================

We consider the problem of scheduling and power allocation for the downlink of a 5G cellular system operating in the millimeter wave (mmWave) band and serving two sets of users: fixed-rate (FR) users, typical of device-to-device (D2D) communications, and variable-rate (VR) users, associated with high data rate services. The scheduling objective is the weighted sum-rate of both FR and VR users, and the constraints ensure that active FR users get the required rate. The weights of the objective function provide a trade-off between the number of served FR users and the resources allocated to VR users. For mmWave channels the virtual channel matrix obtained by applying fixed discrete-Fourier transform (DFT) beamformers at both the transmitter and the receiver is sparse. This results in a sparsity of the resulting multiple access channel, which is exploited to simplify scheduling, first establishing an interference graph among users and then grouping users according to their orthogonality. The original scheduling problem is solved using a graph-coloring algorithm on the interference graph in order to select sub-sets of orthogonal VR users. Two options are considered for FR users: either they are chosen orthogonal to VR users or non-orthogonal. A waterfilling algorithm is then used to allocate power to the VR users. 5G; Beamforming; mmWave; Non-orthogonal Multiple Access; Scheduling.

§ INTRODUCTION

MmWave transmission systems operating with multiple users will pose many design challenges, related to the need of precise beamforming to overcome the strong attenuation at GHz frequencies <cit.>. On the other hand, the use of mmWave has been advocated for fifth-generation (5G) systems in order to satisfy the ten-fold increase in achievable data rate with respect to the previous mobile communication generation: in this scenario the use of extremely high-frequency bands becomes even more problematic due to the presence of multiple users and the need to design suitable multiple access schemes. From a paradigm based on orthogonality among users to maximize the sum rate – mainly pursued in 4G systems – the 5G trend is now shifting back to non-orthogonal multiple access (NOMA) schemes, better suited to the heterogeneous devices envisioned in the 5G network, e.g., sensors and actuators of the Internet of Things (IoT) <cit.>. All these problems are further complicated by the use of a huge number of antennas, the massive MIMO regime, again proposed for 5G systems in order to propel data rate and cell user density. In this context, a radical review of both beamforming and scheduling is needed, taking into account also the need of keeping both the signaling exchange on air at a minimum (for a higher data rate) and the signal processing complexity under control, especially on the terminal side.
Focusing on a downlink scenario where a base station (BS) equipped with many antennas aims at serving multiple mobile terminals (MTs) on the mmWave band, many solutions are available in the literature for beamforming and scheduling. For example, orthogonal user transmission can be considered in order to maximize the rate, thus (regularized) zero-forcing beamforming can be used at the BS, together with water-filling power allocation and scheduling <cit.>. Similar approaches include diagonalization and block-diagonalization of the channel <cit.>. Indeed, it has been shown that in a massive MIMO regime, eigenbeamforming and matched filter precoders are optimal, and the transmit power can be made arbitrarily small <cit.>. However, these results leverage orthogonality among users and have as target the maximization of the sum rate. These assumptions must be revised in 5G systems used also for device-to-device (D2D) communication in the IoT context, typically having FR requirements: for these services the target becomes serving the largest number of users, providing them the required fixed data rate. In the NOMA context, two approaches have been proposed in the literature: superposition coding and successive interference cancellation. With the first approach users suffer from the interference but can still decode the intended signal as long as the signal-to-interference-plus-noise ratio (SINR) is above a given threshold; with the latter approach the interfering signal is decoded and then canceled before the useful signal is decoded. For superposition coding a large literature is available, for the maximization of the weighted sum-rate and for energy-efficient solutions (see <cit.> for surveys). In <cit.> beamforming and power allocation have been designed using as metrics the sum rate and outage probability of MIMO-NOMA systems. In <cit.> optimal beamforming and power allocation solutions have been derived for the weighted sum-rate maximization under a total power constraint and for a max-min balancing of user rates under a total power constraint: in this work, however, no FR user is present. A simple approach has been proposed in <cit.>, where FR and VR users are allocated orthogonal resources, then using further orthogonal resources for the VR users and non-orthogonal resources for the FR users. In <cit.> a scheduling scheme considering also the pilot overhead needed for channel estimation, adapted to the user channel characteristics, has been considered.

In this report we focus on the problem of scheduling and power allocation for the mixed user characteristics of <cit.>: first, we formulate the problem of maximizing the weighted sum-rate of both FR and VR users, with constraints ensuring that active FR users get the required rate. This turns out to be a mixed integer programming problem, where the integer variables dictate which FR users are active, and continuous variables provide the power allocation for all users. The weights of the objective function provide a trade-off between the number of served FR users and the resources allocated to variable-rate users. We then exploit the peculiarity of mmWave channels, which can be suitably represented as a sparse matrix in a dual domain <cit.>. The sparse channel is obtained by applying fixed discrete Fourier transform (DFT) beamformers at both the transmitter and the receiver, thus providing a low-complexity implementation.
On the other hand, the sparsity of the resulting multiple access channel is exploited to simplify scheduling, first establishing an interference graph among users and then grouping users according to their orthogonality. In particular, the original scheduling problem is solved using a graph-coloring algorithm on the interference graph in order to select sub-sets of orthogonal VR users. Two options are considered for FR users: either they are chosen orthogonal to VR users or they are allowed to interfere with the other set. A waterfilling algorithm is then used to allocate power to the VR users, while the power needed to achieve the required rate is allocated to FR users. The rest of the report is organized as follows. Firstly, we derive the system model for the described scenario and give a mathematical description of the power allocation and scheduling problem. Secondly, we propose three solutions: the direct power allocation and scheduling, a solution based on graph coloring (GC), where both VR and FR users are clustered in different sets in order to eliminate interference, and a PGC solution, where GC is applied only to VR users in order to eliminate interference among them. Lastly, we analyse the performance of the proposed solutions in terms of achievable sum-rate (ASR) when the optimization target moves from VR users to FR users, and show how the number of antennas at the BS impacts the proposed solutions.

§ SYSTEM MODEL

We consider a massive MIMO downlink transmission from a BS with a large number N_a of antennas organized in a uniform planar array (UPA) to a set of N_u single-antenna users, with N_u ≪ N_a. Assuming an additive white Gaussian noise (AWGN) flat-fading channel model between each antenna pair, the resulting N_u × N_a channel matrix is denoted as H. For a transmission in the mmWave band, each BS-user link is described by L paths, with L typically being small. Moreover, considering the UPA configuration at the BS and the single-antenna MT, as derived in <cit.>, the row channel vector h_k of user k can be transformed into a virtual channel vector h_v,k by the transformation

h_k = h_v,k U_t^*,

where U_t is the N_a × N_a unitary DFT matrix. By stacking the rows h_v,k into the matrix H_v we can write the channel matrix as

H = H_v U_t^*.

When the number of antennas at the BS tends to infinity, the virtual channel matrix H_v becomes sparse, asymptotically having L · N_u non-zero entries associated to the L paths of each one of the N_u users. Also, when the angles of departure are aligned with the DFT directions <cit.>, H_v again becomes sparse with L · N_u non-zero complex entries, with probability density function

p_[H_v]_j,k(a) = (1-η)δ(a) + η/(πσ_L^2) exp(-|a|^2/σ_L^2),

where η = L/N_a is the probability that a virtual channel matrix entry is different from zero, δ(·) is the delta function and σ_L^2 is the average channel gain. The received signal is affected by complex AWGN, independent at each user, with zero mean and variance σ^2.

We consider two different types of users: * K VR users having as target the maximization of their ASR; * N FR users aiming at being active, i.e., transmitting with a fixed data rate R_fix, corresponding to a fixed SINR γ_fix with R_fix = log_2(1+γ_fix). The total number of users is K+N = N_u. An FR user assumes only two possible states: transmission at the fixed rate imposed by the SINR requirement, or turned off. In particular, the first K users are the VR users, while the next N users are the FR users. We denote by p_j, j=1, …, N_u, the power assigned to user j. We also collect all user powers into the N_u column vector p.
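As a concrete illustration of this model, the following minimal Python sketch (function names and parameter values are ours) draws a virtual channel matrix with i.i.d. entries distributed as in (<ref>) and then builds the gain matrix G that will be introduced in the next section, where ℓ_k is the strongest virtual antenna of user k and G_j,k = |[h_v,k]_ℓ_j|^2.

```python
import numpy as np

def sample_virtual_channel(n_users, n_antennas, L, sigma_L2=1.0, rng=None):
    """Draw an N_u x N_a virtual channel H_v with i.i.d. entries: zero with
    probability 1 - eta (eta = L / N_a), complex Gaussian with zero mean and
    variance sigma_L^2 otherwise, as in the density above."""
    rng = np.random.default_rng(rng)
    eta = L / n_antennas
    mask = rng.random((n_users, n_antennas)) < eta
    entries = rng.normal(size=(n_users, n_antennas)) \
        + 1j * rng.normal(size=(n_users, n_antennas))
    return mask * entries * np.sqrt(sigma_L2 / 2)

# Gain matrix of the next section: l_k = argmax_i |[h_{v,k}]_i|^2 and
# G[j, k] = |[h_{v,k}]_{l_j}|^2 (diagonal: useful gains; off-diagonal:
# interference gains).
Hv = sample_virtual_channel(n_users=7, n_antennas=200, L=100, rng=0)
l = np.argmax(np.abs(Hv) ** 2, axis=1)   # strongest virtual antenna per user
G = (np.abs(Hv[:, l]) ** 2).T
```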
We consider a total power constraint, i.e.,

p_k ≥ 0, ∑_k=1^N_u p_k ≤ 1.

According to the beamformer adopted by the BS, as described in the following section, the resulting SINR for user k = 1, …, N_u is denoted as γ_k(p), depending on the power allocation. For the beamforming and scheduling problem we aim at maximizing the weighted ASR

ℛ(p) = ∑_k=1^K w_k log_2(1+γ_k(p)) + ∑_k=K+1^K+N ρ_k x_k R_fix,

where ρ_k is the weight of the ASR of FR user k, w_k is the weight of the ASR of VR user k, and x_k is a binary variable which assumes value 1 if the considered FR user is active and 0 otherwise.

§ BEAMFORMING AND SCHEDULING

Considering the properties of the virtual channel, and the fact that it can be obtained by applying the inverse DFT matrix at the BS, we adopt U_t as the BS transmit beamformer matrix; therefore, the equivalent channel seen by the users is the sparse virtual channel H_v, having N_a virtual transmit antennas and N_u virtual receive antennas. We will use one virtual transmit antenna per user; in particular, for user k we select in h_v,k the entry with the largest gain, i.e.,

ℓ_k = argmax_i |[h_v,k]_i|^2,

and we indicate with G_k,k = |[h_v,k]_ℓ_k|^2 the corresponding gain. With this choice, user k will suffer from the interference of all other user signals, based on his virtual channel h_v,k, i.e., the gain of the interference channel from user j to user k is

G_j,k = |[h_v,k]_ℓ_j|^2,

corresponding to the use of virtual antenna ℓ_j by user j through the channel of user k. Therefore, the N_u × N_u matrix G having entries G_j,k defines the useful gains and interference gains for all users. With this choice, user k experiences the SINR

γ_k(p) = G_k,k p_k / (∑_j ≠ k G_j,k p_j + σ^2).

Let us consider the weighted ASR maximization problem with variables p_k and x_k. Our aim is to select a subset of FR users and to allocate power to both VR users and active FR users in order to maximize the weighted ASR. Let x = [x_1, …, x_K+N]. The problem can be modeled as

max_{p, x} ∑_k=1^K w_k log_2(1 + G_k,k p_k / (∑_j=1, j≠k^K+N G_j,k p_j x_j + σ^2)) + ∑_k=K+1^K+N ρ_k R_fix x_k
s.t. (<ref>) and
G_k,k p_k / (∑_j=1, j≠k^K+N G_j,k p_j x_j + σ^2) ≥ γ_fix, k = K+1, …, K+N,
x_k = 1, k = 1, …, K,
x_k ∈ {0, 1}, k = K+1, …, K+N.

This problem of joint power allocation and scheduling is a mixed-integer optimization problem, which belongs to the NP-hard class of problems. We will hence first analyse the direct solution, and then discuss two different methods that can reduce the computational complexity of the direct solution while ensuring the same weighted ASR value.

§.§ Direct solution

In order to solve the mixed-integer programming problem we divide it into two problems: the problem of finding the set of active users (thus solving for x) and the problem of allocating the power (thus solving for p). In particular, we exhaustively consider all possible sets of active FR users, and for each set we solve the power allocation problem (if feasible) maximizing the weighted ASR. For each set of active FR users x the resulting power allocation problem is hence

max_p ∑_k=1^K w_k log_2(1 + G_k,k p_k / (∑_j=1, j≠k^K+N G_j,k p_j x_j + σ^2)) + ∑_k=K+1^K+N ρ_k R_fix x_k
s.t. (<ref>), (<ref>) and (<ref>).

We notice that the resulting optimization problem is non-convex due to the argument of the log function of (<ref>), and we approximate the target function to make it convex.
In particular, we consider a high-SINR regime and perform the change of variables q_k = ln p_k, obtaining the convex maximization problem

max_q ∑_k=1^K w_k log_2(G_k,k e^q_k / (∑_j=1, j≠k^K+N G_j,k e^q_j x_j + σ^2)) + ∑_k=K+1^K+N ρ_k R_fix x_k
s.t. e^q_k ≥ 0, ∑_k=1^K+N e^q_k ≤ 1,
G_k,k e^q_k / (∑_j=1, j≠k^K+N G_j,k e^q_j x_j + σ^2) ≥ γ_fix, k = K+1, …, K+N.

The resulting maximization problem is hence convex and can be solved with standard optimization tools. We then pick the set of active users for which the problem is feasible and provides the maximum weighted ASR.

§.§ GC solution

The second solution aims at reducing the computational complexity of the optimization problem by clustering the users into different groups, within which users transmit without interference. Then we transmit only to one set of non-interfering users, allocating powers in order to maximize the weighted ASR. The set of active users is the one that maximizes the weighted ASR among all sets. In order to perform the clustering operation we use the interference matrix G. We notice that, since the mmWave virtual channel is sparse, the interference matrix, computed over the matrix G of the gains of the virtual channel, will have mostly small or zero off-diagonal entries. In particular, as the ratio N_a/N_u of transmit and receive antennas tends to infinity, the interference matrix itself will become sparse. Indeed, in sparse massive-MIMO channels the channel vectors associated to different users become orthogonal as the number of antennas goes to infinity <cit.>. Our system behaves in a similar manner. Considering the virtual channel model with L non-zero paths and a growing number of antennas at the BS, we have that, asymptotically, users do not interfere. Therefore the interference matrix becomes a diagonal matrix, and the graph colouring leads to all users labelled with the same colour.

From the interference matrix we build the interference graph, i.e., a directed graph composed of a node for each user and with edges connecting interfering users: edges are directed from the interfering user toward the user suffering the interference. No edge from node k to node n is present if and only if G_k,n = 0. Nodes are then clustered into groups of non-interfering users: this corresponds to coloring the graph, where each color represents a group and nodes with the same color must not be connected by edges. We aim at finding the minimum number of colors needed for the graph, and we resort to the solution proposed in <cit.>. Fig. <ref> shows an example of interference graph obtained for a particular realization of the interference matrix, where, after graph colouring, each node (corresponding to user U_k) has been assigned a colour, red or green.

Since users having the same color do not interfere, assuming transmission only to users of the same color, the SINR of user k can be written as

γ_k = G_k,k p_k / σ^2,

which coincides with the SNR. Let 𝒞 be the set of colors, ℱ_c be the set of indices of FR users with color c ∈ 𝒞, and 𝒱_c be the set of VR users with color c. To select the subset of active FR users (with the same color) we resort to the exhaustive search approach, thus computing the weighted ASR corresponding to each active set. For the FR users the allocated power is

p_k = (γ_fix σ^2 / G_k,k) x_k, k ∈ ℱ_c.

For VR users, since no interference is present, the optimal power allocation strategy is provided by the waterfilling algorithm

max_{p_k, k ∈ 𝒱_c} ∑_k∈𝒱_c w_k log_2(1 + G_k,k p_k/σ^2)
s.t. p_k ≥ 0, ∑_k∈𝒱_c p_k ≤ 1 - ∑_k ∈ ℱ_c x_k p_k.
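To make the per-color procedure concrete, the following Python sketch colors the interference graph and evaluates the weighted ASR of one color class. It is only a simplified stand-in for Algorithm 1: a greedy coloring replaces the exact DSATUR-based algorithm of <cit.>, FR users are added greedily in order of increasing power demand instead of the exhaustive subset search, and the waterfilling assumes the equal VR weights w_k = 1 - ρ used later in the numerical results; all function names are ours.

```python
import numpy as np

def greedy_coloring(G):
    """Group users so that same-colored users do not interfere (an edge joins
    j and k whenever G[j, k] != 0 or G[k, j] != 0, j != k)."""
    n = G.shape[0]
    adj = (G != 0) | (G.T != 0)
    np.fill_diagonal(adj, False)
    color = -np.ones(n, dtype=int)
    for v in range(n):
        used = {color[u] for u in range(n) if adj[v, u] and color[u] >= 0}
        color[v] = next(c for c in range(n) if c not in used)
    return color

def waterfilling(gains, budget, sigma2):
    """Equal-weight waterfilling: maximize sum log2(1 + g_k p_k / sigma2)
    subject to sum p_k <= budget."""
    order = np.argsort(gains)[::-1]            # strongest channel first
    inv = sigma2 / gains[order]                # ascending inverse gains
    for m in range(len(gains), 0, -1):
        level = (budget + inv[:m].sum()) / m   # candidate water level
        if level >= inv[m - 1]:                # all m channels stay active
            break
    return np.maximum(level - sigma2 / gains, 0.0)

def color_class_asr(G, users, is_fr, gamma_fix, sigma2, rho, P=1.0):
    """Weighted ASR of one color class: FR users get the power meeting the
    SINR target (added while the budget allows), VR users waterfill the rest."""
    fr = sorted((k for k in users if is_fr[k]), key=lambda k: 1.0 / G[k, k])
    served, spent = [], 0.0
    for k in fr:
        need = gamma_fix * sigma2 / G[k, k]
        if spent + need <= P:
            served.append(k)
            spent += need
    vr = [k for k in users if not is_fr[k]]
    g = np.array([G[k, k] for k in vr])
    p = waterfilling(g, P - spent, sigma2) if vr else np.zeros(0)
    return ((1 - rho) * np.sum(np.log2(1 + g * p / sigma2))
            + rho * len(served) * np.log2(1 + gamma_fix))
```

Taking the maximum of color_class_asr over all color classes then reproduces the selection step summarized next.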
The graph-coloring solution is summarized in Algorithm 1.

§.§ PGC solution

The third solution applies grouping (i.e., GC) only to VR users, while the FR users are allowed to interfere with the VR users and among themselves. This allows a more flexible allocation of FR users, while at the same time, due to their low rate requests, we expect the resulting interference on the VR users to be reduced. Therefore, in this case we apply the GC only on the interference graph of the VR users, while we must perform the exhaustive search of the active FR users among all FR users. Therefore, for each color c ∈ 𝒞 we must solve the optimal scheduling and power allocation problem for the set of VR users 𝒱_c and for all FR users. With respect to the global direct approach we have reduced the set of VR users, but on the other hand we must run the optimization algorithm for each color.

§ NUMERICAL RESULTS

We now present numerical results obtained with the different solutions described in the previous sections. We consider N_u = 7 users, where K = 4 are VR users and N = 3 are FR users. We assume that the virtual channel matrix H_v is composed of L = 100 non-zero entries, distributed as in (<ref>); the SINR requirement for FR users is 5 dB, while AWGN leads to an average SNR of 17 dB. We start by verifying the assumption that the interference matrix is sparse, due to the large number of BS antennas. Let us consider the matrix D, defined as the matrix containing the off-diagonal elements of G, i.e., D = G - diag(G). Fig. <ref> shows the average number of non-zero entries of D vs the number of antennas at the BS. We observe that the average number of non-zero entries sharply decreases as the number of antennas increases, and at N_a = 200 we have on average about 3 non-zero off-diagonal entries. In the following we consider N_a = 200.

We now consider the scheduling and power allocation algorithms described in the paper. The weight w_k has been chosen equal to 1-ρ for each VR user, i.e., w_k = 1-ρ ∀ k ∈ {1, …, K}, with ρ ∈ [0, 1]. With this choice we do not distinguish among the users within the two sets, but only use ρ to balance the resources used for VR and FR users. In particular, when ρ ≈ 0, we maximize the ASR of VR users, whereas for ρ ≈ 1 we maximize the ASR of FR users. Therefore we consider as performance metric the average ASR for both VR and FR users. Fig. <ref> shows the average (over various channel realizations) ASR vs. ρ for the different scheduling and power allocation solutions. The direct algorithm is denoted as direct, and then we have the GC and PGC algorithms. As we expect, with increasing values of ρ, the VR users' ASR decreases while the FR users' ASR increases. Note also that even if ρ = 1 we still have a non-zero ASR for VR users, since all FR users are served and the remaining power is allocated to VR users. We notice that the direct solution to power allocation and scheduling is well approximated by both the GC and PGC solutions for VR users, whereas, for FR users, the PGC solution is the best performing. The worse ASR performance for FR users in the GC solution is due to the fact that FR users could be labelled with different colours and hence cannot transmit together. Notice that the GC solution for VR users outperforms the direct solution. This is due to the fact that, even if the maximization problem is focused on the FR users' ASR, they could be labelled with different colours, and hence in the GC solution they cannot all transmit together, resulting in a power allocation for VR users with higher values than in the direct solution.
This results in higher ASR values for VR users when ρ tends to 1. § CONCLUSIONS In this work we considered a mmWave massive MIMO system and solved the problem of power allocation and user scheduling when two different sets of users transmit: VR users and FR users with a required SINR level for transmission. We analysed the direct solution and proposed two algorithms, GC and PGC, that present similar ASR performance for both VR users and FR users while reducing the computational complexity. We then analysed the performance of all the proposed solutions in terms of ASR when the weight ρ (and hence w_k) assigned to VR users changes, and showed that the proposed algorithms achieve essentially the same performance as the direct solution. We also showed that with an increasing number of antennas at the BS the interference matrix becomes diagonal, and hence that the low-complexity GC solution achieves optimal results. 99 Sayeed-02 A. M. Sayeed, "Deconstructing multiantenna fading channels," IEEE Transactions on Signal Processing, vol. 50, no. 10, pp. 2563-2579, Oct. 2002. Evans J. Papandriopoulos and J. S. Evans, "Low-Complexity Distributed Algorithms for Spectrum Balancing in Multi-User DSL Networks," in Proc. IEEE International Conference on Communications, vol. 7, pp. 3270-3275, June 2006. Segundo P. S. Segundo, "A new DSATUR-based algorithm for exact vertex coloring," Computers and Operations Research, no. 39, pp. 1724-1733, 2012. Schniter J. Mo, P. Schniter, N. G. Prelcic and R. W. Heath Jr., "Channel Estimation in Millimeter Wave MIMO Systems with One-Bit Quantization," in Proc. 48th Asilomar Conference on Signals, Systems and Computers, pp. 957-961, Nov. 2014. Gao Z. Gao, L. Dai, C. Yuen, Z. Wang, "Asymptotic Orthogonality Analysis of Time-Domain Sparse Massive MIMO Channels," IEEE Communications Letters, vol. 19, no. 10, Oct. 2015. Erkip-14 S. Rangan, T. S. Rappaport and E. Erkip, "Millimeter-Wave Cellular Wireless Networks: Potentials and Challenges," Proceedings of the IEEE, vol. 102, no. 3, pp. 366-385, Mar. 2014. Gupta-15 A. Gupta and R. K. Jha, "A Survey of 5G Network: Architecture and Emerging Technologies," IEEE Access, vol. 3, pp. 1206-1232, 2015. Wong-03 K. K. Wong, R. D. Murch, and K. B. Letaief, "A joint-channel diagonalization for multiuser MIMO antenna system," IEEE Trans. Wireless Commun., vol. 2, pp. 773-786, Jul. 2003. Spencer-04 Q. H. Spencer, A. L. Swindlehurst, and M. Haardt, "Zero-forcing methods for downlink spatial multiplexing in multiuser MIMO channels," IEEE Trans. Signal Process., vol. 52, no. 2, pp. 461-471, Feb. 2004. Yang-13 H. Yang and T. L. Marzetta, "Performance of Conjugate and Zero-Forcing Beamforming in Large-Scale Antenna Systems," IEEE Journal on Selected Areas in Communications, vol. 31, no. 2, pp. 172-179, Feb. 2013. Hoydis-13 J. Hoydis, S. ten Brink and M. Debbah, "Massive MIMO in the UL/DL of Cellular Networks: How Many Antennas Do We Need?," IEEE Journal on Selected Areas in Communications, vol. 31, no. 2, pp. 160-171, Feb. 2013. Fang-16 F. Fang, H. Zhang, J. Cheng and V. C. M. Leung, "Energy-Efficient Resource Allocation for Downlink Non-Orthogonal Multiple Access Network," IEEE Transactions on Communications, vol. 64, no. 9, pp. 3722-3732, Sept. 2016. Shi-08 S. Shi, M. Schubert and H. Boche, "Rate Optimization for Multiuser MIMO Systems With Linear Processing," IEEE Transactions on Signal Processing, vol. 56, no. 8, pp. 4020-4030, Aug. 2008. Ding-16 Z. Ding, F. Adachi, and H. V.
Poor, "The application of MIMO to non-orthogonal multiple access,"IEEE Trans. Wireless Commun., vol. 15, no. 1, pp. 537-552, Jan. 2016. Zte Wei Zhiqiang, Yuan Jinhong, Derrick Wing Kwan Ng, Maged Elkashlan, and Ding Zhiguo, "A Survey of Downlink Non-Orthogonal Multiple Access for 5G Wireless Communication Networks", ZTE Communications, Vol.14 No. 4, Oct. 2016. Nassar1 N. Ksairi, S. Tomasin and M. Debbah, "A Multi-Service Oriented Multiple Access Scheme for M2M Support in Future LTE," IEEE Communications Magazine, vol. 55, no. 1, pp. 218-224, Jan. 2017. Nassar2 N. Ksairi, S. Tomasin and M. Debbah, "A multi-service oriented multiple-access scheme for next-generation mobile networks," in Proc. 2016 European Conference on Networks and Communications (EuCNC), Athens, 2016, pp. 355-359.NaspatentN. Ksairi, S. Tomasin, and B. Tomasi, "Systems and Methods for Scheduling of Resources and Pilot Patterns to User Terminals in a Multi-User Wireless Network," Patent PCT/EP2016/052202, Filed on February 2, 2016.
http://arxiv.org/abs/1706.08745v1
{ "authors": [ "Alessandro Brighente", "Stefano Tomasin" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170627092319", "title": "Beamforming and Scheduling for mmWave Downlink Sparse Virtual Channels With Non-Orthogonal and Orthogonal Multiple Access" }
Deep Semantics-Aware Photo Adjustment Seonghyeon Nam, Seon Joo Kim December 30, 2023 ======================================================================= Automatic photo adjustment aims to mimic the photo retouching style of professional photographers and automatically adjust photos to the learned style. There have been many attempts to model the tone and the color adjustment globally with low-level color statistics. Also, spatially varying photo adjustment methods have been studied by exploiting high-level features and semantic label maps. Those methods are semantics-aware since the color mapping depends on the high-level semantic context. However, their performance is limited by the pre-computed hand-crafted features, and it is hard to reflect the user's preferences in the adjustment. In this paper, we propose a deep neural network that models the semantics-aware photo adjustment. The proposed network exploits bilinear models that are the multiplicative interaction of the color and the contextual features. As the contextual features we propose the semantic adjustment map, which discovers the inherent photo retouching presets that are applied according to the scene context. The proposed method is trained using a robust loss with a scene parsing task. The experimental results show that the proposed method outperforms the existing method both quantitatively and qualitatively. The proposed method also provides users with a way to retouch photos to their own liking by supplying customized adjustment maps. § INTRODUCTION With the growing number of digital cameras, especially in smartphones, photo retouching software has become popular among amateur photographers. As captured photos are usually flat, many people want to adjust the tone and the color of their photos to make the pictures look visually more impressive and even stylized. However, photo retouching is a hard task for amateur users without expertise in photo editing. Additionally, retouching a large photo collection requires extensive human labor. For this reason, many techniques for automatic photo adjustment have been widely studied. Automatic photo adjustment enhances a photo's tone and color to be visually more pleasing without human actions. In automatic photo retouching, the output styles mimic the photo styles of professional photographers. Several methods have been proposed to adjust the contrast/brightness and the color/saturation of photos <cit.> based on low-level color histograms and the brightness and contrast of images. However, those methods adjust photos globally by applying the same color mapping to all pixels in an image. Note that most photographers prefer locally varying adjustments in their work. Some works have focused on spatially varying photo adjustment that exploits high-level scene contexts based on object features and saliency <cit.>. In <cit.>, the authors use a feed-forward neural network to learn the semantics-aware photo adjustment styles of professional photographers. In semantics-aware photo adjustment, the tone and the color mapping depend on the scene context, i.e., a local region of a given image. The authors proposed multi-scale pooling features of the semantic label map to model the context dependency. However, the work uses hand-designed features, and it is unclear whether these hand-designed features, based on an inaccurate semantic label map, are optimal.
In addition, the learned representation of the method is not disentangled, and therefore users cannot control the adjustment according to their own preferences. In this paper, we propose a deep neural network (DNN) that learns the representation of the semantics-aware photo adjustment in an end-to-end manner. While we make use of the dataset from <cit.>, we approach the problem in a different way. First, the proposed network is trained in an end-to-end manner so that it fits the data better. Our network is a bilinear model where the color and the contextual information interact in a multiplicative way. We exploit multi-scale convolutional neural network (CNN) features to characterize pixel-wise contextual features. Unlike <cit.>, the contextual features are learned within the network in an end-to-end manner. To efficiently train the network, we make use of a robust loss function and multi-task learning with a scene parsing task. Second, as another type of contextual feature, we introduce a semantic adjustment map. The semantic adjustment map is a binary segmentation map that discovers the photo retouching presets which vary according to the semantic contexts. The network automatically disentangles different types of presets from the original in an unsupervised manner and adjusts images accordingly. By doing so, we can better understand the photo retouching styles and use the discovered presets to adjust photos according to each user's preference. Note that our photo adjustment framework is different from image style transfer <cit.>, which stylizes photos to look like artworks. Instead of focusing on the global modification of shapes and textures, we focus on the tone and the color manipulation of images. § RELATED WORKS There have been a number of studies on automatic photo adjustment. Several methods focus on global tonal adjustment <cit.>, color enhancement <cit.>, and personalized enhancement <cit.>. Those methods are global adjustment approaches based on hand-crafted low-level features such as the color histogram, the scene brightness, and the highlight clipping. In <cit.>, Kapoor et al. proposed a method that discovers clusters of users that have similar image enhancement preferences for personalized adjustment. While the concept of our method may be similar to those methods, the main difference is that we aim to discover the retouching presets that vary according to the local semantics. Hwang et al. <cit.> presented a locally varying photo enhancement method that is based on both low- and high-level contexts. Their method finds an appropriate color mapping from external images using pixel-wise contextual features. The work of Yan et al. <cit.> is closely related to ours. The authors combine multiple hand-crafted features, including a multi-scale pooling of a scene parsing map, for semantics-aware color regression. While the multi-scale pooling features were effective in modelling the semantics-aware photo adjustment, the performance is limited by the quality of the scene parsing map since the features are not trained in an end-to-end manner. Our method is also related to various deep learning based semantics-aware image processing methods. Tsai et al. <cit.> used a scene parsing deep network to localize a sky region and transfer a different style of sky from external images. In <cit.>, the authors propose a DNN for image harmonization, which is an encoder-decoder network that exploits high-level contextual features.
The DNN is jointly trained with a scene parsing task to improve the training. In contrast to <cit.>, our method does not rely on a segmentation mask and instead finds the inherent segmentation masks from the data. Deep learning based colorization methods <cit.> are also related to our work in that these methods make use of rich contextual features of CNNs to estimate the color of a pixel according to the scene context. Unlike those methods, we do not reconstruct missing color channels, and the color mapping of pixels is consistent within a semantic region. § METHOD §.§ Overview We define the semantics-aware photo adjustment problem as a regression problem. We want to find a regression model of the color mapping from the input color 𝐱 to the output color 𝐲 according to the semantic context that the input pixel belongs to. To this end, we propose a deep neural network that effectively learns the context dependent color mapping. fig:overview shows the overview of the proposed deep network. Our network is divided into two parts: a feature extraction network and a bilinear regression network. The feature extraction network is based on the ResNet-50 <cit.>, as shown in fig:overview (a). The contextual features of the ResNet-50 are effective for modelling the semantics-aware color mapping, since we can exploit low- to high-level pixel-wise features that are pretrained on a large dataset. However, those convolutional features only describe the local context. For better context modelling, the global context and the relative compositional context between scene objects would be useful. Therefore, we add a spatial RNN to extract those global and relative contexts. We adopt the ReNet <cit.>, which consists of 4 directional spatial RNN layers, followed by an additional 1×1 convolution. To avoid overfitting, we use a GRU <cit.> as the spatial RNN cell with batch normalization <cit.>. The bilinear regression network shown in fig:overview (b) estimates the output color given both the input color features and the contextual features. In the following, we describe the bilinear regression network in detail. §.§ Bilinear model Bilinear models are the multiplicative interaction of all elements between two vectors <cit.>. Formally, a bilinear model is defined as f_i = 𝐚^T𝐖_i𝐛 = ∑_jk a_j b_k W_ijk, where 𝐚, 𝐛 are feature vectors, and 𝐖_i is the interaction between the two vectors. In the semantics-aware photo adjustment, it is natural to think that the color mapping is determined by two factors; one is the color of a pixel and the other is the scene context that the pixel belongs to. Therefore, we use the bilinear model to represent the interaction between both factors. Since 𝐖∈ℝ^C × N × M is usually high-dimensional, we follow the low-rank bilinear pooling method of Kim et al. <cit.> to reduce the parameters. Based on the method, the output color ŷ is represented as ŷ = σ(𝐏^T(σ(𝐔^T𝐟_clr + 𝐛)∘σ(𝐕^T𝐟_cxt + 𝐜))+𝐝), where 𝐟_clr∈ℝ^N is the color feature, 𝐟_cxt∈ℝ^M is the context feature, 𝐏∈ℝ^d × c, 𝐔∈ℝ^N × d, 𝐕∈ℝ^M × d are the decomposition of 𝐖, and 𝐛∈ℝ^d, 𝐜∈ℝ^d, 𝐝∈ℝ^c are additional biases. ∘ is an element-wise multiplication and we use tanh as the nonlinear function σ. Note that ŷ is actually a residual since we add a skip connection between the input 𝐱 and ŷ: 𝐲 = ŷ + 𝐱. The method of Yan et al. <cit.> exploits an asymmetric form of bilinear model <cit.> by estimating affine transformation matrices to map quadratic color features to output colors.
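For illustration, a minimal NumPy sketch of the forward pass of the low-rank bilinear model above is given below; the dimensions (N = 67 for Lab color plus the 64 first-layer channels, and M, d, c) and the random initialization are placeholder assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d, c = 67, 512, 256, 3        # color/context/latent/output dims (illustrative)
U = rng.normal(0.0, 0.01, (N, d))   # decomposition of W, as in the equation above
V = rng.normal(0.0, 0.01, (M, d))
P = rng.normal(0.0, 0.01, (d, c))
b, c_bias, d_bias = np.zeros(d), np.zeros(d), np.zeros(c)

def bilinear_forward(f_clr, f_cxt, x_lab):
    """Low-rank bilinear pooling with tanh nonlinearities and a skip connection:
    y = tanh(P^T (tanh(U^T f_clr + b) * tanh(V^T f_cxt + c)) + d) + x."""
    h = np.tanh(U.T @ f_clr + b) * np.tanh(V.T @ f_cxt + c_bias)
    y_hat = np.tanh(P.T @ h + d_bias)            # predicted residual color
    return y_hat + x_lab                         # skip connection, y = y_hat + x

y = bilinear_forward(rng.normal(size=N), rng.normal(size=M),
                     np.array([50.0, 0.0, 0.0]))  # one CIELab pixel
```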
On the other hand, our method is more flexible and efficient in that our bilinear model learns the nonlinear interaction of the two features as well as both feature representations. In both cases, it is clear that merging the two features in a multiplicative manner is beneficial for semantics-aware photo adjustment. §.§.§ Color features We use the CIELab color space for both the input and output images. We could simply use the 3-channel Lab color as the color features. However, this generates color variations in smooth regions since each color is processed independently. To alleviate this issue, we add local neighborhood information by concatenating the Lab color and the L_2 normalized first-layer convolutional feature maps of ResNet-50. §.§.§ Contextual features Convolutional features We first take advantage of the multi-scale convolutional features. To generate pixel-wise features from the multi-scale feature maps, we adopt the sparse hypercolumn training method <cit.>, which requires far fewer parameters than deconvolutional approaches <cit.>. At training time, we generate many training signals by randomly sampling sparse pixels from the image for the backpropagation. When we are given a small dataset, we can exploit both low- and high-level features efficiently with this approach. We use the first 3 residual blocks for the hypercolumn, which have 256, 512, and 1024 channels, respectively. As mentioned, we additionally use spatial RNN features that have 1024 channels. We normalize each feature map by its L_2 norm, concatenate them, and squeeze the feature dimension to 512 by using a 1×1 convolution, as shown in option 1 of fig:overview (b). Semantic adjustment map As the convolutional features are unconstrained and smooth, they can represent rich scene contexts. However, the two real-valued bilinear features are highly correlated, and it is difficult to understand which factor contributes to a specific style of color mapping. It would be better if we could separate those factors, not only to interpret the retouching styles according to the scene contexts, but also to make use of those styles for our own taste. To this end, we generate K-channel binary maps, of which each channel is a binary segmentation map indicating where one of the retouching presets is applied. For each pixel, the one-hot vector 𝐟_cxt is a categorical random variable, which is defined as 𝐟_cxt = 𝐦∼ Cat({p(m_1=1|𝐱), p(m_2=1|𝐱), ..., p(m_K=1|𝐱)}), where 𝐦 is a one-hot vector sampled from a categorical probability density function p(m_k=1|𝐱). p(m_k=1|𝐱) is the probability of retouching a pixel 𝐱 using the k-th retouching preset. Similar to <cit.>, we reformulate our regression loss log p(𝐲|𝐱) using a variational lower bound technique, which is described as L_reg = log(p(𝐲|𝐱)) = log(∑_k=1^K p(m_k=1|𝐱)p(𝐲|m_k,𝐱)) ≥∑_k=1^K p(m_k=1|𝐱) log(p(𝐲|m_k,𝐱)) = 𝔼_p(m_k=1|𝐱)[log(p(𝐲|m_k,𝐱))]. In our task, K is typically small enough to compute the exact expectation if we assume that the pixels are independent of each other. In practice, however, it is likely that the optimization converges to a local minimum in which all retouching styles are assigned to one or two classes. This is because the number of training examples for each retouching style is imbalanced. In other words, the optimization is dominated by a few large classes such as the sky and the ground. In <cit.>, the authors use a class reweighting trick for class-balanced classification. Similarly, we multiply each of the K loss terms by a different weight to alleviate the issue.
In contrast to <cit.>, we multiply the loss terms of low-frequency classes by small weights so that small classes are easily discovered in spite of relatively small training signals. The weight is defined as 𝐰_t = α×𝐚_t + (1 - α), where α controls the contribution of the weight 𝐚 to the loss. 𝐚_t is the moving average of the normalized soft frequencies of the K classes, computed from t training batches and defined as a^k_t = 0.9 × a^k_t-1 + 0.1 ×1/P∑_i p^i_t(m_k=1|𝐱), where 1/P∑_i p^i_t(m_k=1|𝐱) is the average of p_t(m_k=1|𝐱) over all pixels in the t-th batch. Our final regression loss is formulated as L_reg = 𝔼_p(m_k=1|𝐱)[𝐰_t log(p(𝐲|m_k,𝐱))]. §.§ Huber loss To generate the ground truth of adjusted photos, photographers use a segmentation tool to localize the region of a specific object to retouch. Although they thoroughly follow the procedure, some outliers may exist around object boundaries due to incorrect segmentation. Also, the adjustment style of a photographer may not be consistent from one image to another. Therefore, the optimization of our deep network should be robust to such outliers. As a training objective, the L_2 loss is widely used in various color regression tasks <cit.>. However, DNNs easily overfit to outliers since the gradient of the L_2 loss is large for those outlier samples and the optimization is dominated by them. As an alternative to L_2, the Huber loss <cit.> is more robust to outliers; it is defined as L_huber(e) = {[ 1/2 e^2, |e| ≤δ,; δ (|e| - 1/2δ), |e| > δ, ]. where e is the error and δ is the changepoint between the two loss functions. The loss is quadratic for a small error |e| ≤δ, and linear for a large error |e| > δ. As the gradient of the linear part is always δ, the contribution of outliers to the optimization is reduced. §.§ Multi-task learning Unfortunately, obtaining a large labeled dataset for photo adjustment is not easy, since photo editing requires tremendous human labor. When the proposed network is trained on such a small dataset, it is highly likely to overfit to a few specific scene contexts. Since pixel-wise semantic information is the key to our semantics-aware photo adjustment, the overfitting is very severe and results in inconsistent color mappings. To mitigate this problem, we simultaneously train a scene parsing task with our task as a regularization, so that our deep network generalizes to any scene context. To train the scene parsing task, we use the SceneParse150 dataset <cit.>, which consists of 150 semantic categories. As depicted in fig:overview, we simply add a softmax layer on top of a contextual feature layer. Since our goal is not to build a good scene parsing network, this configuration is sufficient to regularize our main task. Our objective function then becomes L = L_reg + λ L_parse, where L_parse is the cross-entropy loss of the scene parsing task and λ is a regularization weight. §.§ Implementation We implemented the proposed method using TensorFlow running on a GeForce GTX 1080 GPU. With this setup, 500 epochs of training the network take only several hours. Data augmentation As the number of images in the dataset is small, data augmentation is essential. To generate more training data, we randomly rotate the input images from -10 to 10 degrees and flip them horizontally. We fill the empty space by repeating the pixel values of the image boundaries to keep the image dimension at 512×512. As mentioned, we adopt the sparse training method <cit.> that randomly samples a few pixels for the backpropagation.
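As a minimal sketch of these two training components, the snippet below implements the Huber loss with δ = 0.04 and the moving-average class weights w_t = α a_t + (1 − α) with α = 0.8; the batch layout is an assumption of ours:

```python
import numpy as np

def huber(e, delta=0.04):
    """Quadratic for |e| <= delta, linear with slope delta beyond it."""
    a = np.abs(e)
    return np.where(a <= delta, 0.5 * e ** 2, delta * (a - 0.5 * delta))

class ClassWeights:
    """Moving average a_t of the per-class mean probabilities p(m_k=1|x)."""
    def __init__(self, k, alpha=0.8):
        self.a = np.full(k, 1.0 / k)   # start from a uniform soft frequency
        self.alpha = alpha
    def update(self, probs):           # probs: (num_pixels, K) for one batch
        self.a = 0.9 * self.a + 0.1 * probs.mean(axis=0)
        return self.alpha * self.a + (1.0 - self.alpha)   # w_t, one per class
```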
By doing this, we can generate many training examples from a small dataset. In our implementation, we randomly choose 2048 pixels from an image for the sparse training. Hyperparameters We train the proposed network using the Adam <cit.> optimization method with a learning rate of 1e-4 and a batch size of 4. The ResNet-50 layers are finetuned with a 0.5× lower learning rate. We set α for training the semantic adjustment map to 0.8, δ of the Huber loss to 0.04, and λ of the cross-entropy loss of the scene parsing task to 0.01 after cross-validation. Determining the optimal number K is difficult as it is an unsupervised clustering problem. In our experiments, we found that 2, 4, and 2 for Foreground Pop-Out, Local Xpro, and Watercolor are sufficient for both the quantitative and qualitative results. § EXPERIMENTS §.§ Dataset As mentioned, we use the dataset from <cit.>, which is the only publicly available dataset for semantics-aware photo adjustment. It contains 115 images from Flickr, of which the larger dimension is 512 pixels. In <cit.>, the authors select 70 images for training and the remaining 45 images for testing. We use the same training and testing sets for a fair comparison. However, we additionally choose 10 images from the training set for validation. Therefore, our training set is actually smaller than that of <cit.>. In the dataset, there are 3 types of photo adjustment effects: Foreground Pop-Out, Local Xpro, and Watercolor. For the Foreground Pop-Out effect, the contrast and the color saturation of foreground salient objects are increased while those of background objects are decreased. The Local Xpro effect changes the brightness/contrast and the color of objects according to predefined profiles for each semantic category. The adjustment of Watercolor is similar to that of Foreground Pop-Out except for an additional brush effect. In <cit.>, the authors emulated the brush effect using superpixel segmentation <cit.>. As our objective is to model the spatially varying color mapping, not texture, we follow the same procedure as <cit.> for the brush effect. §.§ Baselines To show the effectiveness of the proposed method, we compare it with the method of Yan et al. <cit.>. As mentioned, we use the same training and testing sets as described in <cit.> except for the validation set. We also compare various design choices of the proposed method. For ease of reading, we name the proposed deep network the Semantics-Aware Adjustment Network (SA-AdjustNet), and we compare several variations of the SA-AdjustNet: SA-AdjustNet+MSE, SA-AdjustNet+Huber, SA-AdjustNet+Huber+MT, and SA-AdjustNet+Huber+MT+S. Each suffix after the name is the variation applied. MSE and Huber refer to the type of regression loss function, MT is the multi-task learning, and S indicates that the network uses the semantic adjustment map as the contextual features. The networks without S use the convolutional features instead of the semantic adjustment map. §.§ Experimental results Quantitative analysis table:quantitative shows the quantitative results of the proposed method. The values in the table are L_2 distances in the Lab color space. In most cases, the performance of the SA-AdjustNet is better than that of the method of <cit.>, since both the color and the contextual features of our method are jointly trained with the bilinear regression network. As shown in the table, the Huber loss and the multi-task learning are both effective for regularizing the training of the proposed network.
For the SA-AdjustNet+Huber+MT+S, the performance is competitive with that of the SA-AdjustNet+Huber+MT for Foreground Pop-Out and Watercolor, since the foreground and the background are balanced. However, the classes in the Local Xpro effect are diverse and imbalanced, and the optimal clustering is more difficult even if we use the class reweighting. Qualitative analysis fig:qualitative shows some of the qualitative results from the test set. Each row of the figure shows one of the 3 kinds of photo adjustment styles: Foreground Pop-Out, Local Xpro, and Watercolor. In most cases, the adjusted images using the proposed method are more visually pleasing and closer to the ground truth than those of Yan et al. <cit.>. As shown in the house of the 3rd row of fig:qualitative, the inconsistent color variation due to incorrect segmentation is clearly reduced. fig:semantic_adjustment_map shows some examples of the semantic adjustment map. The proposed network effectively discovers the inherent photo retouching styles. However, the semantic adjustment maps are discrete, which results in abrupt changes of color around incorrect semantic boundaries, as shown at the head of the man in fig:semantic_adjustment_map. This problem could be mitigated by considering neighborhood dependent models such as conditional random fields. §.§ Application: personalization of semantics-aware photo adjustment Although the proposed method provides users with automatically adjusted photos, some users may want their photos to be retouched according to their own preferences. In the first row of fig:qualitative, for example, a user may want only the color of the people to be changed. For such situations, we provide a way for users to give their own adjustment maps to the system. fig:personalization shows some examples of the personalization. When the input image is forwarded, we substitute the extracted semantic adjustment map with the new adjustment map from the user. As shown in the figure, the proposed method effectively creates personalized images adjusted to the user's own style. § CONCLUSION In this paper, we proposed a deep neural network for semantics-aware photo adjustment. The proposed network learns the bilinear relationship between the color and the spatially varying scene context. With the semantic adjustment map, we can discover the inherent photo retouching presets within a style and apply them for personalized photo adjustment. To effectively train the network, we use a robust loss function and multi-task learning with the scene parsing task. The experimental results show that the proposed network outperforms an existing method both quantitatively and qualitatively.
http://arxiv.org/abs/1706.08260v1
{ "authors": [ "Seonghyeon Nam", "Seon Joo Kim" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170626073507", "title": "Deep Semantics-Aware Photo Adjustment" }
[email protected] [email protected] [email protected] [email protected] [email protected] ^aSaha Institute of Nuclear Physics, 1/AF Bidhannagar Kolkata - 700064, India ^bGovernment General Degree College at Kalna-I, Burdwan - 713405, India ^cVariable Energy Cyclotron Centre 1/AF Bidhannagar, Kolkata 700 064, India ^dHomi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai - 400085, India We calculate the momentum dependence of the ρ^0-ω mixing amplitude in vacuum with a vector nucleon-nucleon interaction in the presence of a constant homogeneous weak magnetic field background. The mixing amplitude is generated by the nucleon-nucleon (NN) interaction and thus driven by the neutron-proton mass difference along with the constant magnetic field. We find a significant effect of the magnetic field on the mixing amplitude. We also calculate the charge symmetry violating (CSV) NN potential induced by the magnetic field dependent mixing amplitude. The presence of the magnetic field influences the NN potential substantially, which can have important consequences in highly magnetized astrophysical compact objects, such as magnetars. The most important observation of this work is that the mixing amplitude is non-zero, leading to a positive contribution to the CSV potential, even if the proton and neutron masses are taken to be equal. 12.38.Mh, 13.75.Cs, 21.30.Fe, 21.65.Cd ρ^0-ω mixing in the presence of a weak magnetic field Mahatsab Mandal, Arghya Mukherjee, Snigdha Ghosh, Pradip Roy, Sourav Sarkar^c,d December 30, 2023 ===================================================== § INTRODUCTION Recent years have witnessed significant progress in understanding the properties of strongly interacting nuclear matter in a magnetic background <cit.>. Such studies draw their motivation both from heavy-ion collision experiments and from the physics of neutron stars. Magnetic fields with strengths of eB∼(m_π^2-15m_π^2) can be achieved in the laboratory in non-central heavy-ion collisions at RHIC and LHC <cit.>. On the other hand, a similar environment can be expected in the interior of magnetars <cit.>. Several novel properties of strongly interacting matter under extreme conditions have been studied, such as the chiral magnetic effect <cit.>, magnetic catalysis <cit.>, inverse magnetic catalysis <cit.>, the phase structure of QCD <cit.>, superconductivity of the vacuum <cit.>, properties of mesons <cit.>, photon polarization <cit.>, dilepton production <cit.>, and many more. Another phenomenologically important quantity to study concerns the charge symmetry of nuclear matter and its violation. Experimentally, charge symmetry violation (CSV) can be observed in a charge-conjugate system such as the difference between the pp and nn scattering lengths in the ^1S_0 state, with the experimental value Δ a_ CSV = a^N_pp-a^N_nn = 1.6 ± 0.6 fm <cit.>. Such a non-Coulombic interaction can also contribute to the binding energy difference of light mirror nuclei, which is known as the Nolen-Schiffer (NS) anomaly <cit.>. The CSV effect has been incorporated into the neutron-proton form-factor, the hadronic τ decay contribution <cit.>, the decay Ψ^'→ (J/Ψ)π^0, hadronic vacuum corrections to g-2 <cit.>, the pion form factor <cit.>, and isospin asymmetric nuclear matter <cit.>. At the level of QCD, CSV occurs via the small mass difference between up and down quarks and via the electromagnetic interaction of quarks <cit.>. Consequently, charge symmetry is violated at the hadronic level because of the neutron-proton mass difference.
The major contribution to CSV is the isospin mixing of vector mesons, mainly ρ^0-ω mixing <cit.>, in the one-boson-exchange model of the two-nucleon force. Other examples of meson mixing are π-η and π-η^' mixing <cit.>, the contribution of which is very small. The ρ^0-ω mixing is observed directly in the annihilation process e^+e^-→π^+π^-, from which the on-shell value of the mixing amplitude has been extracted from the experimental data at the ω pole, giving <ρ^0|H|ω> = -4520±600 MeV^2 <cit.>. However, the mixing amplitude is not momentum independent in the NN interaction, since the exchanged vector meson has a space-like four-momentum. The ρ^0-ω mixing amplitude at the ω (or ρ) pole is quite different in sign and magnitude from its value in the space-like region, which is the region pertinent to the construction of the CSV NN potential. Goldman, Henderson, and Thomas <cit.> find that the NN potential has a node at around 0.9 fm, implying that the potential changes sign. Similar results were reported using several different theoretical approaches, including mixing via a qq̅ loop driven by the u-d quark mass difference <cit.>, and via an NN̅ loop using the small neutron-proton mass difference <cit.>. Soon after their study it was argued in Ref. <cit.> that the strongly momentum dependent mixing amplitude must vanish at the transition from the time-like to the space-like region. Moreover, QCD sum-rule calculations <cit.> also give a large momentum dependence of the coupling. Since the NN potential involves the space-like region, the long range NN potential is strongly suppressed by the momentum dependence of the ρ^0-ω mixing amplitude. As argued in Ref. <cit.>, the off-shell dependence of ρ^0-ω mixing is not sufficient to determine the CSV potential. In contrast to the momentum dependent mixing amplitude, the "mixed propagator" field theory approach <cit.> would restore the conventional role of the ρ^0-ω mixing. It may further be noted that in asymmetric nuclear matter ρ^0-ω mixing plays an important role in determining the symmetry energy, which in turn affects the EOS of a neutron star. It has been argued in Ref. <cit.> that ρ^0-ω mixing has an important effect on the symmetry energy. In fact, the symmetry energy is softened both at sub- and super-saturation densities. It is also to be noted that the change in the symmetry energy modifies the equation of state (EOS) of nuclear matter. Since the mixing depends both on the magnetic field and on the density of the nuclear medium, we study here the B-dependent mixing in vacuum and intend to extend this calculation to nuclear matter in the near future. ρ^0-ω mixing in a magnetic field might also affect the cooling of neutron stars via neutrino emission through NN→ NNνν̅, where the NN cross section will be different because of the B-dependent ρ^0-ω mixing. In addition, the in-medium masses of ρ and ω will also be affected in a magnetic field due to ρ^0-ω mixing <cit.>. To explore the possible momentum dependence of the ρ^0-ω mixing amplitude in the presence of a weak external magnetic field, we revisit the problem of ρ^0-ω mixing in vacuum. The mixing amplitude is generated by the NN̅ loop and driven by the neutron-proton mass difference along with a background magnetic field. The effect of the external magnetic field on the fermionic propagators is taken into account using the Schwinger propagator <cit.>. In the present calculation we assume that the magnetic field strength is weak, i.e., eB ≪ m^2_ρ/ω, compatible with the strength observed in the interior of magnetars.
In the presence of a magnetic field, the momentum dependence of ρ^0-ω mixing amplitude is modified,and it will affect the CSV NN potential. Moreover, to examine the magnetic field dependent contribution, we also perform calculations with equal nucleon masses in vacuum. The paper is organized as follows. In Sec. II, we discuss the formalism required for the explicit calculation of the momentumdependent ρ^0-ω mixing amplitude in presence of a weak magnetic field.In Sec. III, we use the magnetic field dependent mixing amplitude to determine the CSV NN potential and discuss the numerical results.Finally in Sec. IV we conclude with a brief summary and discussions. Some details of the calculations are provided in the Appendix. § Ρ^0-Ω MESON MIXING AMPLITUDEIn the one-boson exchange (OBE) models, the NN interaction is mediated by the exchange of several mesons. For the purpose of this calculation, we are interested in the mixing between the neutral isovectorρ^0 meson and the isoscalar ω meson. The vector meson nucleon interaction Lagrangian corresponding toρ^0-ω mixing that we use is the following: ℒ_ω NN =g_ωΨ̅γ_μΦ^μ_ωΨ, ℒ_ρ NN =g_ρΨ̅[γ_ν+C_ρ/2 Mσ_μν∂^μ] τ·Φ^ν_ρΨ,where Ψ and Φ are the nucleon and meson fields, respectively. From the above interaction Lagrangianone can find thevertex factors Γ^μ_ω= g_ωγ^μ andΓ̃^ν_ρ=g_ρ[γ^ν+C_ρ/2Miσ^νλq_λ]. In this paper we use the coupling constants determined by the Bonn group <cit.>.The appropriate Bonn couplings are g_ω^2/4π = 10.6, g_ρ^2/4π=0.41 and C_ρ=f_ρ/g_ρ = 6.1.In the present calculation, NNω tensor coupling is not included for its negligible contribution. The ρ^0-ω mixing amplitude is generated because of the difference between proton and neutron loop contribution as shownin Fig.<ref>: Π^μν_ρω(q^2) =Π^μν(p)_ρω(q^2) - Π^μν(n)_ρω(q^2),where p(n) stands for proton (neutron).The polarization tensor of ρ^0-ω mixing due to NN excitations is calculated using standard Feynman rules and is given byiΠ^μν(N)_ρω (q^2) = ∫d^4k/(2π)^4 Tr[Γ^μ_ω(q)S_N(k)Γ̃^ν_ρ(-q) S_N(k+q)],where subscript N denotes either p (proton) or n (neutron). The Feynman propagator for the neutron isS_n(k) = k/+m_n/k^2-m_n^2 To include the effect of a constant background magnetic field, we use Schwinger's proper time method <cit.>.Without any loss of generality, we assume the magnetic field B along the z direction. As we are interested in theweak field regime, i.e., eB ≪ m^2_ρ/ω, the magnetic field dependent proton propagator can be written as powerseries in eB, that up to order (eB)^2 read as <cit.> S_p(k) = S^(0)(k)+ S^(1)(k)+ S^(2)(k)whereS^(0)(k)= k/+m_p/k^2-m_p^2S^(1)(k)=eBiγ_1γ_2(γ· k_||+m_p)/(k^2-m_p^2)^2S^(2)(k)=(eB)^2-2k_⊥^2/(k^2-m^2_p)^4[k/+m_p-γ· k_⊥/k^2_⊥(k^2-m_p^2)]We decompose the metric tensor into two parts g^μν=g^μν_||-g^μν_⊥,where g^μν_|| =diag(1,0,0,-1) and g^μν_⊥ =diag(0,1,1,0). Also,we use k_||^2 = k_0^2-k_3^2 and k_⊥^2 = k_1^2+k_2^2. The magnetic field independent vacuum contribution to the self-energy is iΠ^μν(N)_ρω (q^2)= ∫d^4k/(2π)^4 Tr[Γ^μ_ω(q)S_N(k)Γ̃^ν_ρ(-q) S_N(k+q)]=g_ω g_ρ∫d^4k/(2π)^4T^μν(k,k+q)1/(k^2-m_N^2+iϵ)((k+q)^2-m_N^2+iϵ)where T^μν(k,k+q) = (2k^μ k^ν + k^μ q^ν + k^ν q^μ-g^μν(k^2+k· q-m_N^2) +C_ρ/2 Mm_N(g^μνq^2-q^μ q^ν))After the momentum integration, one may write the field free polarization tensor asΠ^μν(N)_ρω(vac) (q^2)=(-g^μν+q^μ q^ν/q^2)Π^(N)_ρω(vac) (q^2), where Π^(N)_ρω(vac) (q^2) =-g_ρ g_ω/4π^2q^2∫_0^1 dx [2x(1-x)+C_ρ/2 Mm_N] (1/ϵ-γ_E-ln(Δ/μ^2)),where Δ = m_N^2-x(1-x)q^2, μ is an arbitrary renormalization scale. 
γ_E is the Euler-Mascheroni constant and ϵ=2-d/2 contains the singularity, which diverges as d→4.Since the individual self-energy contribution of proton and neutron diverges, the singularity can be removed by the differencebetween proton and neutron loop contribution and we obtain the magnetic field independent mixing amplitude as Π_ρω(vac) (q^2)= Π^(p)_ρω(vac) (q^2)-Π^(n)_ρω(vac) (q^2)= g_ρ g_ω/4π^2q^2∫_0^1 dx (2x(1-x)+C_ρ/2) ln[m_p^2-x(1-x)q^2/m_n^2-x(1-x)q^2] It can clearly be seen that if we do not distinguish between the proton and neutron mass, the mixing amplitude vanishes.In absence of magnetic field, the CSV NN potential in vacuum does not exist for m_p=m_n. We now discuss the magnetic field dependent ρ^0-ω mixing amplitude. In this paper, we are mainly concerned with the B-dependent mixing amplitude up to 𝒪((eB)^2) which is reasonable in the weak field regime. The first order contribution of magnetic field to ρ^0-ω mixing is (as explicitly shown in the Appendix A )iΠ^μν1(p)_ρω(vac) (q^2)= ∫d^4k/(2π)^4 Tr[Γ^μ_ω(q)S^(0)_p(k)Γ̃^ν_ρ(-q) S^(1)_p(k+q) +Γ^μ_ω(q)S^(1)_p(k)Γ̃^ν_ρ(-q) S^(0)_p(k+q)] iΠ^μ1(p)_ μ,ρω(vac) (q^2) = -8i C_ρ/2 M m_p eB g_ω g_ρ∫d^4k/(2π)^4ϵ^αλρσk_α q_λb_ρ u_σ1/(k^2-m_p^2)((k+q)^2-m_p^2)^2 = 0Hence, the linear order contribution of order eB vanishes.The second order contribution of magnetic field in ρ^0-ω mixing is given by(see Appendix B for details)iΠ^μν2(p)_ρω(vac) (q^2)= ∫d^4k/(2π)^4 Tr[Γ^μ_ω(q)S^(2)_p(k)Γ̃^ν_ρ(-q) S^(0)_p(k+q) +Γ^μ_ω(q)S^(0)_p(k)Γ̃^ν_ρ(-q) S^(2)_p(k+q)+ Γ^μ_ω(q)S^(1)_p(k)Γ̃^ν_ρ(-q) S^(1)_p(k+q)]Π^μ2(p)_ μ,ρω(vac) (q^2)=(eB)^2 g_ω g_ρ/π^2∫_0^1 dx [x^3 [1/Δ+x(1-x)q^2+x(4x-1)q^2_⊥+2m_p^2/3Δ^2 +2x^2[x(1-x)q^2+2m_p^2]q^2_⊥/3Δ^3]+x^2 [1/Δ-x(1-x)q^2_⊥/Δ^2] -x(1-x)[1/2Δ+2x(1-x)q^2_||-m_pC_ρ/2M̅(xq^2-(x+1)q^2_||)/4Δ^2]] It is clearly seen that the contribution of the magnetic field dependent mixing amplitude is finite; i.e., no divergences appearin the weak field limit. The correction term that is quadratic in field strength eB contributes tothe ρ^0-ω mixingamplitude and we can express the magnetic field dependent part asΠ^ eB_ρω(vac) = -1/3Π^μ2(p)_ μ,ρω(vac).In presence of the external magnetic field, the total contribution to the mixing amplitude can be written as Π_ρω^ t(q^2)= Π_ρω(vac) (q^2)+ Π^ eB_ρω(vac)(q^2) In absence of magnetic field, we obtain the mixing amplitude at the on-shell ω and ρ meson pointΠ_ρω(m^2_ω) = -4314 MeV^2 andΠ_ρω(m^2_ρ) = -4152 MeV^2 respectively,which compares well with the experimental values <cit.>.In Fig. <ref>(a) we have shown the variationof the mixing amplitude at the point (q^2=m^2_ρ/ω) with weak external magnetic field. We have used thecondition that the strength of the external field is much lower than the square of the vector meson mass,i.e., eB ≪ m^2_ρ/ω. In both the meson mass, we have observed that the the mixing amplitude,Π_ρω(q^2=m^2_ρ/ω) decreases with the increase of external magnetic field strength. In presence of background magnetic field, the mixing amplitude is non-zero, even in the limit m_p=m_nas shown in Fig. <ref>(b). It is seen that, taking the limit (m_p=m_n), the mixing amplitudevanishes at eB =0 and hence, we see a decreasing behavior of mixing amplitude with increasing eB. The momentum dependence of the ρ^0-ω mixing amplitude is displayed in Fig. <ref> at a different magneticfield strength. In absence of eB, the mixing amplitude has a node at exactly q^2=0 <cit.>and, consequently, there is a change of sign of the mixing amplitude.Fig. 
<ref>(a) displays the mixing amplitude, which is diminished with increasing values of eB at the same values of q^2_⊥. It is also clearly noticed that the value of Π_ρω^ t decreases with increasing q^2_⊥ at fixed values of the background magnetic field. Similar behavior can be observed in Fig. <ref>(b), where eB is varied keeping q^2_|| fixed. The effect of the magnetic field on the mixing amplitude is greater in the time-like region than in the space-like region. It is clearly visible that the node is shifted towards the space-like region in the presence of the magnetic field. § CHARGE SYMMETRY VIOLATING POTENTIAL Now we will evaluate the CSV NN potential induced by the ρ^0-ω mixing in the presence of an external weak magnetic field. The momentum space CSV potential due to ρ^0-ω mixing is given by <cit.>: V^NN_ρω( q) = - g_ω g_ρΠ_ρω^ t( q)/( q^2+m^2_ρ)( q^2+m^2_ω) Here, we neglected the contribution due to the external legs. Because of the extended structure of hadrons, one needs to incorporate meson-nucleon vertex corrections, which are sufficient to take into account the inner structure of the hadrons. In our analysis, form factors are introduced by parameterizing the point coupling as <cit.>: g_i→ g_i(Λ_i^2-m^2_i/Λ_i^2+ q^2) The cutoff parameter Λ_i can be related directly to the hadron size, and the numerical values of the cutoffs (Λ_i) are determined from fits to the empirical NN data <cit.>. To convert the CSV potential to configuration space, we make use of the identity 1/( q^2+m^2_ρ)( q^2+m^2_ω) = 1/m^2_ω-m^2_ρ(1/ q^2+m^2_ρ -1/ q^2+m^2_ω), and find the CSV potential with the on-shell mixing amplitude in coordinate space through the Fourier transformation of Eq. <ref>. This yields the result V^NN_ρω(r) = -g_ω g_ρ/4πΠ_ρω^ t(m^2_ω)/m^2_ω-m^2_ρ(e^-m_ρ r/r-e^-m_ω r/r) With the inclusion of form factors the CSV potential reduces to V^NN_ρω(r)=-g_ω g_ρ/4πΠ_ρω^ t(m^2_ω)/m^2_ω-m^2_ρ[Λ^2_ω-m^2_ω/Λ^2_ω-m^2_ρe^-m_ρ r/r- Λ^2_ρ-m^2_ρ/Λ^2_ρ-m^2_ωe^-m_ω r/r+ m^2_ω-m^2_ρ/Λ^2_ω-Λ^2_ρ(Λ^2_ω-m^2_ω/Λ^2_ρ-m^2_ωe^-Λ_ρ r/r-Λ^2_ρ-m^2_ρ/Λ^2_ω-m^2_ρe^-Λ_ω r/r)] It is to be noted that in the limit Λ_i→∞, Eq. <ref> reduces to Eq. <ref>. In Fig. <ref> we show the contribution to the NN potential in configuration space with the constant on-shell mixing amplitude. We see that there is a stronger suppression of the NN potential going from the point coupling to the form factor. The magnetic field dependent mixing amplitude leads to a clear enhancement of the NN potential compared with the magnetic field independent on-shell mixing amplitude. As we have already mentioned, the ρ^0-ω mixing amplitude is strongly dependent on momentum. Here, we calculate the contribution of the off-shell dependence of ρ^0-ω mixing to the CSV potential. The magnetic field independent CSV potential can be obtained analytically <cit.>, but in the case of non-zero eB we discuss the numerical results. We solve the magnetic field dependent CSV potential for two special cases: (a) B||r and (b) B⊥ r. In Fig. <ref> we present the role of the off-shell contribution of ρ^0-ω mixing in the CSV NN potential. The contribution of the background magnetic field to the NN potential is clearly shown in both graphs. We see that the B-independent CSV potential has a node around 0.9 fm <cit.> with form factors. As the magnetic field is turned on, the node in the potential occurs at around 0.35 fm (at eB = 0.05 GeV^2). We also notice that the non-zero B-dependent ρ^0-ω contribution to the NN interaction is found to be much larger than that without the B-dependent mixing amplitude.
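As a quick numerical check of the on-shell point-coupling formula above, the short Python sketch below evaluates V^NN_ρω(r). The couplings and the on-shell amplitude are taken from the text, while the meson masses (m_ρ ≈ 775 MeV, m_ω ≈ 783 MeV) and the conversion constant ħc = 197.327 MeV fm are standard values that we supply ourselves:

```python
import numpy as np

HBARC = 197.327                       # MeV fm
g_om = np.sqrt(4.0 * np.pi * 10.6)    # Bonn couplings: g_omega^2/4pi = 10.6
g_rho = np.sqrt(4.0 * np.pi * 0.41)   #                 g_rho^2/4pi  = 0.41
m_rho, m_om = 775.0, 783.0            # MeV (assumed standard values)
Pi_os = -4314.0                       # MeV^2, on-shell amplitude at q^2 = m_omega^2

def V_csv(r_fm):
    """On-shell CSV potential without form factors; returns MeV for r in fm."""
    r = r_fm / HBARC                  # convert fm to MeV^-1 (natural units)
    yukawa = (np.exp(-m_rho * r) - np.exp(-m_om * r)) / r
    return -(g_om * g_rho / (4.0 * np.pi)) * Pi_os / (m_om**2 - m_rho**2) * yukawa

for r in (0.5, 1.0, 2.0):             # a few radii in fm
    print(r, V_csv(r))
```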
It is also interesting to examine the CSV potential in the weak field regime for m_p=m_n, as shown in the inset of Fig. <ref>. It is seen that the effect of the magnetic field on the NN potential is always positive in the space-like region, and consequently there is no node in the NN potential, which leads to a significant effect on CSV. § SUMMARY AND CONCLUSION In the present paper, we have investigated the momentum dependence of the ρ^0-ω mixing amplitude, as well as the role of this momentum dependence in the CSV NN potential, in the presence of an external magnetic field for the first time. The ρ^0-ω mixing was assumed to be generated by the NN̅ loops and hence driven by the neutron-proton mass difference along with a constant magnetic field. We have restricted ourselves to the weak field limit, where the external field satisfies eB≪ m^2_ρ/ω, and used Schwinger's proper-time method to describe the fermionic propagator. The effect of the background magnetic field appears as a correction to the momentum dependence of the ρ^0-ω mixing amplitude, which is relevant for studying the properties of magnetars and of the magnetized hadronic medium produced in relativistic heavy-ion collisions. Although in the weak field limit the first correction is quadratic in the field, one also has to take into account the linear-order correction of the fermionic propagator in B. We find that the presence of the magnetic field modifies the mixing amplitude. It is seen that the mixing amplitude decreases with increasing strength of the magnetic field at the on-shell meson mixing point. This happens even if the Hamiltonian preserves the isospin symmetry, i.e., m_p=m_n. It is important to note that the change in the sign of the momentum dependent ρ^0-ω mixing amplitude is shifted towards the space-like region for non-zero eB, in contrast to the result found in the absence of a magnetic field. Furthermore, the NN potential generated by the off-shell dependence of ρ^0-ω mixing is evaluated numerically. We have found that a node in the NN potential occurs at r ∼ 0.35 fm for eB = 0.05 GeV^2. Interestingly, we also find that the effect of the magnetic field on the NN potential is always positive in the space-like region if the nucleon masses are taken to be equal. Moreover, one needs to extend this calculation to the dense medium to study the changes in various properties of magnetars. § APPENDIX A: CALCULATION OF Π^μ1(p)_ μ,ρω(vac) We have iΠ^μν1(p)_ρω(vac) = ∫d^4k/(2π)^4eBg_ω g_ρ[ T_1^μν1/(k^2-m_p^2)((k+q)^2-m_p^2)^2+T_2^μν1/(k^2-m_p^2)^2((k+q)^2-m_p^2)] where T_1^μν1 =Tr[γ^μ(k/+m_p)(γ^ν-C_ρ/2M̅iσ^νλq_λ) iγ_1γ_2(γ· (k+q)_||+m_p)], T_2^μν1 =Tr[γ^μ iγ_1γ_2(γ· k_||+m_p) (γ^ν-C_ρ/2M̅iσ^νλq_λ)((k/+q/)+m_p)]. We use iγ_1γ_2 = -γ^5 b/u/, with u^μ = (1,0,0,0) and b^μ = (0,0,0,1).
Using that T^μ1_ μ1 =-4i C_ρ/2M̅ m_p ϵ^αλρσk_α q_λ b_ρ u_σT^μ1_ μ2 =-4i C_ρ/2M̅ m_p ϵ^αλρσ(k+q)_α q_λ b_ρ u_σTherefore, we can write the linear order contribution of magnetic field in the ρ^0-ω mixing amplitude iΠ^μ1(p)_ μ,ρω(vac) (q^2) = -8i C_ρ/2 M̅ m_peBg_ω g_ρ∫d^4k/(2π)^4ϵ^αλρσk_α q_λb_ρ u_σ1/(k^2-m_p^2)((k+q)^2-m_p^2)^2=-8i C_ρ/2 M̅ m_peBg_ω g_ρ∫_0^1 dx 2x ∫d^4k/(2π)^4ϵ^αλρσ q_λb_ρ u_σ(k-xq)_α/[k^2-Δ]^3= 0Here, the integration involving linear terms in k is zero and ϵ^αλρσ q_α q_λ =0 due tothe antisymmetric properties of Levi-Civita tensor.§ APPENDIX B: CALCULATION OF Π^Μ2(P)_ Μ,ΡΩ(VAC)We have iΠ^μν2(p)_ρω(vac) = ∫d^4k/(2π)^4(eB)^2g_ω g_ρ[ T_1^μν2-2k^2_⊥/(k^2-m_p^2)^4((k+q)^2-m_p^2)+ T_2^μν2-2(k+q)^2_⊥/(k^2-m_p^2)((k+q)^2-m_p^2)^4+T_3^μν21/(k^2-m_p^2)^2((k+q)^2-m_p^2)^2]whereT_1^μν2 =Tr[γ^μ(k/+m_p-γ· k_⊥/k_⊥^2(k^2-m_p^2)) (γ^ν-C_ρ/2M̅iσ^νλq_λ) (k/+q/+m_p)],=4[k^μ p^ν+p^μ k^ν-g^μν(k· p)-k^2-m^2_p/k^2_⊥ (k_⊥^μ p^ν+p^μ k_⊥^ν-g^μν(k_⊥· p))+g^μνm^2_p -m_pC_ρ/2M̅[q^μ p^ν -g^μν(p· q)+g^μν(k· q)-q^μ k^ν -k^2-m^2_p/k^2_⊥(g^μν(q· k_⊥)-q^μ k^ν_⊥)] ]T^μν2_2=Tr[γ^μ(k/+m_p)(γ^ν-C_ρ/2M̅iσ^νλq_λ) (k/+q/+m_p-γ· (k+q)_⊥/(k+q)_⊥^2((k+q)^2-m_p^2))],= 4[k^μ p^ν+p^μ k^ν-g^μν(k· p)-p^2-m^2_p/p^2_⊥ (k^μ p_⊥^ν+p_⊥^μ k^ν-g^μν(k· p_⊥))+g^μνm^2_p +m_pC_ρ/2M̅[q^μ k^ν -g^μν(k· q)+g^μν(p· q)-q^μ p^ν -p^2-m^2_p/p^2_⊥(g^μν(q· p_⊥)-q^μ p^ν_⊥)] ]where p=k+q. Now, we replace k↔ k+q and we findT^μν2_2 =4[k^μ p^ν+p^μ k^ν-g^μν(k· p)-k^2-m^2_p/k^2_⊥ (k_⊥^μ p^ν+p^μ k_⊥^ν-g^μν(k_⊥· p))+g^μνm^2_p +m_pC_ρ/2M̅[q^μ p^ν -g^μν(p· q)+g^μν(k· q)-q^μ k^ν -k^2-m^2_p/k^2_⊥(g^μν(q· k_⊥)-q^μ k^ν_⊥)] ]and T^μν2_3=Tr[γ^μ iγ_1γ_2(γ· k_||+m_p) (γ^ν-C_ρ/2M̅iσ^νλq_λ) iγ_1γ_2(γ· (k+q)_||+m_p)]The contribution of the magnetic field comes from the 𝒪((eB)^2) terms:iΠ^μ2(p)_ μ,ρω(vac) = ∫d^4k/(2π)^4(eB)^2g_ω g_ρ[32[(k^2+k· q-2m_p^2)k^2_⊥/(k^2-m_p^2)^4((k+q)^2-m_p^2) +k^2_⊥+k_⊥· q_⊥/(k^2-m_p^2)^3((k+q)^2-m_p^2)]+ 8 k· (k+q)_|| +4 m_pC_ρ/2M̅(k_||· q_||-k· q -q_||^2)/(k^2-m_p^2)^2((k+q)^2-m_p^2)^2] Using the standard procedure of Feynman parametrization and evaluation of the momentum integral and Eq. <ref>reduce to Eq. <ref>50lectnote871D. E. Kharzeev, K. Landsteiner, A. Schmitt and H. U. Yee, Lect. Notes Phys. 871, 1 (2013).nuclphy803 D. E. Kharzeev, L. D. McLerran and H. J. Warringa,  Nucl.Phys. A 803,227 (2008). ijmp_24 V. Skokov, A. Y. Illarionov, and V. Toneev, Int. J. Mod. Phys. A 24, 5925 (2009).AJ392 R. C. Duncan and C. Thompson, Astrophys. J. 392, L9 (1992) prl95 E. J. Ferrer, V. de la Incera and C. Manuel, Phys. Rev. Lett. 95, 152002 (2005).npb747 E. J. Ferrer, V. de la Incera and C. Manuel, Nucl. Phys.B747, 88 (2006). prd76 E. J. Ferrer and V. de la Incera, Phys. Rev. D 76, 045011 (2007). prl100 K. Fukushima and H. J. Warringa, Phys. Rev. Lett. 100, 032007 (2008). prl105 B. Feng, D. Hou, H. c. Ren and P. P. Wu, Phys. Rev. Lett. 105, 042001 (2010). prd82 S. Fayazbakhsh and N. Sadooghi, Phys. Rev. D 82, 045010 (2010).nuclphy797 D. E. Kharzeev, and A. Zhitnitsky, Nucl.Phys. A 797,67 (2007). prd78 K. Fukushima, D. E. Kharzeev and H. J. Warringa,Phys.Rev. D 78, 074033 (2008). annphy_325 D. E. Kharzeev, Ann. Phys. (N.Y)  325, 205 (2010).nuclphy462V. P. Gusynin, V. A. Miransky and I. A. Shovkovy, Nucl.Phys.B462, 249 (1996),Nucl.Phys.B563, 361 (1999).jhep1202G. S. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. D. Katz, S. Kreig, A. Schafer and K. K. Szabo, JHEP 1202, 044 (2012).prd82_085011 M. N. Chernodub, Phys. Rev. D 82, 085011 (2010).prl106M. N. Chernodub, Phys. Rev. Lett. 106, 142003 (2011). arxiv1208 M. N. Chernodub,  Lect. Notes Phys.  
871, 143 (2013).prd86 J. O. Andersen, Phys. Rev. D 86 025020 (2012). plb_728 G. Colucci, E. S. Fraga, and A. Sedrakian, Phys. Lett.B728 19 (2014). prd91 H. Liu, L. Yu, and M. Huang, Phys. Rev. D 91, 014017 (2015). PhyRevD_93 S. P. Adhya, M. Mandal, S. Biswas, and P. K. Roy, Phys. Rev. D 93, 074033 (2016).prd94_09403 S. Ghosh, A. Mukherjee, M. Mandal, S. Sarkar, and P. Roy, Phys. Rev. D 94, 094043 (2016). arxiv_1704 S. Ghosh, A. Mukherjee, M. Mandal, S. Sarkar, and P. Roy, arXiv:1704.05319 [hep-ph]. prd_sub A. Mukherjee, S. Ghosh, M. Mandal, S. Sarkar, and P. Roy, Submitted in Phys. Rev. D.annphy_330 K. Hattori and K. Itakura, Ann. Phys. (Amsterdam)330 23 (2013),224, 58 (2013). prd88 F. KarbsteinPhys. Rev. D88 085033 (2013).prc88 K. Tuchin, Phys. Rev. C 88 024910 (2013). prd94 A. Bandyopadhyay, C. A. Islam and M. G. Mustafa, Phys. Rev. D94, 114034 (2016). annphy_376 N. Sadooghi and F. Taghinavaz, Annals Phys. 376, 218 (2017). arxiv_1704_01364 A. Bandyopadhyay and S. Mallik, arXiv:1704.01364 [hep-ph].PhyRep194 G. A. Miller, M. K. Nefkens, and I. Slaus, Phys. Rep. 194, 1 (1990). plb_444 C. R. Howellet al., Phys. LettB444, 252 (1998). prl83 D. E. Gonzalez Trotteret al., Phys.Rev.Lett83,3788 (1999).annrevnucsci19 J. A. Nolen, and J. P. Schiffer, Annu. Rev. Nucl. Sci. 19, 471 (1969). phylett11 K. Okamoto, Phys. Lett. 11, 150 (1964). plb277 L. N. Epele, H. Fanchlottl, C. A. Garcia Canal, and G. A. Gonzfilez Sprlnberg, Phys. LettB277, 33 (1992). prc63 R. Machleidt, and H. Muther,Phys. Rev. C 63, 034005 (2001).annrevnucsci56 G. A. Miller, A. K. Opper, and E. J. Stephenson, Ann. Rev. Nucl. Part. Sci. 56, 253 (2006). prd83 C. E. Wolfe and K. Maltman, Phys. Rev. D 83, 077301 (2011). ppnp39 H. B. O'Connell, B. C. Pearce, A. W. Thomas, and A. G. Williams, Prog. Part. Nucl. Phys.39, 201 (1997).plb339 A. K. Dutt-Mazumder, B. Dutt-Roy, and A. Kundu, Phys. Lett.B399, 196 (1997). prc63_015204A. K. Dutt-Mazumder, R. Hofmann, and M. Pospelov, Phys. Rev. C 63, 015204 (2000).jpg35 P. Roy, A. K. Dutt-Mazumder, S. Sarkar, and Jan-e Alam,J. Phys.G35, 065106, (2008).npa249 P. C. McNamee, M. D. Scadron, and S. A. Coon, Nucl. Phys.A249, 483 (1975). npa287 S. A. Coon, and M. D. Scadron, Nucl. Phys.A287, 381 (1977). prc26 S. A. Coon and M. D. Scadron, Phys. Rev. C26, 562 (19982). prc48 J. Piekarewicz, Phys. Rev. C48,1555 (1993). PhyRevC_36 S. A. Coon and R. C. Barrett,Phys. Rev. C 36, 2189 (1987). PhyRevD_62 T. K. Chyi, C. W. Hwang, W.F. Kao, G.L.Lin, K. W. Ng, and J. J. Tseng, Phys. Rev. D62, 105014 (2000). FBS_12 T. Goldman, J. A. Henderson, and A. W. Thomas, Few-Body Syst.12, 123 (1992).PhyLettB_317 G. Krein, A. W. Thomas, and A.G. Williams Phys. Lett.B317 293 (1993).PhyLettB_336 H. B. O'Connell, B.C. Pearce, A.W. Thomas, and A.G. Williams, Phys. Lett.B336 1 (1994).PhyRevC_47 J. Piekarewicz and A. G. Williams, Phys. Rev. C 47, R2462 (1993).PhyRevC_49 T. Hatsuda, E. M. Henley, Th. Meissner and G. Krein,Phys. Rev. C 49, 452 (1994).PhyRevC_52 T. D. Cohen and G. A. Miller, Phys. Rev. C 52, 3428 (1995).Aip_412 S. A. Coon, B. H. J. McKellar, and A. A. Rawlinson, AIP Conf. Proc. 412, 368 (1997). Ppnp_39 H.B. O’Connell, B.C. Pearce, A.W. Thomas, and A.G. Williams, Prog. Part. Nucl. Phys.39, 201 (1997).WS_127 G. A. Miller and W. H. T. van Oers, in Symmetries and Fundamental Interactions in Nuclei, edited by W. C. Haxton andE. M. Henley (World Scientific, Singapore, 1995), p. 127. PhyRevC_80 Wei-Zhou Jiang and Bao-An Li, Phys. Rev. C 80, 044322 (2009).PhyRep82J. Schwinger, Phys. Rev. 82,664 (1951).PhyRevC_78 S. Biswas, P. 
Roy, and A. K. Dutt-Mazumder, Phys. Rev. C 78, 045207 (2008).
http://arxiv.org/abs/1706.08895v1
{ "authors": [ "Mahatsab Mandal", "Arghya Mukherjee", "Snigdha Ghosh", "Pradip Roy", "Sourav Sarkar" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170627145117", "title": "$ρ^0-ω$ mixing in the presence of a weak magnetic field" }
On Whitham and related equations Christian Klein, Institut de Mathématiques de Bourgogne, UMR 5584, Université de Bourgogne-Franche-Comté, 9 avenue Alain Savary, 21078 Dijon Cedex, [email protected] Felipe Linares, IMPA, Estrada Dona Castorina 110, Rio de Janeiro 22460-320, RJ, [email protected] Didier Pilod, Instituto de Matemática, Universidade Federal do Rio de Janeiro, Caixa Postal 68530 CEP 21941-97, Rio de Janeiro, RJ Brasil, [email protected] Jean-Claude Saut, Laboratoire de Mathématiques, UMR 8628, Université Paris-Sud et CNRS, 91405 Orsay, France, [email protected] June 25, 2017 ================= The aim of this paper is to study, via theoretical analysis and numerical simulations, the dynamics of Whitham and related equations. In particular we establish rigorous bounds between solutions of the Whitham and KdV equations and provide some insights into the dynamics of the Whitham equation in different regimes, some of them being outside the range of validity of the Whitham equation as a water waves model. § INTRODUCTION The Whitham equation was introduced formally in <cit.> as an alternative to the KdV equation, by keeping the exact dispersion of the water waves system. No rigorous derivation from the water waves system is known, and there is no consistent scaling allowing one to derive it. Actually the idea in <cit.> was to propose a model that also describes the occurrence of waves of greatest height, with the Stokes 120 degrees angle at the crest, which is clearly impossible with the KdV equation. As was conjectured in <cit.> and later proved in <cit.>, this is actually the case (with a different angle though), but this wave breaking phenomenon seems to be irrelevant in the (KdV) regime where the Whitham equation can be really viewed as a consistent water wave model, as will be discussed in the present paper. In short, and as was noticed by Whitham, the desired qualitative effect is present, but this wave breaking effect cannot be quantitatively linked with the water waves system and thus to real waves. From the modeling point of view, the advantage of the Whitham equation with respect to KdV seems to be the enlargement of the range of validity of this asymptotic model in terms of frequencies, though this is not so easy to quantify. On the other hand, due to the very different behavior of the dispersion at low and large frequencies, the Whitham equation is a fascinating mathematical object since, as will be discussed later, it has three interesting different asymptotic regimes. One should thus distinguish between the usefulness of the Whitham equation as a relevant water waves model, which seems to be poor (see <cit.> for some preliminary comparisons with experiments), and its mathematical properties, which are rich, in particular as a useful and relevant toy model to provide some insights into the effect of adding a weak dispersion term to a nonlinear hyperbolic equation. We will restrict ourselves in this paper to the Whitham equation posed on the real line. Many interesting aspects of the periodic problem, in particular concerning periodic solitary waves, can be found in <cit.>.
However our results concerning the Cauchy problem and its link with the KdV one are straightforwardly valid in the periodic case.In order to make the link with the KdV equation (and thus with the full water wave system) we will write the Whitham equation asin <cit.> : u_t+ℒ_ϵ u_x+ϵ uu_x=0,where the non local operator ℒ_ϵ is related to the dispersion relation of the (linearized) water waves system and is defined byℒ_ϵ =l(√(ϵ)D):=(tanh√(ϵ)|D|/√(ϵ)|D|)^1/2and D=-i∇=-i∂/∂ x. The (small) parameter ϵ measures the comparable effects of nonlinearity and dispersion (see below). The Whitham equation is supposed to be an approximation of the full water waves system on time scales of order 1/ϵ, see <cit.> and Section 2 below. Taking the formal limit √(ϵ)|ξ| → 0 inℒ_ϵ, (<ref>) reduces to the KdV equationu_t+u_x +ϵ uu_x+ϵ/6u_xxx=0. We do not know of a complete rigorous justification of the Whitham equation from the water waves system. More precisely no correct scaling seems to exist allowing to connect directly and rigorously the Whitham equation and the water wave system. See however <cit.> pages 213-214 and Section 2 below where a comparison with the KdV equation is displayed, justifying thus the Whitham equation via the KdV approximation of weakly nonlinearlong surface water waves. On the other hand the Whitham equationcan be viewed as the one-dimensional restriction of the full dispersion KP equation introduced in <cit.> to overcome the badbehavior of the dispersion relation of the usual KP equations at low frequencies in x (see also the analysis in<cit.>). We refer to <cit.> for a further study of the Cauchy problem and to <cit.> for the existence of localized solitary waves, closeto the usual KP I ones in the case of strong surface tension): ∂_t u+c̃_WW(√(ϵ)|D^ϵ|)(1+ϵD_2^2/D^2_1)^1/2 u_x+ϵ3/2 uu_x=0,withc̃_WW(√(ϵ)k)=(1+βϵ k^2)^1/2(tanh√(ϵ)k/√(ϵ)k)^1/2,where β≥0 is a dimensionless coefficient measuring the surface tension effects and |D^ϵ|=√(D_1^2+ϵ D_2^2),D_1=1/i∂_x, D_2=1/i∂_y. The link with the full water wave system is via the choice of parameters (see <cit.>). Denoting by h a typical depth of the fluid layer, a a typical amplitude of the wave, λ_x and λ_y typical wave lengths in x and y respectively, the relevant regime here is whenϵ∼a/h∼(λ_x/λ_y)^2∼(h/λ_x)^2≪ 1. For purely gravity waves, β =0, (<ref>) becomes ∂_t u+ c_WW(√(ϵ)|D^ϵ|)(1+ϵD_2^2/D^2_1)^1/2 u_x+ϵ3/2 uu_x=0,withc_WW(√(ϵ)k)=(tanh√(ϵ)k/√(ϵ)k)^1/2, which reduces to theWhitham equation (<ref>) when u does not depend on y. In presence of surface tension (β>0) (<ref>) reduces when u does not depend on y to the capillary Whitham equation (see <cit.>)u_t+ℒ_ϵ u_x+ϵ uu_x=0,whereℒ_ϵ =(1+βϵ|D|^2)^1/2(tanh√(ϵ)|D|/√(ϵ)|D|)^1/2,β>0.Taking the formal limit √(ϵ)|ξ| → 0 inℒ_ϵ, (<ref>) reduces to the KdV equationu_t+u_x+ϵ/2(1/3-β) u_xxx+ϵ uu_x=0.Note also that the Whitham equation with surface tension looks, for high frequencies, like the following fractional KdV (fKdV) equation with α=1/2u_t+u_x+uu_x-|D|^α u_x=0.We refer to <cit.> for some properties of the fKdV equations viewed as toy models to study the influence of a weakdispersive perturbation on the dynamics of the Burgers equation.There is a Boussinesq like, system version of (<ref>) for waves propagating in both directions. As for the Whitham equation it cannot been derived directly from the water wave system by a consistent asymptotic analysis but by a rather heuristic and formal argument. 
Actually, one obtains a full dispersion system when in the Boussinesq regime one keeps (formally) the original dispersion of the water waves system (see <cit.>, <cit.>, and <cit.> where interesting numerical simulations of the propagation of solitary waves are performed).[As noticed in <cit.>, the use of nonlocal models for shallow water waves is also suggested in <cit.>.]Setting again ℒ_ϵ =(tanh√(ϵ)|D|/√(ϵ)|D|)^1/2, we get with D=-i∇ or -i∂_x:{[ η_t+ℒ_ϵ ^2u_x+ϵ (η u)_x=0;u_t+ η_x+ϵ uu_x=0, ].when d=1 and{[ η_t+ℒ_ϵ^2 ∇· u+ϵ∇·(η u)=0; u_t+∇η+ϵ/2∇ | u|^2=0, ].when d=2.Takingthe limit √(ϵ)|ξ| → 0 in ℒ_ϵ,(<ref>) reduces formally to{[ η_t+u_x+ϵ/3 u_xxx+ϵ (η u)_x=0;u_t+ η_x+ϵ uu_x=0, ].while in the two-dimensional case,(<ref>) reduces in the same limit to{[ η_t+∇· u+ϵ/3Δ∇· u+ϵ∇·(η u)=0;u_t+∇η+ϵ/2∇ | u|^2=0, ].that is to the (linearly ill-posed) system one getsby expanding to first order the Dirichlet to Neumann operator with respect to ϵ in the full water wave system (see <cit.>). On the other hand, (<ref>) and (<ref>) are linearly well-posed. As will be seen below the Cauchy problem for the nonlinear system is locally well-posed under a non physical condition on the initial data though.System(<ref>) is also known in the Inverse Scattering community as the Kaup-Kupperschmidtsystem (see <cit.>). It is completely integrable though linearly ill-posed since the eigenvalues of the dispersion matrix are± iξ(1-ϵ/3ξ^2)^1/2 (we refer to <cit.> for an analysis of the ill-posedness of the nonlinear Kaup and related systems).Ithas explicit solitary waves (see <cit.>). The Boussinesq system (<ref>) can therefore be seen as a (well-posed) regularization of the Kaup-Kupperschmidtsystem. It is not knownto be completely integrable (this is unlikely, see in particular the simulations in Section 7). The full dispersion Boussinesq system hasthe following Hamiltonian structure∂_t[ η; u ]+Jgrad H_ϵ(η,u)=0whereJ=[ 0 ∂_x ∂_y; ∂_x 0 0; ∂_y 0 0 ], H_ϵ(U)=1/2 ∫_ ^2(|ℒ_ϵu|^2+η^2+ϵη|u|^2)dxdy, U=[ η; u ],when d=2 and∂_t[ η; u ]+Jgrad H_ϵ(η,u)=0whereJ=[ 0 ∂_x; ∂_x 0 ]andH_ϵ(η,u)=1/2∫_ (|ℒ_ϵ u|^2+η^2+ϵ u^2η)dx,when d=1.We will see in the next section that thefull dispersion Boussinesq system has mathematical properties that make it doubtful as a relevant water wave model.When surface tension is taken into account, one should replace theoperator ℒ_ϵ^2 by 𝒫_ϵ=(I+βϵ|D|^2)(tanh(√(ϵ)|D|)/√(ϵ)|D|) where again the parameter β>0 measures surface tension (see <cit.>), yielding a more dispersive full dispersion Boussinesq system. When β>1/3, this full dispersion Boussinesq system yields inthe limit√(ϵ)|ξ| → 0 in 𝒫_ϵ, Boussinesq systems of the class a<0, b=c=d=0 (see <cit.>) for which long time (that is on time scales of order 1/ϵ) well-posedness is established in <cit.>, Theorem 4.5.When β<1/3, the full dispersion Boussinesq system reduces in the formal limit√(ϵ)|ξ| → 0 in 𝒫_ϵ, to an ill-posed system, analogous to the Kaup system in dimension 1.AnotherWhitham-Boussinesq system is introduced in<cit.>.[This system seems a bit artificial contrary to(<ref>), (<ref>) which are the exact counter parts of theWhitham equation with respect to the original Boussinesq system] It writes {[η_t+u_x+ϵ (η u)_x=0; u_t+𝒫_ϵη_x+ϵ uu_x=0, ].when d=1 and{[ η_t+ ∇· u+ϵ∇·(η u)=0; u_t+𝒫_ϵ∇η+ϵ/2∇ | u|^2=0, ].when d=2. Inthe limit √(ϵ)|ξ| → 0 in ℒ_ϵ,those systemsreduce formally to{[ η_t+u_x+ϵ (η u)_x=0; u_t+η_x + ϵ(1/3-β)η_xxx+ϵ uu_x=0, ].in dimension oneand to {[ η_t+∇· u+ϵ∇·(η u)=0; u_t+∇η+ ϵ(1/3-β)∇Δη +ϵ/2∇ | u|^2=0, ]. in the two-dimensional case. 
Both systems are ill-posed when 0≤β <1/3 while when β >1/3 they belong to the class of a=b=d=0, c<0 of classical (a,b,c,d) Boussinesq systems for which existence on time scales of order 1/ϵ is established in <cit.>, Theorems 4.6 and 4.7. The local well-posedness for (<ref>) is established in <cit.>.According to previous theoretical results and numerical simulations,one expects at leastthree different regimes for the Whitham equation (<ref>)(without surface tension):1. Scattering for smallinitial data (see the simulations in<cit.> and in the last Section 7 of the present paper).2. Finite time blow-up (cusplike), see <cit.> for varioussimulations displaying the structure of a shock like blow-up (blow-upof the gradient with bounded sup-norm of the solution). Theoccurrence of such phenomena is rigorously proven in<cit.>for a class of fractional KdV equations and in <cit.> for theWhitham equation itself(see also the numerical simulations in<cit.>) but these phenomena have probably nothing to do with the breaking of real water waves. In fact, when one keeps the small parameter ϵ in the equation (that is not done in <cit.>), the blow-up should occur on time scales much larger than 1/ϵ, the time scale on which the Whitham equation is supposed to approximate the full water wave system via the KdV equation. We refer again to the simulations in Section 7.3.A KdV, long waveregime. In fact it is shown in<cit.>that(<ref>) possesses specific solitary waves, close tothose of KdV and formally stable. In this regime one can expect a KdV likebehavior, namely the soliton resolution, at least on sufficiently long time scales. Those solitary waves and their perturbations are investigated numerically in Section 7 of the present paper.One aim of the present paper is to give further evidenceof the relevance of those conjectures.The dynamics of the Whitham equation with surface tension (<ref>) should be different because of the different behavior of the dispersion at high frequencies. This makes the equation more dispersive and the expected dynamics is that of L^2 critical KdV type equations. In particular the (expected) finite time blow-up should be similar to that of the L^2 critical generalized KdV equationu_t+u^4u_x+u_xxx=0,or of the modified Benjamin-Ono equation (also L^2 critical)u_t+u^2u_x-ℋu_xx=0,where ℋ denotes the Hilbert transform.The rigorous analysis of blow-up for those equations can be found respectively in <cit.> and <cit.>.As aforementioned one aim of the present paper is togive someevidence for the above conjectures via mathematical analysis and mainly by careful numerical simulations.We will also givesome hints on the qualitative behavior of thefull-dispersion Boussinesq systems (<ref>), (<ref>).The paper is organized as follows. Ina first section we give the expected error estimates on the correct time scales,between the solutions of the Whithamand KdV equation for smoothinitial data. Together with the classical results on the KdV approximation of surface water waves (see <cit.>) this implies a rigorous justification of the Whitham equation in the Boussinesq-KdV regime.The next section concerns the Cauchy problem for the Whitham equation with surface tension. Contrary to(<ref>) dispersive estimates can be used here to enlarge the space of resolution to the Cauchy problem.We then comment on the local well-posedness of the Cauchy problem for the Boussinesq full dispersion systems (<ref>) and (<ref>)and of their capillary waves versions. 
We will see that the well-posedness of the system for pure gravity waves is obtained under a very restrictive, non physical condition (positivity of the wave elevation). When this condition is not satisfied the system is (Hadamard) ill-posed, and those facts invalidate it as a relevant model for water waves. The presence of surface tension on the other hand prevents the appearance of Hadamard unstable modes when the initial elevation is not positive, the possible unstable modes being bounded then. The two next sections review (and comment on) known results concerning finite time blow-up and solitary wave solutions. Finally the last two sections display many accurate numerical simulations aiming to illustrate and to detail various properties of the Whitham equations and systems, allowing to propose convincing conjectures on their dynamics. Section 7 is devoted to the Whitham equation. We first construct numerically the solitary waves of the Whitham equation without surface tension and simulate their perturbations for various values of the small parameter ϵ. Then we solve numerically the Cauchy problem for Gaussian initial data λexp(-x^2). Depending on the size of ϵ and λ a finite time blow-up may occur, at a time outside the physically relevant time scales O(1/ϵ) though. The situation is quite different when surface tension is included. Actually one observes a finite time blow-up very similar to the one of the L^2 critical KdV equation u_t+u^4u_x+u_xxx=0. In Section 8 we consider the one-dimensional Whitham-Boussinesq systems, with and without surface tension. We construct numerically solitary waves and study their stability. Then we simulate solutions of the Cauchy problem with initial data satisfying or not the well-posedness condition.

§ COMPARISON BETWEEN THE WHITHAM AND KDV EQUATIONS

We will compare here the solutions v and u of respectively the Whitham equation (<ref>) and the KdV equation (<ref>) with the same initial data v_0=u_0=ϕ∈ H^∞(ℝ). Here is the main result of this section.

Let ϕ∈ H^∞(ℝ). Then, for all j ∈ℕ, j ≥ 0, there exists M_j=M_j(ϕ_H^j+8)>0 such that
(u-v)(t)_H^j_x≤ M_j ϵ^2t,
for all 0 ≤ t ≲ϵ^-1. The implicit constant in the notation t ≲ϵ^-1 depends on ϕ_H^2 for j=0 and 1 and on ϕ_H^j+1^-1 for j≥ 2.

A similar result holds mutatis mutandis for the periodic problem since we use energy type methods. Recall that such a theorem was proved by Bona, Pritchard and Scott <cit.> for the comparison between BBM and KdV. We also refer to Albert and Bona <cit.> for other comparison results in the long wave regime.

§.§ The linear case

In this section, we compare the linear versions of (<ref>) and (<ref>), i.e. the Airy equation
∂_tu+∂_xu+ϵ/6∂_x^3u=0,
and the linear Whitham equation
∂_tv+l(√(ϵ)D)∂_xv=0,
where l(√(ϵ)D) denotes the Fourier multiplier of symbol l(√(ϵ)ξ) defined by
l(√(ϵ)ξ)=(tanh√(ϵ)ξ/√(ϵ)ξ)^1/2.
Then, we have the following result.

Let ϕ∈ H^∞(ℝ). Then, for all j ∈ℕ, j ≥ 0, there exists N_j=N_j(ϕ_H^j+7)>0 such that the solutions u of (<ref>) and v of (<ref>) associated to the same initial datum ϕ satisfy
∂_x^j(u-v)(t)_L^∞_x≤ N_jϵ^2(1+t),
for all t ∈ℝ_+.

Note that we could also obtain bounds for the differences of u and v in L^2 arguing as in the proof of Theorem <ref>. The following technical result, which compares the symbols of l(√(ϵ)D)∂_x and ∂_x+ϵ/6∂_x^3, will be needed below.

Assume that √(ϵ)|ξ| ≤ 1. Then,
|l(√(ϵ)ξ)ξ-(ξ-ϵ/6ξ^3)| ≲ϵ^2|ξ|^5.

The proof follows directly from the expansion
(tanh(x)/x)^1/2=1-x^2/6+𝒪(x^4), for |x|<1.
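As a quick sanity check of this lemma, one can evaluate the ratio |l(√(ϵ)ξ)ξ-(ξ-ϵξ^3/6)|/(ϵ^2|ξ|^5) numerically and verify that it stays bounded on the range √(ϵ)|ξ|≤1. The following Python snippet is a minimal illustration under an arbitrarily chosen value of ϵ (it is not part of the proof); the observed bound is governed by the next term of the expansion, 19x^4/360.

import numpy as np

eps = 0.05
xi = np.linspace(0.1, 1 / np.sqrt(eps), 10**4)        # range sqrt(eps)*|xi| <= 1
l = np.sqrt(np.tanh(np.sqrt(eps) * xi) / (np.sqrt(eps) * xi))
err = np.abs(l * xi - (xi - eps * xi**3 / 6))
print((err / (eps**2 * xi**5)).max())                 # bounded, about 19/360 ~ 0.053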
The solutions of (<ref>) and (<ref>) are respectively given by the unitary groupsu(x,t)=e^-t(∂_x+ϵ/6∂_x^3)ϕ(x)=∫_ℝ e^-it(ξ-ϵ/6ξ^3)e^ixξϕ(ξ) dξ,andv(x,t)=e^-tl(√(ϵ)D)∂_xϕ(x)=∫_ℝ e^-itl(√(ϵ)ξ)ξe^ixξϕ(ξ) dξ.Then, it follows from the mean value inequality and Lemma <ref>that |∂_x^j (u(x,t)-v(x,t)) | ≤∫_√(ϵ)|ξ|<1|e^-it(ξ-ϵ/6ξ^3)- e^-itl(√(ϵ)ξ)ξ ||ξ^jϕ(ξ)| dξ+2∫_√(ϵ)|ξ|>1|ξ^jϕ(ξ)|≲∫_√(ϵ)|ξ|<1tϵ^2|ξ|^j+5 |ϕ(ξ)| dξ+∫_√(ϵ)|ξ|>1ϵ^2|ξ|^j+4|ϕ(ξ)| . Then, we deduce from the Cauchy-Schwarz inequality that∂_x^j(u-v)(·,t)_L^∞_x ≲ϵ^2(1+t)ϕ_H^j+7 ,which concludes the proof of the theorem. Note thatfundamental solutions of both the linear Whitham and KdV equations have a quite different behavior.The KdV one is given by the Airy functionG_KdV(x,t)=C/(√(ϵ)t)^1/3Ai( x-t/(√(ϵ)t)^1/3). Set K(x,t)=1/2π∫_ e^ixξe^itξ(tanhξ/ξ)^1/2dξ. [Note that this function is different from the one used in <cit.> to study the properties of the solitary waves of the Whitham equation.]The fundamental solution of the Whitham equation is thusK_ϵ(x,t)=1/√(ϵ)K(x/√(ϵ),t/√(ϵ)).One can easily establish that contrary to the Airy function, K is an unbounded function. Here we follow the analysis in <cit.> in a different setting.Actually, we writeξ(tanh |ξ|/|ξ|)^1/2=( ξ)|ξ|^1/2(1-2/1+e^2|ξ|)^1/2=( ξ)|ξ|^1/2+r(ξ)where r is a continuous function exponentially decaying to zero at ±∞. Using the elementary identitye^iae^ib=(1+2isina/2e^ia/2)e^ib/2,one obtains the decompositionK(x,t)=∫_ e^it(ξ) i|ξ|^1/2e^ixξdξ+∫_ f_t(ξ)e^it(ξ) |ξ|^1/2e^ixξdξ=I_t^1(x)+I_t^2(x),wheref_t(ξ)=2isint r(ξ)/2e^itr(ξ)/2. Since f_t decays exponentially to zero when |ξ| →∞, Riemann-Lebesgue lemma implies that for every t>0, I_t^2 is a continuous function of x decaying to zero at infinity.On the other hand, following the analysis in Section 3 of <cit.> one can prove that I_t^1 decays algebraicallyto 0 when |x|→∞ for t fixed while for instance I_1^1(x)∼ |x|^-3/2exp (iξ/x) when x→ 0 for some non zero ξ.Although we will not use it, we recall for the sake of completeness a dispersive estimate derived in <cit.>, Theorem 2.5,on the free Whitham group (see also <cit.>, Lemma 2.4). Note the difference with the classical L^1-L^∞ estimates on the Airy group.We denote S_ϵ(t)=e^itl(√(ϵ)D)∂_x. There exists C>0 independent of ϵ such that for any ϕ∈𝒮(^2) the following estimates hold :1.|S_ϵ(t)ϕ)|_∞≤C(1/ϵ^1/4(1+t/√(ϵ))^1/8+1/(1+t/√(ϵ))^1/2)(|ϕ|_H^1+|x∂_xϕ|_2).2.|S_ϵ(t)ϕ)|_∞≤C(1/ϵ^3/4(1+t/√(ϵ))^1/3|ϕ|_L^1+1/(1+t/√(ϵ))^1/2(|ϕ|_H^1+|x∂ _xϕ|_2)).3. |S_ϵ(t)ϕ)|_∞≤C(1/ϵ^3/4(1+t/√(ϵ))^1/3|xϕ|_L^2+1/(1+t/√(ϵ))^1/2(|ϕ|_H^1+|x∂ _xϕ|_2)). §.§ A priori estimates on u and v It is well-known that the KdV equation is well-posed in H^∞(ℝ). Moreover, by using the complete integrability of KdV and in particular the fact that KdV possesses an infinite number of conserved quantities, one can get global bounds at the H^j-level for any j ≥ 0. We refer for example to Saut <cit.>, Bona and Smith <cit.> and Bona, Pritchard and Scott <cit.> and the references therein. Let ϕ∈ H^∞(ℝ). Then there exists a unique solution u ∈ C([0,+∞) : H^∞(ℝ)) to (<ref>) such that u(·,0)=ϕ. Moreover, the flow map data-solution ϕ↦ u is continuous from H^∞(ℝ) into C([0,+∞) : H^∞(ℝ)).Furthermore, the following bounds hold true. 
For every j ≥ 0, there exists C_j=C_j(ϕ_H^j) (note that the C_j can be chosen to be non-increasing functions of their arguments) such that
u(t)_H^j≤ C_j(ϕ_H^j), ∀ t ≥ 0.

Being a skew-adjoint perturbation of the Burgers equation, the Whitham equation is also trivially well-posed on H^∞(ℝ) (and also in H^s(ℝ) for s>3/2)[Similarly to the Burgers equation (see eg <cit.>), the Cauchy problem for the Whitham equation is expected to be ill-posed in H^3/2(ℝ) (see <cit.> for weaker results on related fractional KdV equations).] but on a time interval of length 1/ϵ.

Let ϕ∈ H^∞(ℝ). Then there exist a positive time T ∼ϵ^-1 and a unique solution v ∈ C([0,T] : H^∞(ℝ)) to (<ref>) such that v(·,0)=ϕ. Moreover, the flow map data-solution ϕ↦ v is continuous from H^∞(ℝ) into C([0,T] : H^∞(ℝ)). Furthermore, the following bounds hold true. For every j ∈ℕ, j ≥ 2,
v(t)_H^j≤ 2ϕ_H^j, ∀ 0 ≤ t ≲ϵ^-1.
Note that the implicit constant in (<ref>) depends on ϕ_H^j as in (<ref>).

We only explain how to prove (<ref>). Let v ∈ C([0,T] : H^∞(ℝ)) be a solution of (<ref>) with initial datum v(·,0)=ϕ. Let J^s denote the Bessel potential of order -s, i.e.
(J^sf)^∧(ξ)=(1+ξ^2)^s/2f(ξ).
Then, it follows from the Kato-Ponce commutator estimates <cit.> and integrations by parts that
d/dt J^sv_L^2^2 ≤ cϵ ∂_xv_L^∞ J^sv_L^2^2,
for any s>0. If s>3/2, we deduce from the Sobolev embedding H^s-1(ℝ) ↪ L^∞(ℝ) that
d/dt J^sv_L^2^2 ≤ cϵ J^sv_L^2^3.
Hence, we deduce from a classical ODE argument that
v(t)_H^s≤ 2ϕ_H^s, if 0 ≤ t ≤ (2cϵϕ_H^s)^-1.
This finishes the proof of Proposition <ref>.

Finally, we also need a bound on π_ϵ(D)u, where u is the solution of (<ref>) and π_ϵ(D) is the Fourier multiplier of symbol π_ϵ(ξ) defined by
π_ϵ(ξ)= i((ξ-ϵ/6ξ^3)-l(√(ϵ)ξ)ξ).

Let ϕ∈ H^∞(ℝ) and let u be the solution of (<ref>) evolving from ϕ obtained in Proposition <ref>. Then, for all j ∈ℕ, j ≥ 0, there exists A_j=A_j(ϕ_H^j+8)>0 such that
∂_x^jπ_ϵ(D)u(t)_L^2≤ A_jϵ^2,
for all 0 ≤ t ≤ϵ^-1.

We apply the operator ∂_x^jπ_ϵ(D) to (<ref>), multiply the equation by ∂_x^jπ_ϵ(D)u, integrate in the space variable over ℝ, integrate by parts and use the Cauchy-Schwarz inequality to deduce that
1/2 d/dt ∂_x^jπ_ϵ(D)u_L^2^2=-ϵ/2∫_ℝπ_ϵ(D)∂_x^j+1(u^2) π_ϵ(D)∂_x^ju dx ≲ϵ ∂_x^j+1π_ϵ(D)(u^2)_L^2 ∂_x^jπ_ϵ(D)u_L^2.

Now, we introduce a cut-off function η∈ C_0^∞(ℝ) such that supp(η) ⊂ [-1,1] and η=1 over [-1/2,1/2], and we define the Fourier multiplier P_<1/√(ϵ) of symbol η(√(ϵ)·) and P_≥1/√(ϵ)=1-P_<1/√(ϵ). Then, the triangle inequality yields
∂_x^j+1π_ϵ(D)(u^2)_L^2≤ P_<1/√(ϵ)∂_x^j+1π_ϵ(D)(u^2)_L^2 + P_≥1/√(ϵ)∂_x^j+1π_ϵ(D)(u^2)_L^2,
and we need to estimate both terms on the right-hand side of (<ref>). To control the first one, we use Plancherel's identity, Lemma <ref> and the fact that H^s(ℝ) is a Banach algebra for s>1/2 to deduce that
P_<1/√(ϵ)∂_x^j+1π_ϵ(D)(u^2)_L^2 ≲ϵ^2 u^2_H^j+6_x ≲ϵ^2 u_H^j+6^2,
which together with Proposition <ref> gives that
P_<1/√(ϵ)∂_x^j+1π_ϵ(D)(u^2)_L^2≲ϵ^2 C_j+6(ϕ_H^j+6)^2.
To control the second one, we use again Plancherel's identity to obtain that
P_≥1/√(ϵ)∂_x^j+1π_ϵ(D)(u^2)_L^2^2 ≲∫_|ξ| ≳1/√(ϵ) (1+|ξ|^2)^3|ξ|^2(j+1)|(u^2)^∧(ξ)|^2 dξ.
Now, observe that 1 ≲ϵ^4|ξ|^8 on the support of the integral, so that
P_≥1/√(ϵ)∂_x^j+1π_ϵ(D)(u^2)_L^2≲ϵ^2 u^2_H^j+8≲ϵ^2 u_H^j+8^2 ≲ϵ^2 C_j+8(ϕ_H^j+8)^2,
since H^j+8(ℝ) is a Banach algebra and where we used Proposition <ref> on the last inequality. Then, we deduce gathering (<ref>)–(<ref>) that
d/dt ∂_x^jπ_ϵ(D)u_L^2^2 ≲ϵ^3 C_j+8(ϕ_H^j+8)^2 ∂_x^jπ_ϵ(D)u_L^2.
Therefore, we deduce integrating between 0 and t that
∂_x^jπ_ϵ(D)u(t)_L^2≤∂_x^jπ_ϵ(D)ϕ_L^2+C_j+8(ϕ_H^j+8)^2 ϵ^3t.
Finally, arguing as above, we get from Lemma <ref> and Plancherel's identity that
∂_x^jπ_ϵ(D)ϕ_L^2 ≤ P_<1/√(ϵ)∂_x^jπ_ϵ(D)ϕ_L^2+P_≥1/√(ϵ)∂_x^jπ_ϵ(D)ϕ_L^2≲ϵ^2ϕ_H^j+5+ϵ^2ϕ_H^j+7.
Then, we deduce combining (<ref>) and (<ref>) that
∂_x^jπ_ϵ(D)u(t)_L^2≤ A_j(ϕ_H^j+8) ϵ^2(1+ϵ t) ≤ 2A_j(ϕ_H^j+8) ϵ^2,
as soon as 0 ≤ t ≤ 1/ϵ.

§.§ Proof of Theorem <ref>

Let z=u-v and j ∈ℕ, j ≥ 0. It is deduced from equations (<ref>) and (<ref>) that z solves the initial value problem
∂_tz+l(√(ϵ)D)∂_xz+π_ϵ(D)u+ϵ v∂_xz+ϵ z∂_xu=0, z(·,0)=0,
where π_ϵ(D) is the Fourier multiplier of symbol π_ϵ(ξ) defined in (<ref>). Note that the equation for z is well defined for 0<t ≲ 1/ϵ. Differentiate j times, multiply the equation (<ref>) by ∂_x^jz, integrate in the space variable over ℝ and integrate by parts to deduce that
1/2 d/dt∫_ℝ (∂_x^jz)^2dx = -∫_ℝ∂_x^j π_ϵ(D)u ∂_x^jz dx -ϵ∫_ℝ∂_x^j(v∂_xz) ∂_x^jz dx -ϵ∫_ℝ∂_x^j(z∂_xu)∂_x^j z dx.
Thus, the Cauchy-Schwarz inequality, the Leibniz rule and integration by parts yield
d/dt ∂_x^jz_L^2^2 ≲∂_x^jπ_ϵ(D)u_L^2 ∂_x^jz_L^2+ϵ(∂_xu_L^∞+∂_xv_L^∞)∂_x^jz_L^2^2
in the cases j=0 and j=1, and
d/dt ∂_x^jz_L^2^2 ≲∂_x^jπ_ϵ(D)u_L^2 ∂_x^jz_L^2+ϵ∑_k=1^j(∂_x^ku_L^∞+∂_x^kv_L^∞)z_H^j^2
in the cases where j ≥ 2. Now, on the one hand, we get by using the Sobolev embedding and Propositions <ref> and <ref> that
∑_k=1^j(∂_x^ku(·,t)_L^∞+∂_x^kv(·,t)_L^∞) ≲ u(·,t)_H^κ+v(·,t)_H^κ≲ K_1(ϕ_H^κ)=:K_1,
for all 0 ≤ t ≲ϵ^-1 (where the implicit constant depends on the ·_H^κ norm of ϕ as explained in the proof of Proposition <ref>). Here, we used the notation κ=2 for j=0 or 1 and κ=j+1 for j ≥ 2. On the other hand, by applying Proposition <ref>, it follows that
∂_x^jπ_ϵ(D)u_L^2≤ϵ^2 A_j+8(ϕ_H^j+8)=:ϵ^2 K_2,
for all 0 ≤ t ≤ϵ^-1. Therefore, we deduce gathering (<ref>), (<ref>), (<ref>) and using Gronwall's inequality that
z(t)_H^j≤ K_2 (e^K_1ϵ t-1)/(K_1ϵ) ϵ^2 ≤ K_2 e^K_1 ϵ^2 t,
whenever 0≤ t ≲ϵ^-1, which concludes the proof of Theorem <ref>.

A similar comparison result can be established between the Whitham equation with surface tension (<ref>) and the KdV equation (<ref>) by using the expansion
(1+β x^2)^1/2(tanh x/x)^1/2=1-(x^2/6)(1-3β)+O(x^4).

§ THE CAUCHY PROBLEM FOR THE CAPILLARY WHITHAM EQUATION

As was already noticed, the Cauchy problem for the Whitham equations (<ref>) and (<ref>) is trivially well-posed in H^s(ℝ), s>3/2, on time scales of order ϵ^-1. This result can be improved (by enlarging the space of resolution) for the capillary Whitham equation (<ref>) by using its dispersive properties.
Actually one gets, see <cit.> where general fractional KdV (fKdV) equations are considered:

Assume that s>21/16. Then, for every u_0 ∈ H^s(ℝ), there exist a positive time T_ϵ=T_ϵ(u_0_H^s)=O(1/√(ϵ)) (which can be chosen as a non-increasing function of its argument) and a unique solution u to (<ref>) satisfying u(·,0)=u_0 such that
u ∈ C([0,T_ϵ]:H^s(ℝ)) and ∂_xu ∈ L^1([0,T_ϵ]:L^∞(ℝ)).
Moreover, for any 0<T'<T_ϵ, there exists a neighborhood 𝒰 of u_0 in H^s(ℝ) such that the flow map data-solution
S^s_T': 𝒰⟶ C([0,T'];H^s(ℝ)), u_0 ⟼ u,
is continuous.

1. Since the value α=1/2 is L^2 critical for the fKdV equation, we conjecture that the Cauchy problem for (<ref>) is globally well-posed for initial data in the energy space H^1/4(ℝ) having a sufficiently small L^2 norm.
2. This result was recently improved in <cit.> where the local well-posedness is obtained for s>9/8.
3. For large enough initial data, one expects a finite time blow-up similar to that of the L^2-critical generalized KdV equation or to the cubic Benjamin-Ono equation, proven respectively in <cit.> and <cit.>, as displayed in the simulations of Section 7.

Using the same energy methods one can prove a result similar to that of Theorem <ref> between solutions of (<ref>) and (<ref>).

§ THE CAUCHY PROBLEM FOR THE BOUSSINESQ-WHITHAM SYSTEMS

In both cases (without or with surface tension) the Boussinesq-Whitham systems are linearly well-posed since the linearized systems write, say in dimension one,
∂_t[ η; u ]+∂_x A[ η; u ]=0,
where the Fourier transform of the matrix operator A has real eigenvalues
λ_±(ξ)=±(tanh√(ϵ)|ξ|/√(ϵ)|ξ|)^1/2
in absence of surface tension and
λ_±(ξ)=±(1+ϵβ|ξ|^2)^1/2(tanh√(ϵ)|ξ|/√(ϵ)|ξ|)^1/2
in presence of surface tension. The theory of the Cauchy problem for the nonlinear system without surface tension is relatively straightforward and we focus on the case of space dimension one. Since the operator ℒ_ϵ^2 ∂_x has order zero, (<ref>) is an order zero perturbation of a first order system; more precisely it writes
∂_t U+𝒜_ϵ(U)∂_x U+ℋ_ϵ U=0,
where
U=[ η; u ], 𝒜_ϵ(U)=[ ϵ u, ϵη; 1, ϵ u ], ℋ_ϵ U=[ ℒ_ϵ^2 ∂_x u; 0 ].
The matrix 𝒜_ϵ(U) has eigenvalues λ satisfying
(ϵ u-λ)^2=ϵη,
so that the system ∂_t U+𝒜_ϵ(U)∂_x U=0 is hyperbolic in the regions where η>0, and Hadamard ill-posed if this condition is violated. Actually, under the condition
η≥ C_0>0,
the system can be symmetrized via the positive definite symmetrizer
S(U)=[ 1, 0; 0, ϵη ].
The same process can be applied to (<ref>), yielding a symmetric hyperbolic system perturbed by the order zero operator ℋ_ϵ, namely
S(U)U_t+ϵ[ u, η; η, ϵ uη ] U_x+ℋ_ϵ U=0.
Setting ζ=η+N_0, where N_0>0 is a fixed constant, the standard theory of symmetrizable hyperbolic systems (see eg <cit.>) implies the local well-posedness of the Cauchy problem for (ζ,u) ∈ H^s(ℝ)× H^s(ℝ), s>3/2, for initial data ζ_0 such that η_0 is sufficiently small. Similar arguments in the two-dimensional case yield well-posedness in H^s(ℝ^2)× H^s(ℝ^2), s>2. Note that the condition (<ref>) implies that the wave is always of elevation, which seems to be the case for solitary wave solutions (see Section 7). On the other hand, condition (<ref>) implies that the wave cannot tend to zero at infinity, and thus the perturbations of the solitary wave solutions are excluded from the range of well-posed initial data.
Those facts are not physically realistic, invalidating the Boussinesq-Whitham system as a relevant water waves model. We do not know of a local well-posedness result for (<ref>) or (<ref>) under the assumption (<ref>). To illustrate the (Hadamard) instabilities occurring when condition (<ref>) is violated, let us consider the linearization around (η,u)=(-c,0) where c is a positive constant. The linearized system has eigenvalues λ_± where
λ_±=± iξ(tanh√(ϵ)ξ/√(ϵ)ξ-cϵ)^1/2.
All modes are unstable when ϵ>1/c, while when ϵ<1/c all modes corresponding to |ξ|≥ x_c,ϵ/√(ϵ) are unstable, where x_c,ϵ is the unique positive solution of
tanh x/x=cϵ.
Note that x_c,ϵ→∞ as ϵ→0. Thus the set of stable modes gets larger and larger when, for a fixed c, the small parameter ϵ tends to 0. When the nonlocal term is removed, all modes are unstable, for any c>0. The effect of dispersion is thus here to create a range of linearly stable modes when c<1, getting larger and larger when ϵ→0. Things are different for Boussinesq-Whitham systems in presence of surface tension since dispersive effects play a significant role here. In fact, surface tension prevents the appearance of (Hadamard) unstable modes for the linearized system at (η,u)=(-c,0). Actually the linearized system eigenvalues are now
λ_±=± iξ((1+βϵξ^2)tanh√(ϵ)ξ/√(ϵ)ξ-cϵ)^1/2.
Whatever the values of c and ϵ, the possible unstable modes are bounded and there are no more Hadamard instabilities. The effect of surface tension is thus to suppress the Hadamard instabilities. A convenient way to get an idea of the nature of the full system is to derive an equivalent system by diagonalizing the linear part of the system. More precisely, we define
A(ξ)=iξ[ 0, (1+βϵξ^2)tanh√(ϵ)|ξ|/√(ϵ)|ξ|; 1, 0 ],
the Fourier transform of the dispersion matrix, with eigenvalues ± iξ(1+βϵξ^2)^1/2(tanh√(ϵ)|ξ|/√(ϵ)|ξ|)^1/2. Setting
U=[ η; u ] and W=[ ζ; v ]=P^-1U, P^-1=1/2[ ℒ̃_ϵ^-1, 1; ℒ̃_ϵ^-1, -1 ], P=[ ℒ̃_ϵ, ℒ̃_ϵ; 1, -1 ],
the linear part of the system is diagonalized as
W_t+∂_xDW=0, where D=[ ℒ̃_ϵ, 0; 0, -ℒ̃_ϵ ],
that is, two dispersive equations of order 1/2. In the W variable the complete system writes now
∂_t W+∂_x DW+ϵ N_ϵ(W)=0,
where
N_ϵ(W)=[ ℒ_ϵ^-1[(ζ-v)ℒ_ϵ∂_x(ζ+v)+ℒ_ϵ(ζ+v)∂_x(ζ-v)]+(ζ-v)∂_x(ζ-v); ℒ_ϵ^-1[(ζ-v)ℒ_ϵ∂_x(ζ+v)+ℒ_ϵ(ζ+v)∂_x(ζ-v)]-(ζ-v)∂_x(ζ-v) ].
While the linear part is of order 1/2, the nonlinear terms are formally of order 1. However the structure of the nonlocal nonlinear terms might prevent the implementation of the methods used in <cit.> for fractional type KdV equations to obtain the local well-posedness of the Cauchy problem on time scales of order 1/√(ϵ). On the other hand one might think of proving the local well-posedness by using an elementary energy method, reminiscent of the one used in <cit.> for the (-1,0,0,0) Boussinesq system (see <cit.>, Theorem 3.1), or a symmetrization technique as in <cit.>, Theorem 4.5. However those methods do not seem to work since a control of the H^s× H^s+1/2, s>1/2, norm on (η,u) is not enough to control the nonlinear terms (in the case of the (-1,0,0,0) Boussinesq system one gets a control on the H^s× H^s+1 norm).
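The threshold x_c,ϵ is easy to compute numerically. The following Python sketch (purely illustrative, with arbitrarily chosen values of c and ϵ) solves tanh x/x = cϵ by bracketing and confirms that the band of linearly stable modes |ξ|<x_c,ϵ/√(ϵ) widens as ϵ→0.

import numpy as np
from scipy.optimize import brentq

def x_c(c, eps):
    # unique positive root of tanh(x)/x = c*eps (tanh(x)/x decreases from 1 to 0),
    # which exists whenever c*eps < 1
    return brentq(lambda x: np.tanh(x) / x - c * eps, 1e-8, 1e8)

c = 0.5
for eps in [0.5, 0.1, 0.01]:
    xc = x_c(c, eps)
    print(eps, xc, xc / np.sqrt(eps))   # the stable band |xi| < xc/sqrt(eps) grows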
On the other hand those works consider the Whitham equation (<ref>) with ϵ=1, thus they do not address the long-wave, KdV limit. In particular the crucial dependence of the blow-up time with respect to ϵ is not obtained. Note that this blow-up time should be larger than 1/ϵ, the time scale on which the KdV and Whitham equations are close and asymptotic models for the propagation of long, weakly nonlinear surface water waves for which no singularities are expected. Note also that a quite different finite time blow-up is expected for the Whitham equation with surface tension (<ref>) that cannot be anymore viewed as a "weak" dispersive perturbation of the Burgers equation. In fact, as was already noticed, it is reminiscent for large frequencies of the L^2 critical fractional KdV equation
u_t+u_x+uu_x-D^1/2u_x=0,
for which a finite time blow-up à la Martel-Merle <cit.> is expected (but not yet proven), see the numerical simulations in <cit.>.

§ SOLITARY WAVES

§.§ Whitham without surface tension

We will focus here on non periodic solitary wave solutions of the Whitham equation (<ref>), that is solutions of the form u(x-ct). We refer to <cit.> for interesting (theoretical and numerical) studies on periodic traveling waves. In particular the existence of a global bifurcation branch of 2π-periodic smooth traveling wave solutions is established in <cit.>. The existence of solitary wave solutions to the Whitham equation (<ref>) decaying to zero at infinity and close to the KdV soliton has been proven in <cit.> by variational methods. Symmetry and decay properties of Whitham solitary waves are established in <cit.>. The analysis in <cit.> is made on (<ref>) with ϵ=1 under the scaling u(x)=ϵ^α w(ϵ^β x) where 2α-β=1, so that 1/2∫_ℝ u^2=ϵ. An interesting issue is that of the transverse stability of the Whitham solitary wave in the framework of the full dispersion KP equation (<ref>). Considering the similar problem for the usual KP I/II equations (see <cit.> and the references therein) one can conjecture that the Whitham solitary wave is transversally stable when the surface tension parameter β is less than 1/3 and unstable otherwise. We intend to go back to this issue in a subsequent paper. We do not know of any rigorous result on the existence of solitary wave solutions to the full dispersion Boussinesq system. The following computations give some evidence for the existence of solitary wave solutions "close" to the KdV soliton. Recall that the system reads
η_t+T^2_ϵ u_x+ϵ(η u)_x=0, u_t+η_x+ϵ/2(u^2)_x=0,
where T̂_ϵ=(tanh(k√(ϵ))/(k√(ϵ)))^1/2. In the formal limit ϵ→0, the system reduces to the wave equation. We are here interested in localized travelling wave solutions to the system (<ref>). Putting u(x,t)=U(x-ct) and η(x,t)=N(x-ct), we find after integration
-cN+T^2_ϵ U+ϵ NU=0, N=cU-ϵ/2U^2.
Eliminating N from the first equation of (<ref>) via the second, we find
(T^2_ϵ-c^2)U+3ϵ c/2 U^2-ϵ^2/2 U^3=0.
Writing c=1+αϵ with α>0, we get, by performing a formal expansion in ϵ and neglecting terms of order ϵ or higher in (<ref>),
1/3 U''-2α U+3/2 U^2=0,
which gives after integration
(U')^2=6α U^2-3U^3,
which has the KdV soliton
U=2α sech^2(√(3α/2)(x-ct))
as a solution. Note that for a solution (N,U) with a given velocity c of the above equations, (-N,U) provides a solution for the same system with velocity -c. Similarly, a solitary wave u(x,t)=U(x-ct) of the Whitham equation (<ref>) satisfies the equation
ϵ/2U^2+(T_ϵ-c)U=0.
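The sech^2 profile obtained above can be verified symbolically; the following snippet (a verification aid, not taken from the paper) checks that U=2α sech^2(√(3α/2) x) indeed satisfies (U')^2=6αU^2-3U^3.

import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
U = 2 * alpha * sp.sech(sp.sqrt(sp.Rational(3, 2) * alpha) * x) ** 2
residual = sp.diff(U, x) ** 2 - (6 * alpha * U ** 2 - 3 * U ** 3)
print(sp.simplify(residual.rewrite(sp.exp)))   # prints 0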
Following the method used in <cit.> one can prove that solutions of (<ref>) that tend to 0 as |x|→∞ decay to 0 exponentially, in the sense that for some ν>0, e^ν|·|ϕ∈ L^1(ℝ)∩ L^∞(ℝ). Writing c=1+δϵ where δ>0 is a constant independent of ϵ, we get, neglecting terms of order ϵ or higher in the formal expansion in ϵ,
-1/2U^2-1/6U''+δ U=0,
which gives after integration
(U')^2=-2U^3+6δ U^2.
Thus we find again the KdV soliton
U=3δ sech^2(√(3δ/2)(x-ct)).
The above formal considerations suggest that one could obtain the existence of solitary wave solutions to both the Whitham equation and the Whitham system by perturbation arguments starting from the KdV soliton. In particular, in the case of the Whitham equation this would give an alternative proof to the one in <cit.>.

§.§ Whitham with surface tension

The situation is quite different for the Whitham equation with surface tension (<ref>). A solitary wave solution u(x,t)=U(x-ct) of (<ref>) should satisfy the equation
ϵ/2U^2+(T̃_ϵ-c)U=0,
where T̃_ϵ is defined by (<ref>). The existence of localized non trivial solutions of (<ref>) results from Theorem 2.1 in <cit.>. Since (<ref>) (as the fKdV equation with α=1/2) is L^2 critical, one expects the instability of those solitary waves by blow-up, similarly to the generalized KdV equation with p=4 (see <cit.>) or to the modified Benjamin-Ono equation (<cit.>). On the other hand, a solitary wave solution (η(x-ct),u(x-ct)) of the full-dispersion Boussinesq system should satisfy
-cη+𝒫_ϵ u+ϵη u=0, -cu+η+ϵ/2 u^2=0,
where
𝒫_ϵ=(I+βϵ|D|^2)(tanh(√(ϵ)|D|)/√(ϵ)|D|).
Eliminating η via the second equation in (<ref>) yields the single equation
-c^2 u+𝒫_ϵ u+3ϵ c/2 u^2-ϵ^2/2 u^3=0.
One cannot apply directly the result of <cit.> to prove the existence of non trivial solutions to (<ref>) since the nonlinear term is not homogeneous. We plan to come back to this issue in a next paper.

§ NUMERICAL SIMULATIONS FOR THE WHITHAM EQUATION

The simulations in this section will illustrate several aspects of the dynamics of the Whitham equation alluded to in the previous sections and will also give evidence for some facts not yet rigorously proven.

§.§ Numerical construction of solitary waves for the Whitham equation

In this subsection we construct numerically solitary wave solutions to the Whitham equations, i.e., localized solutions of (<ref>). To this end we use the same technique as in <cit.>: since the wanted solutions are expected to be rapidly decreasing, it is convenient to formulate the problem as an essentially periodic one with a sufficiently large period such that the solutions and the numerically interesting derivatives vanish with numerical accuracy at the domain boundaries ± Lπ (we typically choose L=5 in the following). The Fourier transforms are approximated in the standard way via discrete Fourier transforms, conveniently implemented via the Fast Fourier transform (FFT). With this approach equation (<ref>) is approximated via a finite dimensional (we use N Fourier modes) nonlinear equation system formally written as
ℱ(Û)=0,
where Û denotes the discrete Fourier transform of U. This nonlinear system is solved with a Newton-Krylov iteration. This means the action of the inverse of the Jacobian of ℱ on ℱ in the standard Newton iteration is determined iteratively via GMRES <cit.>. As the initial iterate we choose the KdV soliton U=3δ sech^2(√(3δ/2)(x-ct)). Note that the reality of U has to be enforced during the iteration. As an example we study the case ϵ=0.01 and vary the constant δ.
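A compact realization of this construction might look as follows in Python; this is a minimal sketch under assumed parameter choices (ϵ=0.01, δ=2, L=5, N=2^12), with SciPy's newton_krylov standing in for the Newton-GMRES iteration described above.

import numpy as np
from scipy.optimize import newton_krylov

eps, delta = 0.01, 2.0
c = 1.0 + delta * eps                          # velocity of the travelling wave
L, N = 5, 2 ** 12                              # domain [-L*pi, L*pi), N Fourier modes
x = L * np.pi * (2.0 * np.arange(N) / N - 1.0)
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi * L / N)

# Whitham multiplier (tanh(sqrt(eps)|k|)/(sqrt(eps)|k|))^(1/2), equal to 1 at k=0
T = np.ones(N)
nz = np.abs(k) > 0
s = np.sqrt(eps) * np.abs(k[nz])
T[nz] = np.sqrt(np.tanh(s) / s)

def F(U):
    # residual of the solitary wave equation eps/2 U^2 + (T_eps - c) U = 0
    return 0.5 * eps * U ** 2 + np.real(np.fft.ifft(T * np.fft.fft(U))) - c * U

U0 = 3 * delta / np.cosh(np.sqrt(1.5 * delta) * x) ** 2  # KdV soliton as initial iterate
U = newton_krylov(F, U0, method='gmres', f_tol=1e-11)
print(np.abs(F(U)).max())

For larger velocities one would continue in δ as described below, reusing the converged solution for a slightly smaller δ as the initial iterate.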
This is mainly done in order to have for small δ an initial iterate close to the Whitham soliton. For larger δ, we use as an initial iterate the solution for a slightly smaller value of δ constructed before. Since we vary δ, the choice of ϵ is, however, no restriction for the general case since the only important quantity is the velocity c. It is to be expected that there will be solitons for c∼1 because of the analogy to KdV, but that for larger values of c the dispersion of the Whitham equation becomes, in contrast to the KdV equation, too weak to compensate the nonlinearity. Thus we expect that in contrast to KdV there might be an upper limit of c for solitons of the Whitham equation. In fact we obtain the situation shown in Fig. <ref>. For c=1.01, 1.02, 1.05, 1.1, 1.2 we use N=2^14 Fourier modes, for c=1.22 N=2^16. It can be seen that the maxima of the solutions grow as expected, but that the solitons become more peaked and more compressed compared to the corresponding KdV case. In fact we did not succeed in constructing solitons for c much greater than 1.22. The failure of an iteration to converge obviously does not imply that there will no longer be a soliton, it just means that the numerical approach can no longer be used. However the fact that the iteration did not converge even for larger resolutions, for initial iterates being the numerical solution for a slightly smaller δ, and for an iteration with relaxation is a strong indication that there might not be Whitham solitary waves for much larger velocities. There are additional differences between Whitham and KdV solitons. The mass (the L^2 norm) of the KdV soliton (<ref>) is given by M=4√(6)δ^3/2, the mass of the Whitham solitons can be seen on the left of Fig. <ref>. As expected the mass is identical for small δ to the one of the KdV soliton. But for larger speeds c, the mass grows more slowly. The energy of the KdV soliton (<ref>) is proportional to δ^5/2, the energy of the Whitham soliton can be seen on the right of Fig. <ref>. The energy curve appears to flatten near the reachable maximal velocities.

§.§ Numerical study of perturbed Whitham solitons

In this subsection, we study the time evolution of perturbations of the Whitham solitons constructed above. To this end we write the Whitham equation for the Fourier transform in a commoving frame, i.e.,
û_t+(ϵ/2)ik(u^2)^∧+(T̂_ϵ-c)ikû=0,
and approximate the Fourier transform via a discrete Fourier transform as before. For the time integration we use as in <cit.> an implicit Runge-Kutta method of fourth order with a fixed point iteration, see <cit.> for details. The accuracy of the solution is controlled via the Fourier coefficients, which should decrease exponentially to machine precision (we work here in double precision which allows a maximal accuracy of 10^-16) for smooth functions, and via the conservation of the energy
E=∫_ℝ[1/2(√(T_ϵ-c)u)^2-ϵ/6u^3]dx.
Due to unavoidable numerical errors, this energy does numerically evolve with time, but it thus provides an estimate of the numerical error. As shown in <cit.>, conserved quantities typically overestimate the L^∞ error (the maximum of the difference between numerical and exact solution) by 1-2 orders of magnitude. As a first test of the code, we take the numerically constructed soliton with ϵ=0.01 and c=1.2 of the previous subsection as initial data. In the used commoving frame, the solitons should correspond to a stationary solution.
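The implicit fourth-order Runge-Kutta scheme itself is not reproduced here; as a lightweight stand-in, the Fourier coefficients can be evolved with an off-the-shelf adaptive integrator. The sketch below (again illustrative only) reuses eps, c, k, T and the computed soliton U from the previous snippet and checks that the soliton is stationary in the commoving frame.

from scipy.integrate import solve_ivp

def rhs(t, v):
    # real/imaginary packing of u_hat; in the commoving frame the equation reads
    # u_hat_t = -ik [ eps/2 (u^2)_hat + (T_eps - c) u_hat ]
    uh = v[:N] + 1j * v[N:]
    u = np.real(np.fft.ifft(uh))
    duh = -1j * k * (0.5 * eps * np.fft.fft(u ** 2) + (T - c) * uh)
    return np.concatenate([duh.real, duh.imag])

uh0 = np.fft.fft(U)
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([uh0.real, uh0.imag]),
                method='RK45', rtol=1e-10, atol=1e-12)
uT = np.real(np.fft.ifft(sol.y[:N, -1] + 1j * sol.y[N:, -1]))
print(np.abs(uT - U).max())   # should stay at the level of the solver tolerances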
We use N_t=10^4 time steps for t∈[0,10] and find that the difference between the evolved solution and the initial data is of the order of 10^-13, the order of the error with which the equation (<ref>) had been solved in the previous section. The numerically computed relative energy 1-E(t)/E(0) is of the order of 10^-15.5. We first consider as initial data the soliton with ϵ=0.01, c=1.2 plus a Gaussian perturbation of the order of 1% of the maximum of the soliton. We use N_t=2*10^4 time steps for t∈[0,20]. The Fourier coefficients in this case decrease to the order of 10^-14 during the whole computation, and the relative energy is conserved to the order better than 10^-14. In this case there appears a slightly faster soliton travelling to the right in the commoving frame, as can be seen in Fig. <ref>. But in addition to radiation travelling to the left, a slower soliton travelling to the left in the frame commoving with the unperturbed soliton seems to emanate from the initial data. This is even more visible in the plot on the right of Fig. <ref> where a close-up of the smaller soliton is shown for t=20. The soliton appears to be stable, the perturbation leads to a slightly larger soliton and possibly a smaller soliton plus radiation. The situation changes if the same soliton as in Fig. <ref> is considered, but this time with a perturbation 3exp(-x^2) corresponding to roughly 4% of the maximum of the soliton. We use N_t=10^4 time steps for t∈[0,8]. In this case the solution appears to develop a cusp. We show in Fig. <ref> the solution and the modulus of its Fourier coefficients at t=7.52. It can be seen that the solution clearly runs out of resolution in Fourier space; already at t∼7, the Fourier coefficients decrease only to 10^-2. In Fig. <ref> it can be seen that the L^∞ norm of the solution grows clearly beyond the initial data, but it is not clear whether it diverges. The same behavior can be seen for the L^2 norm of u_x on the right of the same figure. A fit of the Fourier coefficients as in <cit.> reveals that a singularity appears to approach the real axis in the complex x-plane. The idea of this approach is that an essential singularity of the form (x-x_s)^μ, μ∉ℤ, in the complex plane for a function u leads to an asymptotic behavior of the Fourier transform
|û(k)| ∝ 1/k^μ+1 e^-δ k, k ≫ 1,
where δ=Im x_s. For t=7.547, this singularity seems to hit the real axis (δ→0), indicating the formation of a cusp (the fitted coefficient μ of the essential singularity is positive as in <cit.>). Thus there seems to be a hyperbolic blow-up in this case. The soliton for c=1.2 appears therefore to be unstable against blow-up in the form of a cusp for sufficient size of the perturbation. Since the solitons of the Whitham equation are rapidly decreasing, it is possible to study soliton interactions. In Fig. <ref>, we show the solitons with c=1.05 and c=1.1, the latter being centered at x=-4 by simply multiplying its Fourier coefficients by e^4ik. The sum of the solutions gives two-soliton initial data since each soliton vanishes with numerical precision where the other takes values above the numerical error. On the left of Fig. <ref> we show the solution to the Whitham equation for t∈[0,100]. The modulus of the Fourier coefficients of the solution decreases to 10^-12 during the whole computation.
It canbe seen that the soliton interaction resembles the KdV two-soliton:the solitons have almost the same shape after the collision, there isjust a phase shift (note that the computation is performed in a framecommoving with the smaller soliton at c=1.05). The close-up of thesolution at the final time on the right ofFig. <ref> reveals, however, that this is not anexact two-soliton since there is dispersive radiation propagating tothe left. This indicates that the Whitham equation is as expected notintegrable.§.§ Numerical study of Gaussian initial data for different valuesof ϵIn this section we study numerically the time evolution of Gaussianinitial data u(x,0)=10exp(-x^2)for different values of the parameter ϵ. Since we areinterested in studying the solutions on time scales of order1/ϵ, weintroduce the rescaled time τ=tϵ in which the Whithamequation (<ref>) readsu_τ+ uu_x+1/ϵ(T_ϵ-1)u_x=0 ,where we have useda frame commoving with velocity 1. Thisequation is solved with the same numerical approach as in theprevious subsection. For ϵ=1 we use N=2^14 Fourier modes and N_t=10^4time steps for τ∈[0,0.2]. The solution appears to develop a cuspfor τ>0.1175 as can be seen in Fig. <ref> on the left.The Fourier coefficients on the right of the same figure indicatea loss of resolution. A fitting of the Fourier coefficients to(<ref>) indicatesthat for τ_c=0.1175, the parameter δ indicates the distancebetween a singularity in the complex x-plane vanishes, and thecorresponding parameter μ∼0.3635 indicating the formation of acusp proportional to (x-x_s)^1/3.Note that the critical time τ_c above does not indicate theexact blow-up time, this just indicates that the singularity in thecomplex plane leading to the asymptotic behavior (<ref>) of theFourier coefficients is too close to the axis to be numericallydistinguished from 0. If one considers the same initial data as inFig. <ref> for smaller values of ϵ, oneobtains the critical times and exponents shown in the table in Fig.<ref>. It can be seen that the critical timesτ_c are always greater than the τ_c forϵ=1. In fact the critical times appear to grow withdecreasing ϵ. This implies, however, that the solutionsdo not have a blow-up on time scales of order 1/ϵ(recall that τ=tϵ).For values of ϵ smaller than 0.04, there does not appearto be a blow-up at all. In Fig. <ref> we show thesolution to the same initial data as in Fig. <ref> forϵ=0.01. We use N=2^12 Fourier modes for x∈L[-π,π] with L=10 and N_t=10^4 time steps forτ∈[0,2]. It can be seen that two stable solitons appear whichgives support to the soliton resolution conjecture in the context ofthe Whitham equation: stable solitons seem to appear in the long timebehavior of the solution. In addition there is the usual dispersiveradiation also known from KdV solutions.The L^∞ norm ofthe solution on the right ofFig. <ref> also seems to correspond to a soliton withspeed greater than 1 (we are in a commoving frame with c=1). Neither theFourier coefficients nor the L^2 norm ofu_x indicate theformation of a singularity. The latter again indicatesthe appearance of a soliton.The solution in Fig.<ref> for small ϵbecomes obviously closer to the solution to the KdV equationu_τ+uu_x+u_xxx/6, which can be obtained from the Whithamequation in the formal limit ϵ→0 for the same initialdata. 
This KdV solution is shown in Fig. <ref>.

§.§ Numerical study of the Whitham equation with surface tension

In this subsection we will study the Whitham equation with surface tension (<ref>) for similar situations as above for the Whitham equation without surface tension: solitons, their stability and general initial data in the Schwartz class. The goal is mainly to highlight differences with respect to the case without surface tension. Arnesen <cit.> showed that solitons exist in the case of non-vanishing surface tension for all values of ϵ. For small ϵ, a similar calculation as for the Whitham equation in (<ref>) leads to the result that the Whitham soliton should be in some sense close to the KdV soliton given by
U=3δ sech^2(√(3δ/(2(1-3β)))(x-ct)).
This means that for β<1/3, the solitons should have positive amplitude, whereas they are depression waves (negative amplitude) for β>1/3. Numerically we construct the solutions as before. Since the behavior for the high wave numbers of the operator ℒ̂_ϵ (<ref>) is different (linear in |k|) than for the case β=0 (constant in |k|), it is useful to divide by ℒ̂_ϵ in the equation for the solitary wave in Fourier space. This is straightforward because the operator is diagonal in Fourier space. For small ϵ and δ the solitons are close to the KdV soliton. All figures are created with N=2^12 Fourier modes. We first address the case β=0.1 and δ=2 which can be seen on the left of Fig. <ref>. The solitary wave for the Whitham equation with surface tension is slightly more peaked than the KdV soliton in red in this case. If we propagate the exact solitary wave in a commoving frame by the time evolution code (as in <cit.> it is useful to apply a simplified Newton iteration instead of a fixed point iteration), the difference between numerical solution and initial data for N_t=10^4 time steps for τ<1 is of the order of the numerical error for the solitary wave (10^-12). If we perturb this solution slightly by considering the initial data u(x,0)=0.99U(x,c=1.02), i.e., initial data close to the solitary wave with a slightly smaller mass, one can see on the right of Fig. <ref> that the soliton is dispersed to infinity. In contrast to the case without surface tension, there are no visible oscillations, the initial pulse just gets broader and decreases in height. If we consider for the same values of ϵ and β as in Fig. <ref> the initial data u(x,0)=1.01U(x,c=1.02), i.e., initial data in the vicinity of the solitary wave with slightly larger mass, one can see in Fig. <ref> that the solution has an L^∞ blow-up in finite time which is clear from the right figure. In fact the divergence of the L^∞ norm after τ=5.96, for which the solution is shown on the left, is so rapid that the fitted distance (<ref>) changes from positive to negative values too rapidly to obtain a sensible result for μ (the Fourier coefficients deteriorate too rapidly). Thus one finds the same behavior for almost solitary initial data as for the L^2 critical gKdV equation <cit.>: exact solitary initial data are not affected, initial data with smaller mass will be dispersed to infinity, and initial data with larger mass lead to an L^∞ blow-up. For β=1 and δ=-20 the soliton has negative amplitude and can be seen on the left of Fig. <ref>. The solitary wave for the Whitham equation with surface tension is again slightly more peaked than the KdV soliton in red. If we perturb this solution by considering the initial data u(x,0)=0.99U(x,c=0.8), one can see on the right of Fig.
<ref> that thesoliton is once more dispersed to infinity, this time to the right,but again without visible oscillations.If we consider for the same values of ϵ and β as inFig. <ref> the initial data u(x,0)=1.01U(x,c=0.8),one can see in Fig. <ref> that thesolution has again an L^∞ blow-up in finite time which is clearfrom the right figure. The divergence of the L^∞norm after τ=4.75, for which the solution is shown on the left, isso rapid that the iteration stops converging. Once more one finds thesame behavior for almost solitary initial data as for the L^2critical gKdV equation. To address the case of general initial data in the Schwartz class, weconsider the same initial data as in the case β=0 in Fig. <ref>. Thenumerical results indicate that there is a hyperbolic blow-up in thiscase. In Fig. <ref> we show in contrast the correspondingsituation for β=1. We use N=2^14 Fourier modes onx∈[-2π,2π] and N_t=50000 time steps for τ∈[0,0.2].For small times, the dynamics ofBurgers' equation is again dominant leading to a steepening of theright front towards the formation of a shock. Close to a potentialgradient catastrophe, the stronger dispersion (compared to theWhitham equation with β=0) takes over and appears to generate adispersive shock as known from the KdV equation. However, as becomesclear from the right figure in Fig. <ref>,instead of a dispersive shock wave an L^∞ blow-up as forgeneralized KdV equations is observed, see e.g. <cit.> andreferences therein. A fitting of the Fourier coefficients inFig. <ref> according to (<ref>)indicates indeed that a singularity in the complex plane approachesthe real axis for τ≈ 0.1648. The critical exponent is foundto be μ=-0.5518 which confirms an L^∞ blow-up. This is inaccordance with the L^∞ norm of the solution shown inFig. <ref> on the right. § NUMERICAL SIMULATIONS FOR THE BOUSSINESQ SYSTEMIn this section we numerically construct solitary waves for theBoussinesq system and study their stability as well as solutions tomore general initialdata. §.§ Numerical construction of solitary waves for theBoussinesq system As in the case of solitary waves for the Whitham equation, weconstruct the solitary waves by solving equation (<ref>)with FFT techniques in Fourier space with a Newton-GMRES iteration.We choose again ϵ=0.01 and use the KdV soliton (<ref>) as initialiterate for small α. For larger values of α (whichimplies larger values of the speed c=1+αϵ), we use the numerical solution to(<ref>) for a slightly smaller α as an initialiterate. The iterations are carried out on the intervalx∈[-5π,5π] with N=2^14 Fourier modes. For c=1.16 we use2^16 Fourier modes, but evenincreasing c in smallincrements, we do not observe convergence of the iteration. Again thisdoes not prove the non-existence of solitary waves to the system(<ref>) at higher speeds, but is an indication that there mightbe an upper limit to the speed of the travelling waves for thisequation. In Fig. <ref>, we show on the left the function U fordifferent values of c. It can be seen that with increasing speed,the solitons become again more peaked and localized. Note that allshown solutions are numerically well resolved in the sense that themodulus of the Fourier coefficients decreases to machine precisionfor large wave numbers. On the right of Fig. <ref> we showthe corresponding functions N computed for given U via(<ref>). Due to the term proportional to ϵ in(<ref>), the functions N are less peaked than thecorresponding function U for large c. 
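In the same illustrative spirit as the Whitham sketch in Section 7, the residual of the travelling wave equation (T_ϵ^2-c^2)U+(3ϵ c/2)U^2-(ϵ^2/2)U^3=0 can be coded in a few lines, reusing the grid x, the multiplier T, the parameters eps, delta, c and the solver from that snippet (a sketch, not the authors' code):

# squared Whitham multiplier, i.e. the symbol tanh(sqrt(eps)|k|)/(sqrt(eps)|k|)
T2 = T ** 2

def F_sys(U):
    T2U = np.real(np.fft.ifft(T2 * np.fft.fft(U)))
    return T2U - c ** 2 * U + 1.5 * eps * c * U ** 2 - 0.5 * eps ** 2 * U ** 3

U0 = 2 * delta / np.cosh(np.sqrt(1.5 * delta) * x) ** 2   # KdV seed with alpha = delta
Usys = newton_krylov(F_sys, U0, method='gmres', f_tol=1e-11)
Nsys = c * Usys - 0.5 * eps * Usys ** 2                   # recover the elevation N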
§.§ Numerical study of perturbed solitary wavesIn this section we study numerically solutions to the Cauchy problem for theBoussinesq system (<ref>) for several examples. We use the sameapproach as for the Whitham equation, a Fourier spectral method inx and an implicit Runge-Kutta method of fourth order with a fixpoint iteration in t. Since the system can be seen as a dispersiveregularization of the ill-posed Kaup system and since it is ill posedfor negative η, Krasny filtering <cit.> has to be usedin order to stabilize the code as in the case of for instance thefocusing nonlinear Schrödinger equation in the semiclassical limit,see e.g. <cit.>; this means the Fourier coefficients with amodulus smaller than some threshold, typically 10^-12, are putequal to zero. The accuracy of the solution is controlledvia the decrease of the modulus of the Fourier coefficients for largewave numbers and the conservation of the numerically computed energy. If we choose the numerically constructed soliton solution asinitial data in a commoving frame, the solution is for c=1.05numerically evolved for t∈[0,2] with an accuracy of 10^-14(L^∞ norm of the difference between numerical solution andinitial data). This shows the accuracy both of the time evolutioncode and the code for the solitons. Note that a rescaling of the timeas in (<ref>) is not straight forward for the Boussinesqsystem. Therefore we will always consider the time t in thissection.Using initialdata of the form u(x,0)=U(x)+exp(-x^2), η(x,0)=N(x), onegets the solution shown in Fig. <ref>. It can beseen that a slightly larger soliton emerges, and that the remainingenergy is radiated to the left.As in the case of the Whitham equation inFig. <ref>, the rapid decrease of the solitons tothe Boussinesq system allows to study numerically solitoninteractions. We consider as initial data the solitons with c=1.05and c=1.1, the latter being centered at x=-3. The solution tothe Boussinesq system for these initial data can be seen inFig. <ref>. Visibly both u and ηshow the behavior of the KdV two-soliton. Note that the solution iscomputed in a frame commoving with c=1.05.However the solution at t=100 in Fig. <ref> showsupon closer inspection, in particular the close-up on the right, thatthere is dispersive radiation propagating to the left. Thus despite asimilarity to the KdV two-solitons, the Boussinesq systems does notappear to be completely integrable. §.§ Numerical study of the Boussinesq system for more generalinitial data In Fig. <ref> we show the solution to the Boussinesqsystem (<ref> for the initial u(x,0)=0 andη(x,0)=10exp(-x^2). The solution breaks at t=0.4115since the fit of the Fourier coefficients to (<ref>) indicatesthat a singularity in the complex plane hits the real axis. Thefitted coefficient μ=0.345 implies the formation of a cusp. Thesolution at the critical time is shown in Fig. <ref>.The behavior is similar to the cusp formation in solutions to thesemiclassical NLS equation, see e.g. <cit.>.For ϵ=0.1 and the same initial data, the solution breaks att=2.2262 with μ=.423, i.e., again a cusp as shown inFig. <ref>. But this time the behavior isdifferent from Fig. <ref>: the solution follows for acertain time the underlying wave equations and two humps are formingfrom the initial hump. But dispersion is too weak to overcome thenonlinearity and cusps form in finite time. Note that there is noclear scaling for the blow-up time in dependence of ϵ heresince the mechanisms for the blow-up appear to be different. 
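The singularity tracking used throughout these experiments amounts to a linear least-squares fit of log|û(k)| against log k and k, following the asymptotics |û(k)| ∝ k^-(μ+1)e^-δ k. A minimal version (illustrative; the arguments kpos and uhat stand for the positive wavenumbers and the corresponding computed Fourier coefficients) reads:

import numpy as np

def fit_singularity(kpos, uhat):
    # fit log|u_hat| = A - (mu+1) log(k) - delta*k on the positive wavenumbers;
    # delta -> 0 signals a singularity x_s reaching the real axis, mu its type
    y = np.log(np.abs(uhat))
    M = np.column_stack([np.ones_like(kpos), np.log(kpos), kpos])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    return -coef[1] - 1.0, -coef[2]    # (mu, delta = Im x_s)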
For even smaller values of ϵ, the solution for the same initial data appears to be global in time. As can be seen in Fig. <ref> for ϵ=0.01, just two `solitons' appear to emerge from the initial hump. For sufficiently small ϵ, the system appears to be close to the wave equation.

For negative η, the system (<ref>) is ill-posed. For the numerical studies, this implies that the Krasny filter must be used at a higher level (we choose 10^-10) in order to control numerical instabilities. If we consider the initial data u(x,0)=0, η(x,0)=-10exp(-x^2), i.e., the same data which led to Fig. <ref> except for the sign of η, we find that the solution has a blow-up at t=0.17, as shown in Fig. <ref>. The fitting of the Fourier coefficients according to (<ref>) yields μ=0.2029, i.e., again a cusp. For the same initial data, but ϵ=0.1, a blow-up very similar to Fig. <ref> is observed at t=0.579 with μ=1.15. In the case of even smaller ϵ=0.01, the solution appears to be again close to the solution of the wave equation, as in Fig. <ref>.

The effects of the ill-posedness of the system for negative initial data are more visible in the presence of oscillations. In Fig. <ref> we show the solution to the system for the initial data u(x,0)=0 and η(x,0)=sin(10x)e^-x^2. The code breaks at t=0.0898, since a singularity in the complex plane appears to hit the real axis. The fitting of the Fourier coefficients according to (<ref>) is not conclusive (μ=0.0619) as to whether this is a cusp or a pole.

§.§ Numerical study of the Boussinesq system with surface tension

If we study the same initial data as in the previous subsection for the Boussinesq system with surface tension, the effects of the stronger dispersion are clearly visible. The initial data of Fig. <ref> lead, in the presence of surface tension with β=1, to the solution in Fig. <ref>. The solution appears to exist for all times; instead of a shock, a dispersive shock wave is observed. For negative η(x,0), on the other hand, the situation of Fig. <ref> is not really changed. Again a blow-up appears in finite time despite a surface tension of β=1, as can be seen in Fig. <ref>. The code breaks at t=0.2568 with μ=-0.05 after fitting the Fourier coefficients according to (<ref>).

§ CONCLUSION

We have tried in this paper to answer some key questions on the Whitham equation and related systems. As far as the modeling of water waves is concerned, the Whitham equation is relevant in the KdV (Boussinesq) regime, and there it is probably not better than the KdV equation itself (indeed, it is never used in realistic water-wave modeling). On the other hand, because of its dispersion relation, which behaves drastically differently for large and small frequencies, it displays a fascinating variety of dynamical behaviors, most of them exhibited in numerical simulations, that deserve further mathematical investigation.

The Boussinesq-Whitham systems, however, appear to have almost no interest for the modeling of water waves, since the local Cauchy problem (in the absence of surface tension) can be solved only for initial data on the elevation that satisfy a rather unphysical condition. Their mathematical interest is also limited, since their long-wave limit is an ill-posed Boussinesq system. The Cauchy problem for the system with surface tension, while having similar shortcomings for modeling water waves, leads however to an interesting open mathematical question.

The authors were partially supported by the Brazilian-French program in mathematics and the MathAmSud program. J.-C. S.
acknowledges support from the project ANR-GEODISP of the Agence Nationale de la Recherche. F.L. and D.P. were partially supported by CNPq and FAPERJ/Brazil. J.-C. S. thanks David Lannes for useful discussions related to this work.

§ REFERENCES

[AMP] P. Acevez-Sanchez, A.A. Minzoni and P. Panayotaros, Numerical study of a nonlocal model for water-waves with variable depth, Wave Motion, 50 (2013), 80-93.
[AlBo] J.P. Albert and J.L. Bona, Comparisons between model equations for long waves, J. Nonlinear Sci., 1 (1991), 345-374.
[AlBoSa] J. Albert, J.L. Bona and J.-C. Saut, Model equations for waves in stratified fluids, Proc. Royal Soc. London A, 453 (1997), 1233-1260.
[ABM] A. Ambrose, J.L. Bona and T. Milgrom, Global solutions and ill-posedness for the Kaup system and related Boussinesq systems, preprint (2017).
[Ang] J. Angulo Pava, Stability properties of solitary waves for fractional KdV and BBM equations, arXiv:1701.06221 [math.AP].
[Ar] M.A. Arnesen, Existence of solitary-wave solutions to nonlocal equations, Disc. Cont. Dyn. Syst. A, 36 (7) (2016), 3483-3510.
[BCS1] J.L. Bona, M. Chen and J.-C. Saut, Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media I: Derivation and the linear theory, J. Nonlinear Sci., 12 (2002), 283-318.
[BoPrSc] J.L. Bona, W.G. Pritchard and L.R. Scott, A comparison of solutions of two model equations for long waves, Lect. Appl. Math., 20 (1983), 235-267.
[BS] J.L. Bona and J.-C. Saut, Dispersive blow-up II. Schrödinger-type equations, optical and oceanic rogue waves, Chin. Ann. Math. Series B, 31 (6) (2010), 793-810.
[BoSm] J.L. Bona and R. Smith, The initial value problem for the Korteweg-de Vries equation, Philos. Trans. R. Soc. Lond., Ser. A, 278 (1975), 555-601.
[BMcR] J.L. Bona, W.R. McKinney and J.M. Restrepo, Stable and unstable solitary-wave solutions of the generalized long-wave equation, J. Nonlinear Sci., 10 (2000), 603-608.
[BKN] H. Borluk, H. Kalisch and D.P. Nicholls, A numerical study of the Whitham equation for steady surface water waves, J. Comput. Appl. Math., 296 (2016), 293-302.
[BEP] G. Bruell, M. Ehrnström and L. Pei, Symmetry and decay of traveling wave solutions to the Whitham equation, J. Diff. Eq., 262 (2017), 4232-4254.
[Ca] J.D. Carter, Bidirectional Whitham equations as models of waves in shallow water, arXiv:1705.06503 [physics.flu-dyn].
[CG] J.D. Carter and D. George, The Whitham equation as a model of water waves, preprint (2016).
[DMDK] E. Dinvay, D. Moldabayev, D. Dutykh and H. Kalisch, The Whitham equation with surface tension, preprint (2016).
[DGK] B. Dubrovin, T. Grava and C. Klein, Numerical study of breakup in generalized Korteweg-de Vries and Kawahara equations, SIAM J. Appl. Math., 71 (2011), 983-1008.
[DKM] D. Dutykh, H. Kalisch and D. Moldabayev, The Whitham equation as a model for surface waves, Physica D, 309 (2015), 99-107.
[EhEsPe] M. Ehrnström, J. Escher and L. Pei, A note on the local well-posedness for the Whitham equation, in Elliptic and Parabolic Equations, vol. 119 of Springer Proc. Math. Stat., Springer, Cham, 2015, 63-75.
[EhGr] M. Ehrnström and M. Groves, Solitary wave solutions to the full dispersion Kadomtsev-Petviashvili equation, to appear.
[EGW] M. Ehrnström, M.D. Groves and E. Wahlén, On the existence and stability of solitary-wave solutions to a class of evolution equations of Whitham type, Nonlinearity, 25 (2012), 2903-2936.
[EK] M. Ehrnström and H. Kalisch, Global bifurcation for the Whitham equation, Math. Mod. Nat. Phenomena, 8 (2013), 13-30.
[EK2] M. Ehrnström and H. Kalisch, Traveling waves for the Whitham equation, Diff. and Int. Eq., 29 (11-12) (2009), 1193-1210.
[EW] M. Ehrnström and E. Wahlén, On Whitham's conjecture of a highest cusped wave for a nonlocal dispersive equation, arXiv:1602.05384 [math.AP].
[FL] R.L. Frank and E. Lenzmann, On the uniqueness and non-degeneracy of ground states of (-Δ)^s Q + Q - Q^{α+1} = 0 in ℝ, Acta Math., 210 (2) (2013), 261-318.
[Hu] V. Hur, Breaking in the Whitham equation for shallow water waves, arXiv:1506.04075 [math.AP].
[Hu2] V. Hur, Norm inflation for equations of KdV type with fractional dispersion, arXiv:1701.03354 [math.AP].
[HJ] V. Hur and M.A. Johnson, Modulational instability in the Whitham equation for water waves, Studies in Appl. Math., 134 (2014), 120-143.
[Hu-Pa] V. Hur and A.K. Pandey, Modulational instability in a full-dispersion shallow water model, arXiv:1608.04685 [math.AP].
[Hu-Tao] V. Hur and L. Tao, Wave breaking for the Whitham equation with fractional dispersion, Nonlinearity, 27 (2014), 2937-2949.
[Hu-Tao-II] V. Hur and L. Tao, Wave breaking in a shallow water model, arXiv:1608.04681 [math.AP].
[Ka] D.J. Kaup, A higher-order water-wave equation and the method for solving it, Progr. Theoret. Phys., 54 (1975), 396-408.
[KaPo] T. Kato and G. Ponce, Commutator estimates and the Euler and Navier-Stokes equations, Comm. Pure Appl. Math., 41 (1988), 891-907.
[etna] C. Klein, Fourth order time-stepping for low dispersion Korteweg-de Vries and nonlinear Schrödinger equations, ETNA, 29 (2008), 116-135.
[KP] C. Klein and R. Peter, Numerical study of blow-up in solutions to generalized Kadomtsev-Petviashvili equations, Discr. Cont. Dyn. Syst. B, 19 (6) (2014), 1689-1717.
[KP2] C. Klein and R. Peter, Numerical study of blow-up in solutions to generalized Korteweg-de Vries equations, Physica D, 304-305 (2015), 52-78.
[KS] C. Klein and J.-C. Saut, A numerical approach to blow-up issues for dispersive perturbations of Burgers equation, Physica D, 295-296 (2015), 46-65.
[krasny] R. Krasny, A study of singularity formation in a vortex sheet by the point-vortex approximation, J. Fluid Mech., 167 (1986), 65-93.
[Ku] B.A. Kupperschmidt, Mathematics of dispersive water waves, Commun. Math. Phys., 99 (1985), 51-73.
[La1] D. Lannes, Water Waves: Mathematical Theory and Asymptotics, Mathematical Surveys and Monographs, vol. 188, AMS, Providence, 2013.
[LS] D. Lannes and J.-C. Saut, Remarks on the full dispersion Kadomtsev-Petviashvili equation, Kinetic and Related Models, 6 (4) (2013), 989-1009.
[LPS] F. Linares, D. Pilod and J.-C. Saut, Remarks on the orbital stability of ground state solutions of fKdV and related equations, Advances Diff. Eq., 20 (9/10) (2015), 835-858.
[LPS2] F. Linares, D. Pilod and J.-C. Saut, Dispersive perturbations of Burgers and hyperbolic equations I: local theory, SIAM J. Math. Anal., 46 (2014), 1505-1537.
[LPS3] F. Linares, D. Pilod and J.-C. Saut, The Cauchy problem for the fractionary Kadomtsev-Petviashvili equations, arXiv:1705.09744 [math.AP].
[Ma] A. Majda, Incompressible Fluid Flow and Systems of Conservation Laws in Several Space Variables, Springer, Heidelberg, 1984.
[MM] Y. Martel and F. Merle, Blow up in finite time and dynamics of blow up solutions for the L^2-critical generalized KdV equation, J. Amer. Math. Soc., 15 (3) (2002), 617-664.
[MMR] Y. Martel, F. Merle and P. Raphaël, Blow up for the critical gKdV equation I: dynamics near the solitary wave, Acta Math., 212 (1) (2014), 59-140.
[MP] Y. Martel and D. Pilod, Construction of a minimal mass blow up solution of the modified Benjamin-Ono equation, preprint (2016), to appear in Math. Ann., arXiv:1605.01837.
[Me] B. Mélinand, A mathematical study of meteo and landslide tsunamis: the Proudman resonance, Nonlinearity, 28 (2015), 4037-4080.
[Mes] B. Mésognon-Gireau, A dispersive estimate for the linearized water-waves equations in finite depth, J. Math. Fluid Mech. (2016), doi:10.1007/s00021-016-0286-1.
[MPV] L. Molinet, D. Pilod and S. Vento, On well-posedness for some dispersive perturbations of Burgers' equation, arXiv:1702.03191 [math.AP].
[MST] L. Molinet, J.-C. Saut and N. Tzvetkov, Ill-posedness issues for the Benjamin-Ono and related equations, SIAM J. Math. Anal., 33 (4) (2001), 982-988.
[RK] F. Remonato and H. Kalisch, Numerical bifurcation for the capillary Whitham equation, Physica D, 343 (2017), 51-62.
[GMRES] Y. Saad and M. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (3) (1986), 856-869.
[SKCK] N. Sanford, K. Kodama, J.C. Carter and H. Kalisch, Stability of traveling wave solutions to the Whitham equation, Physics Letters A, 378 (2014), 2100-2107.
[RT] F. Rousset and N. Tzvetkov, Transverse nonlinear instability for two-dimensional dispersive models, Annales Inst. H. Poincaré Anal. Non Lin., 26 (2009), 477-496.
[Sa] J.-C. Saut, Sur quelques généralisations de l'équation de Korteweg-de Vries, J. Math. Pures Appl., 58 (1979), 21-61.
[SWX] J.-C. Saut, Chao Wang and Li Xu, The Cauchy problem on large time for surface waves Boussinesq systems II, arXiv:1511.08824 [math.AP], SIAM J. Math. Anal., to appear.
[SSF] C. Sulem, P. Sulem and H. Frisch, Tracing complex singularities with spectral methods, J. Comp. Phys., 50 (1983), 138-161.
[Whi] G.B. Whitham, Variational methods and applications to water waves, Proc. R. Soc. Lond. Ser. A, 299 (1967), 6-25.
[Za] V.E. Zakharov, Weakly nonlinear waves on the surface of an ideal finite depth fluid, Amer. Math. Soc. Transl., 182 (2) (1998), 167-197.
Source: arXiv:1706.08712v1 [math.AP], "On Whitham and related equations", C. Klein, F. Linares, D. Pilod and J.-C. Saut (27 June 2017).
^1Departamento de Física, Universidad de Santiago de Chile (USACH), Av. Ecuador 3493, 9170124 Santiago, Chile
^2Center for the Development of Nanoscience and Nanotechnology (CEDENNA), 9170124 Santiago, Chile

In this letter we calculate, by means of analytical calculations, the energies corresponding to the different magnetic phases present in a ferromagnetic cylinder. From the comparison of these energies, it is possible to construct magnetic phase diagrams as a function of the uniaxial anisotropy of the sample and the applied external magnetic field. As a proof of concept, we analyze the magnetic phase diagrams for a cobalt dot of 240 nm in diameter and 70 nm in length, with an easy axis parallel to the dot axis, and with a magnetic field applied along or perpendicular to this axis. From these diagrams we have obtained the stability regions for a Bloch skyrmion (Sk), a vortex core (VC) and a ferromagnetic (F) configuration, which can point in any direction ψ. Our results provide a pathway to engineer the formation and controllability of a skyrmion in a ferromagnetic dot for different anisotropy constants and magnetic fields.

Analytical and numerical K_u - B phase diagrams for cobalt nanostructures: stability region for a Bloch skyrmion
J. Escrig^1,2

§ INTRODUCTION

Skyrme was the first to describe baryons as topological defects of continuous fields <cit.>. Since then, skyrmions have been found in various systems, such as ferroelectrics <cit.>, liquid crystals <cit.>, and magnetic materials <cit.>, among others. For example, topological magnetic skyrmions <cit.> have been observed in several bulk <cit.> and thin-film <cit.> systems, and have been proposed for potential applications in non-volatile magnetic memories <cit.>, because the spin-texture topology protects the skyrmions from scattering with structural defects, allowing them to be moved by small current densities and opening a new paradigm for the manipulation of magnetization at the nanoscale <cit.>. Besides, skyrmions exhibit emergent electromagnetic phenomena, such as the topological Hall effect and the skyrmion Hall effect <cit.>, and have been proposed as information carriers in novel magnetic sensors and spin-logic devices <cit.>.

Isolated skyrmions confined in cylindrical nanostructures <cit.> are considered promising for implementation in information storage and processing devices at the nanoscale <cit.>. In these nanostructures, both the Dzyaloshinskii-Moriya interaction (DMI) and magnetic anisotropy are required to stabilize a Néel skyrmion (NS) <cit.>, whose magnetic profile has a component in the radial direction, so they cannot be seen in conventional ferromagnetic materials (Co, Ni, etc.). On the other hand, Bloch skyrmions (BS), which have no magnetic component in the radial direction, can be stabilized in the absence of DMI, provided there is a magnetic anisotropy <cit.>. These systems show potential for room-temperature control of skyrmions <cit.>. In this letter, we are interested in obtaining analytical expressions for the energies of different magnetic configurations (ferromagnetic pointing in any direction, vortex core, and Bloch skyrmion without DMI) in a cobalt nanodot, which allow us to generate magnetic phase diagrams with regions of stability for each configuration as a function of the uniaxial anisotropy and the external magnetic field.
In addition, we carry out micromagnetic simulations for some particular cases, in order to support the theoretical model used.

§ ANALYTICAL MODEL

We adopt a simplified description of the system, where the discrete distribution of the magnetic moments is replaced with a continuous one characterized by a slow variation of the magnetization M⃗(r⃗) = M_0 m̂(r⃗) <cit.>, whose direction is given by the unit vector m̂(r⃗), while M_0 corresponds to the saturation magnetization. Due to the cylindrical symmetry of the nanoparticle, it is convenient to write the magnetization vector as m̂(r⃗) = m_r(r⃗) r̂ + m_ϕ(r⃗) ϕ̂ + m_z(r⃗) ẑ, where r̂, ϕ̂ and ẑ are the unit vectors of cylindrical coordinates. We consider a cylindrical nanoparticle of radius R and length L which exhibits a uniaxial anisotropy whose easy axis is parallel to the symmetry axis of the particle (chosen as the z-axis), and which is under the action of an external magnetic field B⃗ applied at an angle θ with respect to the z-axis, as shown in Fig. 1a. The total energy for this nanoparticle is given by

E = ∫_V [ -K_u m_z^2 + (μ_0/2) M_0 m⃗·∇⃗U_d + A ∑_{i=x,y,z} (∇⃗m_i)^2 - M_0 m⃗·B⃗ ] dV,    (1)

where the first, second, third and fourth terms correspond to the uniaxial anisotropy, the dipolar energy, the exchange energy and the Zeeman energy, respectively. Here K_u, A and μ_0 are the anisotropy constant, the exchange stiffness constant and the magnetic permeability, respectively, while U_d is the well-known magnetostatic potential, defined as <cit.>

4π U_d(r⃗) = ∮_S G(r⃗,r⃗') n̂·M⃗(r⃗') dS' - ∫_V G(r⃗,r⃗') ∇⃗'·M⃗(r⃗') dV',

with G(r⃗,r⃗') = |r⃗ - r⃗'|^-1 being the Green function. In the previous definition of U_d(r⃗), the first integral is over the surface of the nanoparticle, while the second is over its volume.

We are interested in obtaining phase diagrams showing the stability regions for the following magnetic configurations: skyrmion (Sk), ferromagnetic (F) and vortex core (VC).

§.§ Skyrmion configuration (Sk)

For the description of a skyrmion configuration we have used the Ritz model proposed in <cit.>:

m_z^(Sk)(r) = [1 - (r/R_s)^n] / [1 + (r/R_s)^n],    (2)

where R_s is the radius of the skyrmion and n is a positive even integer, n = 2, 4, 6, 8, ⋯. It is important to mention that the in-plane components of the magnetization are given by m_r = 0 and m_ϕ = √(1 - m_z^2). As an example, in Fig. 1c we show the profile of the magnetization of a skyrmion, in a nanoparticle of R = 120 nm, obtained from Eq. 2 for n = 10 and R_s = 90 nm. The m_z component is shown as a color density plot, while the in-plane component is represented by arrows.

§.§ Ferromagnetic configuration (F)

As we consider a competition between the uniaxial anisotropy (which favors the magnetization pointing along the z-axis) and the external magnetic field, which forms an angle θ with respect to the z-axis, as shown in Fig. 1a, we use a ferromagnetic configuration whose direction is allowed to point at an angle ψ with respect to the z-axis, as shown in Fig. 1b:

m̂^(F) = cosψ ẑ + sinψ r̂.    (3)

§.§ Vortex core configuration (VC)

Finally, we have also considered a vortex core configuration, for which we use the Ritz model previously investigated in <cit.>:

m_z^(VC)(r) = [1 - (r/R_v)^2]^2 for 0 ≤ r ≤ R_v, and 0 otherwise,    (4)

where m_r = 0 and m_ϕ = √(1 - m_z^2), while R_v corresponds to the core size. As an example, in Fig. 1d we show the profile of the magnetization of a vortex core, in a nanoparticle of R = 120 nm, obtained from Eq. 4 for R_v = 90 nm.
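For concreteness, the snippet below evaluates the trial profiles of Eqs. (2) and (4), together with the in-plane component implied by the unit-vector constraint; the parameter values are the ones quoted for Figs. 1c and 1d.

```python
import numpy as np

R = 120.0                        # dot radius in nm, as in Fig. 1
r = np.linspace(0.0, R, 500)

def mz_skyrmion(r, R_s=90.0, n=10):
    """Bloch-skyrmion ansatz of Eq. (2)."""
    t = (r / R_s) ** n
    return (1.0 - t) / (1.0 + t)

def mz_vortex(r, R_v=90.0):
    """Vortex-core ansatz of Eq. (4); m_z vanishes outside the core."""
    return np.where(r <= R_v, (1.0 - (r / R_v) ** 2) ** 2, 0.0)

def m_phi(m_z):
    """In-plane component from |m| = 1 with m_r = 0."""
    return np.sqrt(np.clip(1.0 - m_z ** 2, 0.0, None))

mz_sk, mz_vc = mz_skyrmion(r), mz_vortex(r)
```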
To obtain the minimum-energy configuration for a given set of geometric (R and L) and magnetic (A, M_0, K_u and B⃗) parameters, we calculate the energy of each magnetic configuration, for which we substitute the corresponding ansatz (Eqs. 2, 3 and 4) into Eq. 1 and minimize with respect to R_s, ψ and R_v, respectively. In the case of the skyrmion, we have to choose a value of n, for which we performed an analysis similar to the one in <cit.>, obtaining that n = 10 is a reliable value to correctly describe a skyrmion state.

§ MICROMAGNETIC SIMULATIONS

In order to validate the analytical calculations, we have investigated the minimum-energy configuration of a cobalt dot of radius R=120 nm and length L=70 nm by micromagnetic simulations <cit.>. In this article we consider an out-of-plane magnetic anisotropy, which is generally obtained when cobalt is deposited on a platinum or palladium substrate <cit.>. We used a saturation magnetization M_0=1.4×10^6 A/m, an exchange stiffness A=3×10^-11 J/m and a Gilbert damping constant α=0.5. In addition, we used four possible initial magnetic configurations (skyrmion, vortex core, ferromagnetic out of plane and ferromagnetic in plane), which are allowed to relax for each pair of B and K_u values; we then compare the total energies of the different final configurations to which the system evolves. In order to obtain the results in a reasonable time, the discretization cell was 3×3×5 nm^3. The phase diagrams shown in Section IV were obtained using the analytical equations of Section II; nevertheless, some points of these diagrams were also obtained through micromagnetic simulations.

§ RESULTS AND PHASE DIAGRAMS

The equations presented in Section II are quite general and allow one to investigate a magnetic dot with different geometric and magnetic parameters. As an example, and by comparing the energy curves for the different magnetic configurations, we have obtained the K_u - B phase diagrams for the studied cobalt nanostructures (M_0=1.4×10^6 A/m and A=3×10^-11 J/m) of radius R=120 nm and length L=70 nm, in the range 0 ≤ K_u ≤ 2 MJ/m^3 and 0 ≤ B ≤ 0.5 T, for both θ = 90^∘ and θ = 0^∘; these are shown in Figs. 2 and 3, respectively. In these phase diagrams we have called F_0-30, F_30-60 and F_60-90 the ferromagnetic configurations with 0^∘ ≤ ψ < 30^∘, 30^∘ ≤ ψ < 60^∘ and 60^∘ ≤ ψ ≤ 90^∘, respectively. The transition lines between two magnetic configurations were obtained analytically using steps of 0.01 T and 0.1 MJ/m^3 for B and K_u, respectively.

As can be seen from Fig. 2, when the magnetic field is applied in-plane, for values of K_u < 0.5 MJ/m^3 and B < 0.17 T the cobalt nanostructure presents a VC configuration, whereas if the uniaxial anisotropy constant increases, a Sk phase appears, which is stable even for magnetic fields close to 0.35 T. On the other hand, due to the competition between the uniaxial anisotropy (which favors a magnetization parallel to the z-axis) and the external magnetic field (which favors a magnetization perpendicular to the z-axis), a stable ferromagnetic phase appears whose magnetization points in the whole range of angles, that is, 0^∘ ≤ ψ ≤ 90^∘. In Fig. 3, in turn, we analyze the situation when the magnetic field points in the same direction as the uniaxial anisotropy (z-axis). In this case, although the VC phase is still present only for low K_u values, it is now stable for the entire range of magnetic fields investigated.
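Returning to the minimization described at the start of this section, the sketch below carries it out for the skyrmion ansatz alone, keeping only the exchange, anisotropy and Zeeman terms; for the Bloch profile (m_r = 0, m_ϕ = √(1−m_z²)) the exchange density reduces to A[(m_z')²/(1−m_z²) + (1−m_z²)/r²]. The dipolar contribution, which is essential for the actual diagrams, has no elementary closed form and is omitted here, so this illustrates the procedure rather than reproducing Figs. 2 and 3; the K_u and B values are arbitrary example points.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Geometry and cobalt parameters from the text (SI units).
R, Lz = 120e-9, 70e-9            # radius and length
A_ex = 3e-11                     # exchange stiffness, J/m
M0 = 1.4e6                       # saturation magnetization, A/m
K_u = 1.0e6                      # anisotropy, J/m^3 (example point)
B = 0.1                          # field along +z, T (example point)

n = 10
r = np.linspace(1e-3 * R, R, 4000)   # avoid r = 0 in the 1/r^2 term
dr = r[1] - r[0]

def skyrmion_energy(R_s):
    t = (r / R_s) ** n
    m_z = (1.0 - t) / (1.0 + t)
    dm_z = np.gradient(m_z, r)
    s2 = np.clip(1.0 - m_z ** 2, 1e-12, None)        # = m_phi^2
    exch = A_ex * (dm_z ** 2 / s2 + s2 / r ** 2)     # sum_i (grad m_i)^2
    dens = exch - K_u * m_z ** 2 - M0 * B * m_z      # dipolar term omitted
    return Lz * np.sum(dens * 2.0 * np.pi * r) * dr

best = minimize_scalar(skyrmion_energy, bounds=(0.1 * R, 0.99 * R),
                       method='bounded')
print('optimal R_s = %.1f nm' % (best.x * 1e9))
```

The same loop, repeated over a grid of (K_u, B) values and over the three ansätze, is what produces the transition lines of the phase diagrams.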
If we increase the value of K_u, a fairly extensive region appears where the Sk phase is stable, covering the entire range of magnetic fields investigated. It is important to mention that when θ = 0^∘, the only surviving ferromagnetic phase is ψ = 0^∘. From both figures, we can conclude that for K_u = 0 the Sk phase is not stable, regardless of whether the magnetic field is applied at θ = 0^∘ or at θ = 90^∘.

The insets of Figs. 2 and 3 show the behavior of the skyrmion radius as a function of the uniaxial anisotropy constant K_u, for different values of B, when θ = 90^∘ and θ = 0^∘, respectively. As can be seen, R_s decreases as K_u increases; however, for θ = 90^∘, R_s does not depend on the intensity of the magnetic field, whereas for θ = 0^∘ the R_s curves change for different B values. Indeed, R_s decreases as B increases, because the core of the skyrmion magnetization (r < R_s) points in the direction opposite to the magnetic field in order to minimize its energy. We have compared the Sk energy curves for cores pointing in both directions (+z and -z), finding that in the whole Sk phase of Fig. 3 the Sk energy curve with the core opposite to B⃗ always lies below the corresponding Sk energy curve with the core along the field, while for θ = 90^∘ both Sk energy curves have the same values.

Importantly, in the micromagnetic simulations performed for the geometric and magnetic parameters investigated in this paper, we have only obtained the theoretically proposed magnetic configurations, and we have not observed complex phases such as the helical magnetic phase. In addition, it is worth mentioning that the numerical results (shown in Fig. 4) for the red points marked in Figs. 2 and 3 are in excellent agreement with the analytical phase diagrams. Figure 4a shows that the vortex core is slightly offset from the center of the cylinder, which could slightly reduce the energy of this configuration. This breaking of the azimuthal symmetry is outside the focus of this article.

§ CONCLUSIONS

In conclusion, through the analytical calculation of the energies corresponding to the different magnetic phases present in a ferromagnetic dot, we were able to construct the K_u - B magnetic phase diagrams for a cobalt dot of radius R=120 nm and length L=70 nm, presenting the stability regions for a Bloch skyrmion (Sk), a vortex core (VC) and a ferromagnetic (F) configuration. In general, and regardless of the angle at which the external magnetic field is applied, a cobalt dot will exhibit a Bloch skyrmion for K_u > 0.5 MJ/m^3 and low magnetic field values. It is important to mention that the radius of the skyrmion decreases with an increase in the uniaxial anisotropy constant, and as the magnetic field intensity increases (for a magnetic field pointing along the z-axis), in which case we found that the skyrmion core points in the direction opposite to the magnetic field. This preferred core direction disappears when the magnetic field points perpendicular to the symmetry axis, and the skyrmion radius then becomes independent of the intensity of the magnetic field. In addition, the results have been validated by micromagnetic simulations, which exhibit excellent agreement with the analytical results. Finally, these analytical results, which allow one to obtain magnetic phase diagrams with the stability region of Bloch skyrmions, will be key for the design of future devices based on the manipulation of magnetic skyrmions.

§ ACKNOWLEDGEMENTS

We thank V.
Salinas-Barrera for his insightful comments. This work was supported by Fondecyt Grant 1150952, DICYT Grant 041731EM-POSTDOC from VRIDEI-USACH, Financiamiento Basal para Centros Científicos y Tecnológicos de Excelencia FB0807, and Conicyt-PCHA/Doctorado Nacional/2014.

§ REFERENCES

[skyrme62] T.H.R. Skyrme, Nucl. Phys. 31, 556 (1962).
[NPL+15] Y. Nahas, S. Prokhorenko, L. Louis, Z. Gui, I. Kornev, and L. Bellaiche, Nat. Commun. 6, 8542 (2015).
[ATS+14] P.J. Ackerman, R.P. Trivedi, B. Senyuk, J. van de Lagemaat, and I.I. Smalyukh, Phys. Rev. E 90, 012505 (2014).
[NYT12] N. Nagaosa, X.Z. Yu, and Y. Tokura, Philos. Trans. A 370, 5806 (2012).
[FBT+16] G. Finocchio, F. Buttner, R. Tomasello, M. Carpentieri, and M. Klaui, J. Phys. D: Appl. Phys. 49, 423001 (2016).
[MBJ+09] S. Muhlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Boni, Science 323, 915 (2009).
[MNA+10] W. Munzer, A. Neubauer, T. Adams, S. Muhlbauer, C. Franz, F. Jonietz, R. Georgii, P. Boni, B. Pedersen, M. Schmidt, A. Rosch, and C. Pfleiderer, Phys. Rev. B 81, 041203(R) (2010).
[YOK+10] X.Z. Yu, Y. Onose, N. Kamazawa, J.H. Park, J.H. Han, Y. Matsui, N. Nagaosa, and Y. Tokura, Science 336, 198 (2012).
[SYI+12] S. Seki, X.Z. Yu, S. Ishiwata, and Y. Tokura, Science 336, 198 (2012).
[YKO+11] X.Z. Yu, N. Kanazawa, Y. Onose, K. Kimoto, W.Z. Zhang, S. Ishiwata, Y. Matsui, and Y. Tokura, Nat. Mater. 10, 106 (2011).
[FSK+13] M. Finazzi, M. Savoini, A.R. Khorsand, A. Tsukamoto, A. Itoh, L. Duo, A. Kirilyuk, Th. Rasing, and M. Ezawa, Phys. Rev. Lett. 110, 177205 (2013).
[JUZ+15] W. Jiang, P. Upadhayaya, W. Zhang, G. Yu, M.B. Jungfleisch, F.Y. Fradin, J.E. Pearson, Y. Tserkovnyak, K.L. Wang, O. Heinonen, S.G.E. te Velthuis, and A. Hoffmann, Science 349, 283 (2015).
[WZX+16] W. Wang, Y. Zhang, G. Xu, L. Peng, B. Ding, Y. Wang, Z. Hou, X. Zhang, X. Li, E. Liu, S. Wang, J. Cai, F. Wang, J. Li, F. Hu, G. Wu, B. Shen, and X.-X. Zhang, Adv. Mater. 28, 6887 (2016).
[MMR+16] C. Moreau-Luchaire, C. Moutafis, N. Reyren, J. Sampaio, C.A.F. Vaz, N. Van Horne, K. Bouzehouane, K. Garcia, C. Deranlot, P. Warnicke, P. Wohlhuter, J.-M. George, M. Weigand, J. Raabe, V. Cros, and A. Fert, Nat. Nanotech. 11, 444 (2016).
[WLK+16] S. Woo, K. Litzius, B. Kruger, M.-Y. Im, L. Caretta, K. Richter, M. Mann, A. Krone, R.M. Reeve, M. Weigand, P. Agrawal, I. Lemesh, M.-A. Mawass, P. Fischer, M. Klaui, and G.S.D. Beach, Nat. Mat. 15, 501 (2016).
[RHM+13] N. Romming, C. Hanneken, M. Menzel, J.E. Bickel, B. Wolter, K. von Bergmann, A. Kubetzka, and R. Wiesendanger, Science 341, 636 (2013).
[JMP+10] F. Jonietz, S. Muhlbauer, C. Pfleiderer, A. Neubauer, W. Munzer, A. Bauer, T. Adams, R. Georgii, P. Boni, R.A. Duine, K. Everschor, M. Garst, and A. Rosch, Science 330, 1648 (2010).
[NT13] N. Nagaosa and Y. Tokura, Nat. Nanotech. 8, 899 (2013).
[LLK+17] K. Litzius, I. Lemesh, B. Kruger, P. Bassirian, L. Caretta, K. Richter, F. Buttner, K. Sato, O.A. Trekiakov, J. Forster, R.M. Reeve, M. Weigand, L. Bykova, H. Stoll, G. Schutz, G.S.D. Beach, and M. Klaui, Nature Phys. 13, 170-175 (2017).
[ZEZ15] X. Zhang, M. Ezawa, and Y. Zhou, Scientific Reports 5, 9400 (2015).
[SCR+13] J. Sampaio, V. Cros, S. Rohart, A. Thiaville, and A. Fert, Nature Nanotech. 8, 839-844 (2013).
[RT13] S. Rohart and A. Thiaville, Phys. Rev. B 88, 184422 (2013).
[BCW+15] M. Beg, R. Carey, W. Wang, D. Cortés-Ortuño, M. Vousden, M. Bisotti Albert, D. Chernyshenko, O. Hovorka, R.L. Stamps, and H. Fangohr, Scientific Reports 5, 17137 (2015).
[DWT+13] Y.Y. Dai, H. Wang, P. Tao, T. Yang, W.J. Ren, and Z.D. Zhang, Phys. Rev. B 88, 054403 (2013).
[Guslienko15] K. Guslienko, IEEE Magnetics Letters 6, 4000104 (2015).
[SA16] M.A. Castro and S. Allende, J. Magn. Magn. Mater. 417, 344-348 (2016).
[GG16] K.Y. Guslienko and Z.V. Gareeva, IEEE Magnetics Letters 8, 4100305 (2016).
[BVY+16] O. Boulle, J. Vogel, H. Yang, S. Pizzini, D. de Souza Chaves, A. Locatelli, T. Onur Mentes, A. Sala, L.D. Buda-Prejbeanu, O. Klein, M. Belmeguenai, Y. Roussigne, A. Stashkevich, S. Mourad Cheirf, L. Aballe, M. Foerster, M. Chshiev, S. Auffret, I. Mihai Miron, and G. Gaudin, Nature Nanotech. 11, 449 (2016).
[FCS13] A. Fert, V. Cros, and J. Sampaio, Nat. Nanotech. 8, 152 (2013).
[SCF+13] J. Sampaio, V. Cros, A. Fert, S. Rohart, and A. Thiaville, Nat. Nanotech. 8, 839 (2013).
[MWY+15] B.F. Miao, Y. Wen, M. Yan, L. Sun, R.X. Cao, D. Wu, B. You, Z.S. Jiang, and H.F. Ding, Appl. Phys. Lett. 107, 222402 (2015).
[GMB+15] D.A. Gilbert, B.B. Maranville, A.L. Balk, B.J. Kirby, P. Fischer, D.T. Pierce, J. Unguris, J.A. Borchers, and K. Liu, Nature Comm. 6, 8462 (2015).
[Aharoni96] A. Aharoni, Introduction to the Theory of Ferromagnetism (Clarendon, Oxford, 1996).
[VRE17] N. Vidal-Silva, A. Riveros, and J. Escrig, arXiv:1705.03778 [cond-mat.mes-hall] (2017).
[LEA+05] P. Landeros, J. Escrig, D. Altbir, D. Laroze, J. d'Albuquerque e Castro, and P. Vargas, Phys. Rev. B 71, 094435 (2005).
[MAL+10] J. Mejia-Lopez, D. Altbir, P. Landeros, J. Escrig, A.H. Romero, Igor V. Roshchin, C.-P. Li, M.R. Fitzsimmons, X. Battle, and Ivan K. Schuller, Phys. Rev. B 81, 184417 (2010).
[RVL+16] A. Riveros, N. Vidal-Silva, P. Landeros, D. Altbir, E.E. Vogel, and J. Escrig, J. Magn. Magn. Mater. 401, 848-852 (2016).
[oommf] M.J. Donahue and D.G. Porter, OOMMF User's Guide 1.0, Interagency Report NISTIR 6376, National Institute of Standards and Technology, Gaithersburg, MD (1999).
[Fang2013] Y. Fang, Magnetic thin films with graded or tilted anisotropy for spintronics devices, PhD thesis, University of Gothenburg, Sweden (2013).
[OMS+99] T. Ono, H. Miyajima, K. Shigeto, and T. Shinjo, J. Magn. Magn. Mater. 198-199, 225-227 (1999).
[KMZ+17] M. Krupinski, D. Mitin, A. Zarzycki, A. Szkudlarek, M. Giersig, M. Albrecht, and M. Marszalek, Nanotechnology 28, 085302 (2017).
[MOV+17] S. Michea, S. Oyarzun, S. Vidal, and J.C. Denardin, AIP Advances 7, 056310 (2017).
Source: arXiv:1706.08876v2 [cond-mat.mes-hall], "Analytical and numerical K_u - B phase diagrams for cobalt nanostructures: stability region for a Bloch skyrmion", A. Riveros, N. Vidal-Silva, F. Tejo and J. Escrig (27 June 2017).
^1Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India
^2Department of Physics, University of Calcutta, 92 Acharya Prafulla Chandra Road, Kolkata 700009, India
^3Institute of Mathematical Sciences, HBNI, Taramani, Chennai 600113, India

It is a challenging task to explain, in terms of a simple and compelling new physics scenario, the intriguing discrepancies between the standard model expectations and the data for the neutral-current observables R_K and R_K^*, as well as the charged-current observables R(D) and R(D^*). We show that this can be achieved in an effective theory with only two unknown parameters. In addition, this class of models predicts some interesting signatures in the context of both B decays as well as high-energy collisions.

13.20.He, 14.40.Nd, 11.30.Fs

Minimal unified resolution to R_K^(*) and R(D^(*)) anomalies with lepton mixing
Debajyoti Choudhury^1, Anirban Kundu^2, Rusa Mandal^3 and Rahul Sinha^3

Introduction and the data– Several recent hints of discrepancies in a few charged- as well as neutral-current semileptonic decays of B-mesons have intrigued the community. Unlike the case of fully hadronic decay modes, which suffer from large (and, in some cases, not-so-well understood) strong interaction corrections, the theoretical uncertainties in semileptonic decays are much better controlled. Even these uncertainties are removed to a great extent in ratios of similar observables. While, individually, none of the observables militate against the standard model (SM), viewed together they strongly suggest that some new physics (NP) is lurking around the corner <cit.>. The pattern also argues convincingly for the violation of lepton-flavor universality. With the ratios of partial widths being particularly clean probes of physics beyond the SM, let us focus on R(D) and R(D^*), defined as

R(D^(*)) ≡ BR(B→ D^(*)τν) / BR(B→ D^(*)ℓν),  ℓ ∈ {e,μ},

and analogous ratios for the neutral-current sector,

R_K^(*) ≡ BR(B→ K^(*)μμ) / BR(B→ K^(*)e e).

With the major source of uncertainty in the individual modes being the form factors, they largely cancel out[The cancellation works best for relatively large momentum transfers (where the leptonic mass effects are negligible), the region with the best data.] in ratios like R(D^(*)) or R_K^(*), and the SM estimates for these ratios are rather robust. Several measurements of R(D) and R(D^*) by the BaBar <cit.>, Belle <cit.> and LHCb <cit.> Collaborations indicated an upward deviation from the SM expectations. Combining the individual results, namely R(D) = 0.407 ± 0.039 ± 0.024 and R(D^*) = 0.304 ± 0.019 ± 0.029, the discrepancies are at the ∼2.3σ and ∼3.4σ levels, respectively. On the inclusion of the correlation between the data, the combined significance is at the ∼4.1σ level <cit.> away from the SM predictions <cit.>.

The data on R_K and R_{K^*}, on the other hand, lie systematically below the SM expectations <cit.>:

R_K = 0.745^{+0.090}_{-0.074} ± 0.036,  q^2 ∈ [1:6] GeV^2,
R_{K^*}^{low} = 0.660^{+0.110}_{-0.070} ± 0.024,  q^2 ∈ [0.045:1.1] GeV^2,
R_{K^*}^{cntr} = 0.685^{+0.113}_{-0.069} ± 0.047,  q^2 ∈ [1.1:6] GeV^2.

For both R_K and R_{K^*}^{cntr}, the SM predictions are virtually indistinguishable from unity <cit.>, whereas for R_{K^*}^{low} the prediction is ∼0.9 (owing to a finite m_μ).
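These central values and errors are enough to reproduce the significance of the deficits quoted just below. A minimal check, combining the statistical and systematic errors in quadrature and adopting rounded SM values of 1, 0.9 and 1 (assumptions, since the text gives only "virtually indistinguishable from unity" and "∼0.9"):

```python
import numpy as np

# (measured value, larger stat-type error, syst error, rounded SM value)
ratios = {
    'R_K'         : (0.745, 0.090, 0.036, 1.00),
    'R_K*  (low)' : (0.660, 0.110, 0.024, 0.90),
    'R_K*  (cntr)': (0.685, 0.113, 0.047, 1.00),
}
for name, (val, stat, syst, sm) in ratios.items():
    pull = (sm - val) / np.hypot(stat, syst)
    print('%-13s %.1f sigma below the SM' % (name, pull))
# Prints roughly 2.6, 2.1 and 2.6 sigma; the text quotes 2.6, 2.1 and 2.4,
# the residual difference coming from the un-rounded SM inputs.
```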
Except for R_{K^*}^{low}, the theoretical uncertainties have been subsumed in the experimental ones. Thus the measurements of R_K, R_{K^*}^{low} and R_{K^*}^{cntr} correspond, respectively, to 2.6σ, 2.1σ and 2.4σ shortfalls from the SM expectations.

For the K^* mode, a discrepancy is visible not only in the ratios of binned differential distributions for the muon and electron modes but also in some angular distributions, like the celebrated P'_5 <cit.> anomaly in the decay B→K^*μμ <cit.>, at more than 3σ. Restricting ourselves to only the low- and medium-q^2 region, namely q^2 ≤ 6 GeV^2 (as the high-q^2 region can be affected by a different kind of physics <cit.>), we do not include this anomaly in our analysis. However, we will see later that our fitted Wilson coefficients can explain this discrepancy, as pointed out in global fits <cit.>.

A similar suppression (at a level of approximately 3σ) is seen in the observable Φ ≡ dBR(B_s→ϕμμ)/dm_μμ^2 in the analogous bin (m_μμ^2 ∈ [1:6] GeV^2) <cit.>, namely

Φ = (2.58^{+0.33}_{-0.31} ± 0.08 ± 0.19) × 10^-8 GeV^-2 (exp.),
Φ = (4.81 ± 0.56) × 10^-8 GeV^-2 (SM).

With low theoretical error, this bin is virtually the same as that for R_K and R_{K^*}^{cntr}. This suggests strongly that the discrepancies in the latter are caused by a depletion of the b→sμμ channel, rather than an enhancement of b→see, a surmise further vindicated by the P'_5 anomaly. Note that P'_5 is dominated by the vector operator O_9, while the two-body decay B_s→μμ is controlled by the axial-vector operator O_10, both of them defined later.

With possible corrections from large ΔΓ_s, as well as next-to-leading-order (NLO) electroweak and next-to-next-to-leading-order QCD corrections calculated, the SM prediction for B_s→μμ is quite robust, with only small uncertainties accruing from the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and the decay constant of B_s. The LHCb measurement, at a significance of 7.8σ <cit.>, shows an excellent agreement between the data and the SM:

BR(B_s→μμ) = (3.0 ± 0.6^{+0.3}_{-0.2}) × 10^-9 (exp.),
BR(B_s→μμ) = (3.65 ± 0.23) × 10^-9 (SM),

and hence puts very strong constraints on NP models, in particular on those incorporating (pseudo)scalar or axial-vector currents <cit.>. However, note that the central value can accommodate a ∼20% suppression. Thus, one is naturally led to models that preferentially alter O_9 rather than O_10. Similarly, neither the radiative decay B→X_sγ nor the mass difference ΔM_s and mixing phase ϕ_s measurements for the B_s system show any appreciable discrepancy with the SM expectations.

The pattern of deviations is thus a complicated one and, naively at least, does not appear to point towards any well-motivated NP model. Consequently, most efforts at explaining the anomalies consider only a subset, either the R(D^(*)) and/or R_K^(*) data <cit.>, or R_K^(*) and other b→sℓℓ data <cit.>. Those that do attempt a more complete treatment either invoke very complicated models or result in fits that are not very good. In addition, they are liable to result in other unacceptable phenomenological consequences. Analyses within specific models, like leptoquarks, are available in the literature <cit.>. In view of this, we adopt a very phenomenological approach, rather than advocating a particular model. Assuming an effective Lagrangian with the minimal number of new parameters, in the guise of the unknown Wilson coefficients (WCs), we seek the best fit.
While not an entirely new idea, our analysis takes into account not only the anomalous channels but also the existing limits on several other channels; as we will show, they provide the tightest constraints on the parameter space. This approach will hopefully pave the way to unravelling the as yet unknown flavor dynamics.

Models– Within the SM, the b→cτν̅_τ transition proceeds through a tree-level W exchange. If the NP adds coherently to the SM, one can write the effective Hamiltonian as

H^eff = (4G_F/√2) V_cb (1 + C^NP) [(c,b)(τ,ν_τ)],

where the NP contribution, parametrized by C^NP, vanishes in the SM limit, and we have introduced the shorthand notation

(x,y) ≡ x̅_L γ^μ y_L,  ∀ x,y.

To explain the data, one thus needs either small positive or large negative values of C^NP.

The flavor-changing neutral-current decays B→K^(*)μμ and B_s→ϕμμ are occasioned by the b→sμμ transition, proceeding within the SM primarily through a combination of penguin and box diagrams (driven, essentially, by the top quark). Parametrizing the ensuing effective Hamiltonian as

H^eff = -(4G_F/√2) V_tb V_ts^* ∑_i C_i(μ) 𝒪_i(μ),

the relevant operators are

𝒪_7 = (α_em(m_b) m_b / 4π)(s̅ σ_μν P_R b) F^μν,
𝒪_9 = (α_em(m_b) / 4π)(s̅ γ_μ P_L b)(μ̅ γ^μ μ),
𝒪_10 = (α_em(m_b) / 4π)(s̅ γ_μ P_L b)(μ̅ γ^μ γ_5 μ).

The WCs, matched with the full theory at m_W and then run down to m_b at next-to-next-to-leading logarithmic accuracy <cit.>, are given in the SM as C_7 = -0.304, C_9 = 4.211 and C_10 = -4.103. The differential widths for the B→K^(*)μμ decays are obtained in terms of algebraic functions of these. NP contributions to H^eff can be parametrized by C_i → C_i + C_i^NP.

Similarly, the b→sνν̅ transition (which governs the B→K^(*)νν̅ decays) proceeds through Z penguins and box diagrams. Unless right-handed neutrino fields are introduced, the low-energy effective Hamiltonian can be parametrized by <cit.>

H^eff = (2G_F/√2) V_tb V_ts^* (α_em/π) C_L^SM (1 + Δ_ν) (s,b)(ν,ν),

where Δ_ν denotes the (fractional) NP contribution. Including the NLO QCD correction and the two-loop electroweak contribution, the SM WC is given by C_L^SM = -X_t/s_w^2, where the Inami-Lim function is X_t = 1.469 ± 0.017 <cit.>.

While it may seem trivial to write down extra four-fermi operators that would produce just the right contributions, care must be taken to see that this does not introduce unwelcome consequences. For one, a large enhancement of C_10 could lead to an unacceptably large BR(B_s→μμ), with 𝒪_10 being the leading contributor to this decay. Similarly, the said four-fermi operators need to be invariant under the SM gauge group (assuming that the NP appears only above the electroweak scale). A non-zero C^NP (see Eq. (<ref>)) would potentially lead to an analogue of C_10^NP for the tau channel. This, in turn, would lead to an enhancement of B_s→ττ, where the chirality suppression is less operative than in the muonic case. Indeed, the LHCb Collaboration <cit.> has obtained a 95% C.L. upper limit of 6.8 × 10^-3 on the branching fraction for this mode[It should be noted, though, that this analysis does not actually reconstruct the τs, but employs neural networks. Hence, it is possible that future measurements would point to a value higher than the limits quoted.], with the SM value being (7.73 ± 0.49) × 10^-7 <cit.>.
Similarly, none of the three operators (b,s)(ν_i,ν_i) may receive large corrections, lest the SM expectations <cit.>, namely

BR(B^+→K^+νν̅)_SM = (3.98 ± 0.43 ± 0.19) × 10^-6,
BR(B^0→K^{*0}νν̅)_SM = (9.19 ± 0.86 ± 0.50) × 10^-6,

be augmented[Note that the neutrino flavors need not be identical for the NP.] to levels beyond the 90% C.L. upper bounds (summed over all three neutrinos) as obtained by the Belle Collaboration <cit.>, viz. BR(B→K^(*)νν̅) < 1.6 (2.7) × 10^-5.

In view of the aforementioned constraints, we consider only a combination of two four-fermi operators, characterized by a single WC (assumed to be real to avoid new sources of CP violation). Since we do not claim to obtain the ultraviolet completion thereof, we do not speculate on the (flavor) symmetry that would have led to such a structure, which could have arisen from a plethora of NP scenarios, such as models of (gauged) flavor, leptoquarks (or, within the supersymmetric paradigm, a breaking of R parity), etc. To wit, we propose a model involving two four-fermi operators in terms of the second- and third-generation (weak-eigenstate) fields,

H^NP = A_1 (Q̅_2L γ_μ L_3L)(L̅_3L γ^μ Q_3L) + A_2 (Q̅_2L γ_μ Q_3L)(τ̅_R γ^μ τ_R),

where the overall Clebsch-Gordan coefficients have been subsumed and we demand A_2 = A_1. This Hamiltonian, seemingly, contributes to R(D^(*)) but not to the other anomalous processes. This, though, is true only above the electroweak scale. Below this scale, the Hamiltonian needs to be rediagonalized[With NP only modifying the Wilson coefficients of certain SM operators to a small extent, the QCD corrections (as well as hadronic uncertainties) are analogous. Additional effects due to operator mixings are too small to be of any concern.]. In the quark sector, this is determined by the quark masses, and the small non-alignment due to A_1,2 can be neglected. In the leptonic sector, though, the extreme smallness of the neutrino masses implies that the nonuniversal term H^NP plays a major role <cit.>. To this end, we consider the simplest of field rotations for the left-handed leptons from the unprimed (flavor) to the primed (mass) basis, namely

τ = cosθ τ' + sinθ μ',  ν_τ = cosθ ν_τ' + sinθ ν_μ'.

This immediately generates a term with the potential to explain the b→sμμ anomalies.

Results– The scenario is thus characterized by two parameters, namely A_1 and sinθ. The best-fit values for these can be obtained by effecting a χ^2-test defined through

χ^2 = ∑_{i=1}^{7} (𝒪_i^exp - 𝒪_i^th)^2 / [(Δ𝒪_i^exp)^2 + (Δ𝒪_i^th)^2],

where 𝒪_i^exp (𝒪_i^th) denote the experimental (theoretical) mean values and Δ𝒪_i^exp (Δ𝒪_i^th) the corresponding 1σ uncertainties, with the theoretical values depending on the model parameters. We include a total of seven measurements in the evaluation of χ^2, namely R(D), R(D^*), R_K, R_{K^*}^{low}, R_{K^*}^{cntr}, Φ, and BR(B_s→μμ) (while not affected by the NP interactions in Eq. (<ref>), the last is relevant for the scenario considered later). Only for the last two observables do the Δ𝒪_i^th need to be considered explicitly, while for the rest they have been subsumed within the experimental results. For our numerical analysis, we use V_cb = 0.0416 and V_tb V_ts^* = -0.0409, and find, for the SM, χ^2_SM ≃ 46. Within the new model, the best fit corresponds to χ^2_min ≃ 9 (denoting a marked improvement), with the NP contributions being C_9^NP = -1.7 and C^NP = -2.12. In terms of the model parameters, this corresponds to (note that there is a θ→-θ degeneracy)

A_1 (= A_2) = -2.92 TeV^-2,  sinθ = ±0.022.
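The χ^2 combination above is simple to set up in code; the seven experimental inputs below are exactly the numbers quoted earlier in the text (asymmetric errors symmetrised to the larger component, errors combined in quadrature), and the SM theory errors for Φ and BR(B_s→μμ) are the ones stated above. The map from (A_1, sinθ) to the seven theory values requires the full amplitude expressions and form factors, so `predict` is deliberately left as a stub and is not the authors' implementation.

```python
import numpy as np

def err(*components):
    """Combine error components in quadrature."""
    return np.sqrt(sum(c * c for c in components))

# key: (experimental value, experimental error, SM theory error)
data = {
    'R(D)'      : (0.407,   err(0.039, 0.024),               0.0),
    'R(D*)'     : (0.304,   err(0.019, 0.029),               0.0),
    'R_K'       : (0.745,   err(0.090, 0.036),               0.0),
    'R_K*_low'  : (0.660,   err(0.110, 0.024),               0.0),
    'R_K*_cntr' : (0.685,   err(0.113, 0.047),               0.0),
    'Phi'       : (2.58e-8, err(0.33e-8, 0.08e-8, 0.19e-8),  0.56e-8),
    'Bs_mumu'   : (3.0e-9,  err(0.6e-9, 0.3e-9),             0.23e-9),
}

def chi2(pred):
    """pred: dict of theory values, keyed as in `data`, for a given model point."""
    return sum((v - pred[key]) ** 2 / (de ** 2 + dt ** 2)
               for key, (v, de, dt) in data.items())

def predict(A1, sin_theta):
    # Requires the full b -> c tau nu and b -> s ll amplitudes; not sketched here.
    raise NotImplementedError
```

Scanning chi2(predict(A1, s)) over a grid of (A_1, sinθ) then yields the best-fit point and the confidence regions shown in the figures.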
Even this low value of χ^2_min is largely dominated by a single measurement, namely R_{K^*}^{low}. This is not unexpected, as agreement with this experimental value to better than 1σ is not possible if the NP contribution can be expressed just as a modification of the SM WCs, rather than through the introduction of a new and small dynamical scale (such a change could be tuned so as to manifest itself primarily only in the low-q^2 region, but would be likely to have other ramifications). Note that the small value of sinθ can only partially explain the atmospheric neutrino oscillation; the full explanation needs additional dynamics.

More importantly, in effecting the field rotation of Eq. (<ref>) in H^NP, we generate terms of the form (s,b)(μ,τ), leading to potential lepton-flavor violating (LFV) decays. The current limits on the relevant ones are <cit.>

BR(B^+→K^+μ^±τ^∓) < 4.5 (2.8) × 10^-5.

In Fig. <ref>, we display the constraints from this particular mode. While the best-fit point is summarily ruled out, solutions can clearly be found if a slight worsening of the χ^2 (to ≃ 15) is acceptable. This would still represent a much better agreement than is possible within the SM. The corresponding values of the observables are R_K = 0.86, R_{K^*}^{cntr} = 0.88, R_{K^*}^{low} = 0.90, R(D^(*)) = 1.25 × R_SM(D^(*)), and Φ = 4.1 × 10^-8 GeV^-2, representing quite a reasonable fit to all but R_{K^*}^{low}. It should be noted here that the θ→-θ degeneracy is broken by the LFV constraint, with θ > 0 being slightly preferable.

Further improving the fit requires the introduction of a small amount of C_10^NP. Postponing the discussion of B_s→ττ, this is most easily achieved if we choose to break, to a small degree, the relation A_2 = A_1. As an illustrative example, we consider A_2 = 4A_1/5. The consequent best-fit values for A_1 and sinθ remain virtually the same, but now χ^2_min = 7, with the NP contributions being C_9^NP = -1.51, C_10^NP = 0.17 and C^NP = -2.12. The result is depicted in Fig. <ref>. Once the LFV constraint is imposed, the observables in the overlap region are R_K ≃ 0.80, R_{K^*}^{cntr} ≃ 0.83, R_{K^*}^{low} ≃ 0.88, R(D^(*)) ≃ 1.24 × R_SM(D^(*)), and Φ ≃ 3.8 × 10^-8 GeV^-2, showing a marked improvement in the fit to all but R_{K^*}^{low} and corresponding to χ^2 ≃ 10. While the finite NP contribution does enhance B_s→ττ, the latter constraint (gray shaded region in Fig. <ref>) does not have a major impact. It should be realized, though, that a stronger breaking of the A_2 = A_1 relation would have led to a better (worse) agreement with the LFV (B_s→ττ) constraints.

It is interesting to speculate on the origin of this split between the A_i. A naive explanation would be to attribute the difference to the quantum numbers of the leptonic fields under an as yet unidentified gauge symmetry, with the attendant anomaly cancellation being effected either by invoking heavier fermionic fields or through other means. Care must be taken, however, not to induce undesirable phenomenology. An alternative is to attribute the difference to quantum corrections, although the aforementioned shift is somewhat larger than that expected from a naive renormalization-group flow perspective, namely ∼(α_wk/4π) ln(Λ_NP^2/m_b^2), where Λ_NP ∼ 1 TeV is the putative scale of the NP. It should be noted here, though, that the 20% shift is only illustrative and not really needed. Indeed, once the electroweak symmetry is broken, the various pieces in H^eff suffer differing renormalization-group flow down to the m_b scale, and the consequent breaking of the degeneracy is, putatively, of the right magnitude to explain the remaining discrepancies.
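Two of the quoted best-fit numbers can be checked with one-line arithmetic, assuming the proportionalities implied by the operator structure above: R(D^(*))/R_SM(D^(*)) = (1 + C^NP)^2, since the NP current has the same V-A form as the SM one, and BR(B_s→μμ) ∝ |C_10|^2, given the O_10 dominance noted earlier.

```python
C10_SM, C10_NP = -4.103, 0.17
C_NP = -2.12

# Charged-current enhancement: (1 + C^NP)^2 = 1.25, i.e. the quoted
# R(D^(*)) = 1.25 x R_SM(D^(*)); the large-negative branch of C^NP is taken.
print('R(D)/R(D)_SM       = %.2f' % ((1.0 + C_NP) ** 2))

# Shift of B_s -> mu mu from the small C_10^NP of the A_2 = 4A_1/5 fit:
# an ~8% suppression, comfortably inside the ~20% the data allow.
print('BR/BR_SM(Bs->mumu) = %.3f' % (((C10_SM + C10_NP) / C10_SM) ** 2))
```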
It is worthwhile, at this stage, to explore the consequences of introducing other operators in H^NP. Operators constructed out of SU(2)_L-triplet currents (denoted by the subscript `3'), such as (Q̅_2L γ^μ Q_3L)_3 (L̅_3L γ_μ L_3L)_3, (Q̅_2L γ^μ L_3L)_3 (L̅_3L γ_μ Q_3L)_3, etc., would also have admitted solutions to the anomalies, but they would typically also result in unsuppressed b→sνν̅ transitions. Circumventing the bounds would then require the introduction of multiple operators and cancellations between them. We will discuss such possibilities in detail in a subsequent paper. This would typically still leave behind too large a rate for B_s→ττ (first reference of <cit.>) and hence needs the further introduction of yet another operator, such as the second one in H^NP. Apart from enhancing B_s→ττ (B→X_sττ and Λ_b→Λττ are affected too, but the bounds from these sectors are not too serious), this would also affect the other modes to varying degrees. Consequently, the best-fit values will change. Indeed, a lower χ^2 (≃ 5.4) is achievable for virtually the same A_1, but a slightly smaller |sinθ| (≃ 0.018). Understandably, if both the B_s→ττ bound as well as that in Eq. (<ref>) are to be satisfied, the χ^2 can be reduced to at most ≃ 11. Similarly, BR(B→X_sττ) as well as BR(Λ_b→Λττ) will also be increased, and should be close to observation at the LHCb. However, processes like b→sγ or τ→μγ remain under control, as we have checked. Similarly, while we do not "explain" (g-2)_μ, the agreement is marginally better than within the SM. The new operators also generate, through renormalization-group running, operators involving four leptons <cit.>, and may thus lead to effects like τ→3μ. They are, however, well under control, mostly because of the small value of sinθ.

In summary, we have identified the minimal modification to the SM, in terms of an effective theory, that can explain the anomalies in both the charged- and the neutral-current decays of bottom mesons, a task that has been challenging on account of the seemingly contradictory requirements that the data demand. We circumvent this by postulating just two four-fermi operators with WCs related by a symmetry, and by taking advantage of the possibility of a small but nontrivial rotation of the charged-lepton fields that a flavor-nonuniversal operator entails. Taking all the data into account, we find that with just two new parameters the χ^2 can be reduced from 46 (in the SM) to below 15, while remaining consistent with all other data. For the best-fit point, most observables are consistent within ∼1σ, while R_{K^*} and BR(B_s→ϕμμ) in the low-q^2 bins are consistent only within ∼2σ. The scale of new physics that such an explanation demands is a few TeV at best, rendering searches at the LHC very interesting. An even stronger preference is that at least one of B→K^(*)μτ and B_s→ττ should be close to discovery. A more precise determination of the ratios that we have discussed in this Letter is therefore of prime importance, as it can open the door to new flavor dynamics and hence to the world beyond the SM.

A.K. thanks the Science and Engineering Research Board (SERB), Government of India, for a research grant. D.C. thanks the IMSc, Chennai, for hospitality during part of the collaboration.

References

[globalfit] W. Altmannshofer and D.M. Straub, Eur. Phys. J. C 73, 2646 (2013); S. Descotes-Genon, J. Matias and J. Virto, Phys. Rev. D 88, 074002 (2013).
[Bhattacharya:2014wla] B. Bhattacharya et al., Phys. Lett. B 742, 370 (2015); R. Mandal and R.
Sinha, Phys. Rev. D 95, 014026 (2017); A. Karan et al., Phys. Rev. D 95, 114006 (2017).
[Lees:2013uzd] J.P. Lees et al. [BaBar Collab.], Phys. Rev. D 88, 072012 (2013).
[Huschle:2015rga] M. Huschle et al. [Belle Collab.], Phys. Rev. D 92, 072014 (2015).
[Abdesselam:2016cgx] A. Abdesselam et al. [Belle Collab.], arXiv:1603.06711 [hep-ex].
[Aaij:2015yra] R. Aaij et al. [LHCb Collab.], Phys. Rev. Lett. 115, 111803 (2015).
[LHCb_rdst_3prong] http://lhcb-public.web.cern.ch/lhcb-public/Welcome.html#RDst2
[hfag] Y. Amhis et al., arXiv:1612.07233 [hep-ex], and the update at http://www.slac.stanford.edu/xorg/hfag/semi/fpcp17/RDRDs.html.
[Fajfer:2012vx] S. Fajfer, J.F. Kamenik and I. Nisandzic, Phys. Rev. D 85, 094025 (2012); D. Bigi and P. Gambino, Phys. Rev. D 94, 094008 (2016).
[bifani] R. Aaij et al. [LHCb Collab.], arXiv:1705.05802 [hep-ex].
[1406.6482] R. Aaij et al. [LHCb Collab.], Phys. Rev. Lett. 113, 151601 (2014).
[sm-pred] G. Hiller and F. Kruger, Phys. Rev. D 69, 074020 (2004); M. Bordone, G. Isidori and A. Pattori, Eur. Phys. J. C 76, 440 (2016).
[P5`-def] S. Descotes-Genon et al., JHEP 1301, 048 (2013).
[LHCb:2015dla] R. Aaij et al. [LHCb Collab.], JHEP 1602, 104 (2016).
[Ciuchini:2015qxb] M. Ciuchini et al., JHEP 1606, 116 (2016).
[Aaij:2015esa] R. Aaij et al. [LHCb Collab.], JHEP 1509, 179 (2015).
[Altmannshofer:2014rta] W. Altmannshofer and D.M. Straub, Eur. Phys. J. C 75, 382 (2015).
[Straub:2015ica] A. Bharucha, D.M. Straub and R. Zwicky, JHEP 1608, 098 (2016).
[Bsmumu] R. Aaij et al. [LHCb Collab.], Phys. Rev. Lett. 118, 191801 (2017).
[BsmumuSM] C. Bobeth et al., Phys. Rev. Lett. 112, 101801 (2014).
[Fleischer:2014jaa] R. Fleischer, Int. J. Mod. Phys. A 29, 1444004 (2014).
[oldlit] C. Bobeth et al., Phys. Rev. D 64, 074014 (2001); G. Hiller and M. Schmaltz, Phys. Rev. D 90, 054014 (2014); F. Beaujean, C. Bobeth and S. Jahn, Eur. Phys. J. C 75, 456 (2015); L. Calibbi, A. Crivellin and T. Ota, Phys. Rev. Lett. 115, 181801 (2015); A. Crivellin et al., Phys. Rev. D 92, 054013 (2015); D. Bečirević et al., Phys. Rev. D 94, 115021 (2016); D. Das et al., Phys. Rev. D 94, 055034 (2016); arXiv:1705.09188 [hep-ph]; D. Choudhury et al., Phys. Rev. D 95, 035021 (2017); S. Bhattacharya, S. Nandi and S.K. Patra, Phys. Rev. D 95, 075012 (2017); B. Bhattacharya et al., JHEP 1701, 015 (2017); D. Bardhan, P. Byakti and D. Ghosh, JHEP 1701, 125 (2017); W. Altmannshofer, P.S.B. Dev and A. Soni, arXiv:1704.06659 [hep-ph].
[Altmannshofer:2008dz] W. Altmannshofer et al., JHEP 0901, 019 (2009).
[rknew] B. Capdevila et al., arXiv:1704.05340 [hep-ph]; W. Altmannshofer, P. Stangl and D.M. Straub, arXiv:1704.05435 [hep-ph]; G. D'Amico et al., arXiv:1704.05438 [hep-ph]; G. Hiller and I. Nisandzic, arXiv:1704.05444 [hep-ph]; L.S. Geng et al., arXiv:1704.05446 [hep-ph]; M. Ciuchini et al., arXiv:1704.05447 [hep-ph]; A. Celis et al., arXiv:1704.05672 [hep-ph]; D. Bečirević and O. Sumensari, arXiv:1704.05835 [hep-ph]; Y. Cai et al., arXiv:1704.05849 [hep-ph]; J.F. Kamenik, Y. Soreq and J. Zupan, arXiv:1704.06005 [hep-ph]; F. Sala and D.M. Straub, arXiv:1704.06188 [hep-ph]; S. Di Chiara et al., arXiv:1704.06200 [hep-ph]; D. Ghosh, arXiv:1704.06240 [hep-ph]; A.K. Alok et al., arXiv:1704.07347 [hep-ph]; A.K. Alok et al., arXiv:1704.07397 [hep-ph]; C. Bonilla et al., arXiv:1705.00915 [hep-ph]; D. Bardhan, P. Byakti and D. Ghosh, arXiv:1705.09305 [hep-ph]; C.W. Chiang et al., arXiv:1706.02696 [hep-ph].
[crivellin-march17] A. Crivellin, D. Müller and T. Ota, arXiv:1703.09226 [hep-ph]; I. Dorsner et al., arXiv:1706.07779 [hep-ph].
[MET_SM] A.J. Buras et al., JHEP 1502, 184 (2015).
[MET_SM_old] M. Misiak and J. Urban, Phys. Lett. B 451, 161 (1999); G. Buchalla and A.J. Buras, Nucl. Phys. B 548, 309 (1999).
[Aaij:2017xqt] R. Aaij et al. [LHCb Collab.], arXiv:1703.02508 [hep-ex].
[belle17] J. Grygier et al. [Belle Collab.], arXiv:1702.03224 [hep-ex].
[ggl] S.L. Glashow, D. Guadagnoli and K. Lane, Phys. Rev. Lett. 114, 091801 (2015).
[pdg] K.A. Olive et al. [Particle Data Group Collab.], Chin. Phys. C 38, 090001 (2014), and the 2015 update at http://pdg.lbl.gov.
[paradisi] F. Feruglio, P. Paradisi and A. Pattori, arXiv:1705.00929 [hep-ph].
Subaru Telescope, National Astronomical Observatory of Japan, National Institutes of Natural Sciences (NINS), 650 North A'Ohoku Place, Hilo, HI 96720, U.S.A.
MQ Photonics Research Centre, Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia
Australian Astronomical Observatory, 105 Delhi Rd, North Ryde NSW 2113, Australia
Steward Observatory, University of Arizona, Tucson, AZ 85721, U.S.A.
College of Optical Sciences, University of Arizona, Tucson, AZ 85721, U.S.A.
Astrobiology Center of NINS, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Sydney Institute for Astronomy (SIfA), Institute for Photonics and Optical Science (IPOS), School of Physics, University of Sydney, NSW 2006, Australia
Centre for Ultrahigh-bandwidth Devices for Optical Systems (CUDOS)
Laboratoire Lagrange, Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Parc Valrose, Bât. H. FIZEAU, 06108 Nice, France

Photonic technologies offer numerous advantages for astronomical instruments such as spectrographs and interferometers owing to their small footprints and diverse range of functionalities. Because such devices operate at the diffraction limit, however, it is notoriously difficult to couple them efficiently and directly to large telescopes. We demonstrate that with careful control of both the non-ideal pupil geometry of a telescope and the residual wavefront errors, efficient coupling with single-mode devices can indeed be realised. A fibre injection was built within the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument. Light was coupled into a single-mode fibre operating in the near-IR (J-H bands), located downstream of the extreme adaptive optics system and the pupil apodising optics. A coupling efficiency of 86% of the theoretical maximum was achieved at 1550 nm for a diffraction-limited beam in the laboratory, and the coupling was linearly correlated with the Strehl ratio. The coupling efficiency was constant to within <30% across the range 1250–1600 nm. Preliminary on-sky data with a Strehl ratio of 60% in the H-band produced a coupling efficiency into a single-mode fibre of ∼50%, consistent with expectations. The coupling was >40% for 84% of the time and >50% for 41% of the time. The laboratory results allow us to forecast that extreme adaptive optics levels of correction (Strehl ratio >90% in the H-band) would allow coupling of >67% (of the order of the coupling to multimode fibres currently in use), while standard levels of wavefront correction (Strehl ratio >20% in the H-band) would allow coupling of >18%. For Strehl ratios <20%, few-port photonic lanterns become a superior choice, but the signal-to-noise and pixel availability must be considered. These results illustrate a clear path to efficient on-sky coupling into a single-mode fibre, which could be used to realise modal-noise-free radial velocity machines, very-long-baseline optical/near-IR interferometers, and/or simply to exploit photonic technologies in future instrument design.

Efficient injection from large telescopes into single-mode fibres: Enabling the era of ultra-precision astronomy

N. Jovanovic1 ([email protected]), C. Schwab2,3, O. Guyon1,4,5,6, J. Lozi1, N. Cvetojevic3,7,8, F. Martinache9, S. Leon-Saval7, B. Norris7, S. Gross2,8, D. Doughty1, T. Currie1 and N. Takato1
§ INTRODUCTION

The key to precision astronomical measurements is the ability to carefully calibrate the data. The quality of calibration would be greatly improved if it were possible to operate at the diffraction limit; by this we mean that there is no wavefront error (i.e. a perfectly flat wavefront) and that the point-spread function (PSF) is temporally invariant (stable with time). In the field of high-contrast imaging, for example, this would enable a perfect PSF subtraction, revealing structures at unprecedented contrast around the host star, such as an exoplanet or disk. In the area of precision radial velocity, one of the key limitations is the stability of the slit (or fibre) illumination <cit.>. A temporally invariant, diffraction-limited PSF would eliminate this issue and result in a consistently high level of performance by virtue of improved calibration of the data. Finally, interferometers would deliver stable fringes, enhancing the contrast with which faint structures around a host star could be studied. This would result in a more precise determination of the objects' properties <cit.>.

However, ground-based observations are hampered by atmospheric turbulence. One solution is to operate the instrument in space, free from the atmosphere, but this comes at great cost and complexity. More commonly, adaptive optics (AO) systems are used upstream of the instrument. AO systems measure the incident wavefront and correct for it with a deformable mirror capable of operating at high speed (100s of Hz). Such systems are now ubiquitous at large (>4 m-class) observatories <cit.> and offer Strehl ratios of 20-50% in the H-band in median seeing. This translates to 200-300 nm RMS residual wavefront error, which improves the calibration of the data but remains far from the ideal scenario. Recently, several large observatories have commissioned so-called "extreme" AO (ExAO) systems capable of delivering 90% Strehl ratios in better than median seeing in the H-band <cit.>. These systems reduce the residual wavefront error down to ∼80 nm RMS in the H-band by controlling a larger region around the PSF <cit.> in the focal plane and doing so at higher speed than conventional AO systems (refer to <cit.> for full details). Although these systems do not deliver a perfectly flat wavefront, they certainly approach this limit and improve the quality of calibrated astronomical data.

To flatten the wavefront post-ExAO correction to the ideal case (i.e. 0 nm RMS), some form of spatial filtering is needed. This can be achieved in one of two ways: the light can be passed through a pinhole or a single-mode fibre (SMF) <cit.>. A SMF is named as such because it can only transport light in one state, the fundamental mode. This mode has a near-Gaussian intensity profile and a flat phase front (to be precise, there are two identical modes with orthogonal polarisations). When light is passed through such a fibre, it exits with these properties imprinted on the beam. Most importantly, the output intensity profile of a SMF is temporally invariant. This makes a SMF the ideal way to feed light to an astronomical instrument for precision measurements.
Indeed, a discussion highlighting the impact of a SMF feed on stellar spectroscopy was presented by <cit.> and more recently treated in detail by <cit.>. Once the light is in the SMF, a suite of photonic technologies that operate at the diffraction limit becomes accessible for instrument design. Photonic components, either optical-fibre-based or on a wafer, offer a diverse range of functionality including spectral filtering <cit.>, dispersion <cit.>, reformatting <cit.> and calibration <cit.>, to name a few. In addition, photonic devices are compact and robust and can easily be stabilised to a high degree. This makes these devices highly desirable for implementation in astronomical instrument design.

However, efficiently coupling a SMF to a large telescope is inherently difficult because of the need to match the intensity distribution and phase front of the incident beam to those of the fundamental mode of the SMF. Early work by <cit.> investigated injecting seeing-limited light from two 0.8 m telescopes into SMFs optimised for the K-band for the purposes of conducting interferometric measurements with the FLUOR beam combiner. This was followed by <cit.>, who demonstrated a coupling efficiency of 20-25% in the K-band into a fluoride-based SMF on the 3.6 m La Silla Telescope behind the ADONIS AO system. This pioneering work achieved over 50% of the theoretical maximum coupling (39%) for the experimental setup used. It was primarily limited by the relatively large central obstruction (43.6% of the primary), quasi-static low-order aberrations and a 25 Hz telescope vibration. Since then there have been several other demonstrations of SMF injection for both spectroscopy <cit.> and interferometry <cit.>. More recently, light has been coupled with AO assistance from one of the LBT 8-m mirrors to a SMF in the y-band <cit.>. Preliminary results from that work yielded a coupling efficiency in the range of 20–25%, which is high considering the relatively short wavelength.

In the era of ExAO-equipped telescopes, achieving high coupling efficiencies becomes a distinct possibility, and this forms the basis for this study. In this body of work, we demonstrate for the first time that efficient coupling (>70%) into a SMF is possible behind an ExAO system with the use of pupil apodisation optics. This work builds upon the recent results presented in <cit.>. Section <ref> gives an overview of how to couple into SMFs, supported by simulations. Section <ref> outlines the experimental setup used, while Section <ref> summarises the results of the detailed coupling efficiency study conducted in the laboratory, which includes a comparison with a few-port photonic lantern. Section <ref> highlights the preliminary on-sky results, while Section <ref> goes into a deeper discussion of their implications. Section <ref> rounds off the paper with a summary of the key results.

§ HOW TO EFFICIENTLY INJECT LIGHT INTO SINGLE-MODE FIBRES

§.§ Requirements for optimum coupling

As outlined in the introduction, a SMF guides light in the fundamental mode, which has a Gaussian intensity profile and a flat phase front. In order to maximise the coupling of light into this mode, the incident beam should match these specifications closely. The mode can be characterised in a number of ways. The mode field diameter (MFD) specifies the size of the mode and is measured to the point where the intensity profile drops to 1/e^2 of the peak value.
The fibre also has a numerical aperture (NA) that stipulates the angles of rays that can be accepted by the fibre and is measured to the 1% intensity point of the far-field emanating from the fibre. Carefully matching the properties of a Gaussian beam to the MFD and NA is important for SMFs. If we take the case of a typical near-IR optimised SMF such as SMF-28 (from Corning), it offers a MFD = 10.4 μm at 1550 nm and a NA = 0.14. To match the spot size, a Gaussian beam with a focal ratio of f/5.27 (measured to the 1/e^2 point) would need to be used; any deviation in the f/# would lead to a reduction in the coupling efficiency. This corresponds to a beam NA of 0.095 measured to the 1/e^2 points, or a NA of 0.15 measured to the 1% points (more typical for fibres), which is just above the cutoff of the fibre. What is important is that the spot size of a Gaussian beam is matched to the MFD for optimum performance. In cases where the input beam is not Gaussian, the optimum beam parameters to maximise coupling need to be calculated using the overlap integral method described in the following section. Simply matching the beam sizes for two dissimilar profiles is typically a good starting point, but some optimisation is required.

§.§ Considerations for optimising the PSF for efficient coupling

In order to determine how to optimise the coupling of light into a SMF, it is instructive to first examine the PSF of a telescope. A telescope that is uniformly illuminated by starlight will form an Airy pattern in the focal plane. The PSF observed at Subaru Telescope is shown in the top panel of Fig. <ref> as an example, with a cross-sectional line profile offered in the bottom panel (red curve). The diffraction rings around the PSF are the result of the pupil having a finite extent. The phase across the central core of the PSF is constant. Between each concentric ring is a dark region where the phase flips by π. The phase within each ring is constant and continues to flip for each successive ring. It is this constantly flipping phase that induces destructive interference when attempting to couple light from an Airy pattern into a Gaussian mode, and this sets the upper limit on the coupling. The coupling efficiency can be calculated by using the overlap integral of the two fields, given by

η = |∫ E_1^* E_2 dA|^2 / ( ∫ |E_1|^2 dA ∫ |E_2|^2 dA ),

where E_1 and E_2 are the two complex fields (including amplitude and phase) that are to be overlapped <cit.>. The maximum theoretical coupling efficiency of an unobstructed circular aperture into a SMF is ∼80% <cit.>. An explanation of where this limit comes from is provided in section <ref>. The cross-sectional line profile of the optimum-fit Gaussian to the Airy pattern, displayed as the yellow curve in the bottom panel of Fig. <ref>, was determined by evaluating Eq. <ref>. The top panel of Fig. <ref> also clearly shows strong diffraction from the telescope spiders. These features push light away from the central peak of the Gaussian mode, reducing the coupling as well. Finally, quasi-static aberrations in the optical train, which manifest as asymmetries in the PSF core and Airy rings, indicate that the wavefront of the system is not perfectly flat, further reducing the coupling efficiency. The following subsections examine the effect of each of these in turn and outline ways to remedy them.

§.§.§ The central obstruction

There are very few unobstructed telescope pupils at most large observatories.
The presence of a central obstruction (an obstructed circular aperture) moves power out of the central core of the PSF and into the rings (top panels of Fig. <ref>). The left image in the top panel shows a simulated PSF for an unobstructed pupil as compared to the right image, which is for an almost completely obstructed pupil. Both images are logarithmically stretched, and it can be seen that successive diffraction rings have more similar peak surface brightness across the image in the case of the highly obstructed pupil. The bottom panel of Fig. <ref> shows the coupling efficiency into a SMF as a function of the size of the secondary obstruction (red curve). Overlaid in the figure are lines that correspond to the size of the secondary obstruction for various well-known observatories. As the relative size of the secondary increases, the coupling efficiency decreases monotonically, becoming quasi-linear for relative secondary sizes beyond 20%. Since the optimum coupling efficiency is achieved by only including light from the PSF core, as explained above and in section <ref>, moving more light into the Airy rings via a larger secondary obstruction is expected to reduce the coupling efficiency, as seen in the simulation; this was first described by <cit.>. DKIST <cit.>, a telescope optimised for high-contrast imaging observations of the Sun and the only off-axis, unobstructed telescope of those chosen, offers the greatest coupling (∼81%). The Gemini Observatory offers the next best coupling efficiency given its relatively small secondary obstruction (∼80%). Subaru Telescope has a maximum coupling efficiency of 65%, which is similar to Keck, while the lowest coupling efficiency is offered by the telescopes optimised for wide-field imaging, Skymapper and LSST. This chart demonstrates that it is important to consider the suitability of the observatory for SMF injection, as the final performance will depend on the geometry of the telescope used.

§.§.§ Telescope spiders

The yellow curve in Fig. <ref> (bottom) shows the same coupling efficiency as described above, but this time taking into account the spiders at Subaru Telescope, where the experiments in this study were carried out. The coupling efficiency is reduced by ∼10% across the board. This is a result of the light being diffracted into high-spatial-frequency features, which form a cross-like pattern about the PSF and have a poor overlap with the mode of the fibre (top panel of Fig. <ref>). The 10% loss is higher than for other observatories because Subaru has a heavy prime focus unit that requires thicker supports, and because the ExAO system (SCExAO) utilises a fixed pupil mask with slightly oversized spiders for calibration purposes (the spider thickness inside the ExAO system is 4.8% of the outer pupil diameter). Again, by carefully selecting a telescope with thinner secondary supports, it is possible to minimise this coupling loss. It should be made clear that the 10% shown in the figure is only the drop in coupling efficiency; it does not account for the attenuation caused by the spiders obscuring a portion of the pupil. A method for removing the effect of the spiders has been proposed and is outlined in section <ref> for completeness.
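To make the role of Eq. (<ref>) concrete, the short sketch below numerically evaluates the overlap integral between the PSF of a centrally obstructed pupil and a Gaussian fibre mode. This is a minimal illustration, not the simulation code used in this work: the grid size, padding and waist-search range are arbitrary choices, and spiders and wavefront errors are ignored.

```python
import numpy as np

N, M = 256, 1024                     # pupil diameter in samples; padded grid size
x = (np.arange(M) - M // 2) / (N / 2)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)                   # pupil radius in units of the aperture radius
lam_over_D = 1.0 / N                 # PSF scale in cycles/sample

u = np.fft.fftshift(np.fft.fftfreq(M))
U, V = np.meshgrid(u, u)
Rf = np.hypot(U, V)                  # focal-plane radial coordinate

def coupling(obstruction):
    """Best coupling of the pupil's PSF into a Gaussian mode, scanning the mode waist."""
    pupil = ((R <= 1.0) & (R > obstruction)).astype(float)
    E = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))   # focal-plane field
    best = 0.0
    for w in np.linspace(0.6, 1.4, 33) * lam_over_D:            # 1/e field radius of the mode
        mode = np.exp(-(Rf / w) ** 2)                           # Gaussian fibre mode, flat phase
        num = np.abs(np.sum(np.conj(E) * mode)) ** 2            # |int E1* E2 dA|^2
        den = np.sum(np.abs(E) ** 2) * np.sum(mode ** 2)        # int |E1|^2 dA * int |E2|^2 dA
        best = max(best, num / den)
    return best

for eps in (0.0, 0.31):              # unobstructed vs a Subaru-like 31% obstruction
    print(f"obstruction {eps:.2f}: eta = {coupling(eps):.2f}")
# Expect values near the ~80% (unobstructed) and ~65% (31% obstruction) figures in the text.
```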
§.§.§ Quasi-static and turbulent aberrations

Besides the diffractive effects described above, maintaining a flat wavefront is critical to maximising the coupling efficiency, especially when operating at a ground-based observatory. Aberrations that distort the wavefront can be divided into two main categories: quasi-static and turbulent. Here we offer some qualitative insight into how each can be addressed to optimise the coupling into the fibre. Quasi-static aberrations are the result of defects in the optical train. These can be low-order aberrations, well represented by Zernike polynomials, which come from optical misalignment or low-quality optics. They can also be high-order aberrations induced by the fabrication process of the optics. They are termed quasi-static because they evolve slowly with time as various optical elements move to track the star (i.e. the telescope pointing, the image rotator, and so on). Turbulent aberrations, on the other hand, are the result of light propagating through the turbulent atmosphere. These are much faster and vary on the coherence time of the atmosphere (∼5 ms for Maunakea). Both types of aberration distort the flat stellar wavefront present at the top of the atmosphere, reducing the coupling into a SMF. Indeed, the coupling is correlated with the Strehl ratio, so if the cumulative wavefront error is large (and the Strehl ratio is low), the coupling will also be low. We come back to this point in the experimental section.

To compensate for these aberrations and restore a flat wavefront, wavefront control is required. An AO or ExAO system is critical to address the turbulent aberrations, and for this reason we employ the pyramid wavefront sensor in the SCExAO instrument in this body of work <cit.>. However, a pyramid wavefront sensor has no knowledge of what a flat wavefront looks like: it is a relative wavefront sensor and can only maintain the wavefront with respect to some reference. Therefore, in order to set the reference point of the wavefront sensor to a flat wavefront, and remove all quasi-static aberrations in the optical train, an absolute wavefront sensor must be used in addition to the pyramid wavefront sensor. There are numerous solutions to this, but one wavefront sensor recently demonstrated on-sky that could be used for this purpose is the asymmetric pupil Fourier wavefront sensor <cit.>. The sensor relies on the Fourier analysis of focal-plane images with an asymmetric mask introduced into the pupil of the instrument. It allows the aberrations in the pupil to be determined by examining the image in the focal plane, that is, where the SMF is located. It was tested both on- and off-sky on the SCExAO instrument, operating in the H-band on the lowest 10 or so Zernike modes, and demonstrated that it can indeed drive the wavefront to a flat solution; it was therefore used throughout this work. Regardless of which sensor is used, it is clear that a combination of wavefront sensors will be required to deal with both fast and slowly varying aberrations with a range of amplitudes in order to provide a flat wavefront for injection into a fibre.

§.§ Pupil apodisation

Pupil apodisation allows the edges of the pupil to be softened, which eliminates the Airy rings around the PSF in the focal plane. A promising method that offers lossless pupil apodisation is Phase Induced Amplitude Apodisation (PIAA) optics <cit.>.
PIAA was originally intended for high-contrast imaging of exoplanets at very small angular separations. The concept relies on geometrically remapping the rays in the pupil in order to redistribute the light. This is typically done with two optics: one to push some of the rays at the edge of the pupil inwards, and another to recollimate the beam. It was first practically demonstrated in the laboratory by <cit.> within an early version of the SCExAO testbed. The apodised pupil profile after the PIAA lenses utilised in SCExAO is shown in the top panel of Fig. <ref>, designated as the original PIAA design. It can be seen that the secondary obstruction is completely eliminated by these lenses, as the profile peaks at the centre of the pupil (0 mm). Also, the intensity of the beam decays at larger pupil radii but does not reach 0. This was a deliberate feature of the design process: it was chosen to simplify the sag profiles of the two CaF_2 lenses and minimise features with small radii of curvature, which were hard to manufacture by diamond turning at the time. Since the profile does not reach 0, the system is designed to work in conjunction with a binary mask, a glass disk with thin concentric rings of opaque material that increase in number towards the edge of the pupil and selectively attenuate the light (this is described in detail in <cit.>). In this way the apodisation profile is controlled in two stages. The binary mask is by its very nature a lossy device and reduces the ∼86% throughput of the uncoated PIAA lens pair to ∼55% <cit.> in the H-band. The apodisation profile for a second-generation PIAA design is also shown in Fig. <ref>, designated as the new PIAA design. This design does not entirely eliminate the secondary but apodises the pupil in a single step, mitigating the need for the binary mask and hence preserving high throughput. This set of PIAA lenses has not been fabricated yet, but with AR coatings the throughput is expected to be 96%. The rest of this work focuses on the original PIAA lenses. Regardless, this shows that careful consideration of the total throughput of the system (throughput of optics and coupling efficiency) must be taken into account when designing an instrument (refer to section <ref> for details).

An image of the simulated apodised PSF was calculated from the pupil illumination presented in the top panel of Fig. <ref> and is shown in the bottom panel on both a linear (left) and log (right) scale. There are indeed no diffraction rings about the PSF, as expected, and the PSF is Gaussian-like (i.e. it decays slowly as one moves further off-axis). It can also be seen that the diffraction due to the telescope spiders is unaffected by the apodisation process, as expected. The cross-sectional line profiles of the apodised PSF and the optimum overlap Gaussian (which represents the mode of the fibre) are represented by the green and blue curves in Fig. <ref>, respectively. The apodised beam is a much better fit to the Gaussian mode of the fibre and indeed yields a vastly improved coupling efficiency of ∼99%, a dramatic improvement over the 65% for the Subaru Telescope pupil without apodisation. The coupling is again systematically reduced by ∼8% once the telescope spiders are included in the simulation, confirming that the apodisation has no effect on those. It is important to note that the PIAA optics must be redesigned for telescopes with different central obstruction sizes.
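The remapping at the heart of PIAA can be illustrated with a simple one-dimensional radial calculation: rays are moved so that the encircled energy of the uniform, annular input pupil matches that of the desired apodised profile. The sketch below is a toy illustration of this principle only; the Gaussian-like target profile and all numbers are assumptions, not the actual SCExAO lens prescription.

```python
import numpy as np

r = np.linspace(0.0, 1.0, 2001)              # normalised pupil radius
eps = 0.31                                   # Subaru-like central obstruction

I_in = np.where(r >= eps, 1.0, 0.0)          # uniform annular input pupil
I_out = np.exp(-2.0 * (r / 0.6) ** 2)        # assumed target apodisation profile

def encircled(I):
    """Normalised encircled energy, E(r) ~ cumulative integral of I(r) r dr."""
    e = np.cumsum(I * r)
    return e / e[-1]

E_in, E_out = encircled(I_in), encircled(I_out)

# The remapping r -> r' sends each input radius to the output radius that
# encloses the same fraction of the total energy.
r_mapped = np.interp(E_in, E_out, r)

for ri in (eps, 0.5, 0.8, 1.0):
    print(f"r = {ri:.2f} -> r' = {np.interp(ri, r, r_mapped):.2f}")
# Rays at the edge of the central obstruction map to the pupil centre
# (removing the 'hole'), while edge rays stay at the edge, so no light is lost.
```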
§.§ Summary of coupling optimisation

Table <ref> summarises the results of the simulations described in Section <ref>. These results are presented for the specific case of the Subaru Telescope central obstruction (31% in size), including spiders and assuming no wavefront aberrations, as this configuration is used throughout the remainder of this work. The table shows the theoretical coupling of the apodised and unapodised pupils into a SMF, reiterating that apodisation is beneficial for boosting the coupling. The throughput of the pupil apodising optics is shown in the central portion of the table. A throughput of 100% is used to denote that no apodisation optics are used in the unapodised case; all other optics in the train are not considered in these values. The final portion of the table shows the combined throughput, defined as the product of the coupling and the throughput of the apodising optics. It can be seen that, at present, the large losses due to the binary mask in the original PIAA design outweigh the benefit of pupil apodisation. However, the new PIAA design is clearly more efficient and will significantly boost the overall throughput of the system in the near future. Despite the fact that there is currently no advantage to using the original PIAA design, we choose to operate the entire optical train as we envision it, with the current optics, to demonstrate the feasibility of the method.

These simulations indicate that pupil apodisation optics, optimised in consideration of the central obstruction of the telescope, can greatly improve the theoretical coupling efficiency. The following sections outline the results of experimentally coupling light into SMFs with the original PIAA optics simulated here, indicated by bold face text in the table.

§ EXPERIMENT

§.§ The SCExAO instrument

In this section we outline the experimental setup upon which the fibre injection concepts discussed in section <ref> were practically tested. An image of some of the hardware for the injection rig and the photometer is displayed in Fig. <ref>. A schematic layout of the testbed used, namely the SCExAO instrument, is shown in Fig. <ref>. The SCExAO instrument was described in detail by <cit.>; here we only focus on the features that pertain to this body of work. The light entered from AO188 (the facility AO system of Subaru Telescope), which offered a Strehl ratio of 30-40% in median seeing conditions in the H-band <cit.>. The light was first collimated by a gold-coated off-axis parabola (OAP) before the pupil was projected onto the 2000-element, MEMS-based deformable mirror (DM) (Boston Micromachines Corporation). The light then passed through a fixed pupil mask that resembled the spider geometry of the telescope (with slightly oversized spiders for ease of alignment). Wavelengths shorter than 950 nm were reflected by a dichroic immediately after the mask and directed up a periscope onto another bench to the pyramid wavefront sensor (PyWFS). The PyWFS is the high-order wavefront sensor, which enables both higher spatial and temporal modes of the turbulence to be corrected than with AO188 alone. The PyWFS, still undergoing commissioning during the period of this body of work, has achieved a Strehl ratio of up to 80% in the H-band and operates on 1000+ modes at 2 kHz with a latency of ∼1 ms.
When fully commissioned, the sensor will deliver a PSF with a 90% Strehl ratio in the H-band in median seeing conditions on targets brighter than 9^th magnitude. The infrared (IR) light transmitted by the dichroic was next incident on the PIAA lenses discussed in the section above. These can be deployed into the beam at any time or retracted entirely. An achromatic 90:10 beamsplitter plate was used to direct 90% of the flux from y- to K-band towards the fibre injection; this can be removed from the beam if needed. The beam was then focused by an OAP (f = 519 mm) and directed by several flat mirrors to the focus. The throughput from the input of SCExAO to the fibre-injection rig was 78% across the J- and H-bands without the PIAA optics and 43% with the uncoated PIAA optics and the lossy binary mask. However, with anti-reflection (AR) coated PIAA lenses designed to operate without the binary mask, most of the 78% throughput of the bench could be recovered in future. From <cit.>, the total throughput from the top of the sky to the plane of injection in the H-band was ∼48% without and ∼26% with the current PIAA optics. The beam transmitted by the beamsplitter was directed via a series of gold-coated OAPs and mirrors to the internal near-IR camera (Axiom Optics, OWL SW1.7HS). The internal near-IR camera was used to align the instrument in both the pupil and focal planes and was the camera used to flatten the wavefront with the asymmetric pupil Fourier wavefront sensor <cit.>. For optimum performance of this sensor in the context of this work, the detector should have been located in the same plane as the SMF to minimise the effect of non-common path aberrations; unfortunately, this was not possible in our setup. The optimum coupling was determined while the PSF was aligned on a so-called hotspot on the camera. This hotspot was used as a reference, and when the PSF was returned to this position, the coupling into the fibre was maximised. Although not used in these experiments, a low-order wavefront sensor (LOWFS) could be used in the future to stabilise the PSF against low-order modes, especially tip/tilt. More information is provided in section <ref>.

§.§ Fibre injection assembly

The injection consists of a five-axis stage (Newport, M-562-XYZ and 562F-TILT) that allows the fibres to be translated around the focal plane in X, Y and Z with the aid of computer-controlled stepper motors (Zaber, T-NA08A25), which offer a minimum step size of 50 nm and a unidirectional repeatability of <1 μm. Tip/tilt can also be adjusted via manual micrometers and was used to align the core of the optical fibre with the optical axis. This was done by reverse-injecting red light into the fibre, looking at the beam in the pupil, and steering the axes until the Gaussian illumination was centred on the pupil. This was only done once, when the rig was first installed. In front of the rig, an f = 7.5 mm achromatic lens (Thorlabs, AC050-008-C) was used to alter the speed of the beam as it entered the fibre. The fibre positioning rig and the lens were co-mounted on a carriage that rides on two parallel rails. By moving the carriage along the optical axis, that is, changing the distance between the small lens and the OAP, it was possible to adjust the focal ratio of the injection. The approximate range of f/# achievable was from 3 to 25. Atop the rig, a bracket was mounted that could host three SMA-connectorised fibres at any one time.
A standard telecommunications-grade SMF was used for the experiments (Corning, SMF-28-J9), as well as a large-core (365 μm, NA = 0.22) step-index multimode fibre (MMF) (Thorlabs, FG365LEC) for calibration purposes. Using the stage, it was possible to manoeuvre each of the fibres into the focus of the beam in turn; to swap between fibres, the stage was simply translated laterally. The SMF-28-J9 fibre supported a 10.4 μm 1/e^2 mode field diameter at 1550 nm. From the simulations carried out in the top panel of Fig. <ref>, the size of the optimum PSF was determined with respect to the Gaussian mode of the fibre. This was used to calculate that the optimum focal ratio for coupling efficiently into SMF-28-J9 at 1550 nm was f/5.3±0.1 (we note this assumed that the apodised beam behaved as a Gaussian beam). The focal ratio of the beam was governed by the distance between the focusing OAP and the lens on the rig. To set the position of the small lens and hence obtain the desired f/#, the following equation was used:

s_lens = f_lens ( 1 - f_oap / ( f# · d_oap ) ),

where s_lens is the distance of the small lens from the original focus of the OAP, f_lens is the focal length of the small lens (7.5 mm), f_oap is the focal length of the OAP (519 mm) and d_oap is the diameter of the beam at the OAP (∼8 mm). This equation is based on the thin lens equation and was used to roughly position the small lens with respect to the OAP initially (a short numerical evaluation of this relation is given below).

§.§ Photometer

The SMF and MMF were routed to the photometer, which is shown in Fig. <ref>. The aim was to reimage both beams onto a single photodetector. The MMF output was first collimated (Thorlabs, F220FC-1550) before 10% of the light was transmitted through a 90:10 beam cube and refocused (using Thorlabs AC080-020-C and C230TMD-C lenses) onto a sensitive photodetector. The InGaAs variable-gain photoreceiver was a single-pixel device with a 300 μm diameter and a noise-equivalent power of 7 fW Hz^-1/2 in the highest gain setting (FEMTO, OE-200-IN2). The light in the SMF was first passed through an achromatic 90:10 fibre splitter (Thorlabs, TW1550R2F2), where 10% of the light was fed to the photometer. This beam was also collimated (Thorlabs, F220FC-1550) before 90% was reflected by the beamsplitter cube, directing it through the same focusing lenses onto the same detector. The dual-lens focusing combination was used to ensure that the 365 μm core of the MMF was cleanly imaged within the footprint of the 300 μm detector at all wavelengths. The 365 μm core fibre was chosen based on the results of simulations presented in section <ref>. A set of narrowband filters (25-50 nm bandwidths) spanning the J- and H-bands, from 1250 to 1600 nm, was used in an automated filter wheel. The 90% port of the splitter was routed towards a compact photonic spectrograph that is not discussed here (refer to <cit.> for further information). By using the two splitters in this way, it was possible to obtain similar flux levels between the SMF and MMF, a necessity when trying to keep both signals within the limited dynamic range of the detector. The MMF was used to calibrate the absolute flux in the SMF. This was done by first collecting a data set with the SMF, then moving the MMF into the focus, and dividing one signal by the other. This also meant that only one fibre was in the focus at any one time (single-source observations). Data sets of 15 s in duration were collected for each spectral channel and fibre type, with a sampling rate of 1 kHz.
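Returning to the injection equation flagged above, the sketch below evaluates the carriage position for the focal ratios used in this work. It is an illustrative check only: it uses the quoted values (f_lens = 7.5 mm, f_oap = 519 mm, d_oap ≈ 8 mm), and the sign of s_lens simply follows the paper's convention for the lens position relative to the original OAP focus.

```python
f_lens, f_oap, d_oap = 7.5, 519.0, 8.0        # mm, values quoted in the text

def s_lens(f_number):
    """Distance of the small lens from the original OAP focus, per Eq. (2)."""
    return f_lens * (1.0 - f_oap / (f_number * d_oap))

for fn in (3.0, 5.3, 25.0):
    print(f"f/{fn:4.1f}: s_lens = {s_lens(fn):8.1f} mm")
# The ~140 mm of travel between f/3 and f/25 sets the length of rail that
# the carriage must cover.
```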
After a data set was collected, a background signal with no flux landing on the detector was also collected. The background signal was averaged and then subtracted from the data before any further calculations were undertaken. The processed data were then used to determine the coupling efficiency presented in the following section.

§ LABORATORY RESULTS

§.§ Coupling efficiency into a SMF

With the aim of determining the practical limitations of coupling into a SMF from an apodised telescope pupil, a detailed laboratory characterisation was undertaken, as outlined in this section. To ensure maximum coupling, the wavefront was first flattened using the asymmetric pupil Fourier wavefront sensor <cit.>. This was done in the H-band (1500–1650 nm) using the internal near-IR camera, as outlined above. Although this minimised the chromatic residuals, it did not necessarily address non-common path aberrations, as the detector was not located in a common focal plane with the fibre injection. It was nevertheless a good starting point, as all optics in SCExAO were specified to a flatness better than λ/20 at 633 nm (RMS), minimising non-common path errors by design (discussed further below).

The light coupled into the SMF was optimised in X, Y and Z using the photometer. The carriage supporting the injection was translated along the rails, varying the f/#. At each point, the flux was optimised for the SMF and MMF, and data sets (described above) were collected in all spectral channels. The coupling efficiency as a function of the focal ratio is shown in Fig. <ref>. We note that the loss due to Fresnel reflection from the input and output surfaces of the fibre (∼4%) has been removed from these values. The error bars are based on the fluctuation in the power during data acquisition. This fluctuation primarily comprises the relative motion of the injected beam with respect to the SMF core, due to vibrations at the Nasmyth platform and/or gusts from the air conditioning creating bench turbulence, but also includes minor fluctuations in the light source.

The figure highlights several key features. Firstly, the coupling efficiency is optimum over a narrow range of carriage positions (i.e. focal ratios) and degrades on either side. Although not directly validated, the peak in the curve for 1550 nm at ∼124 mm is likely to correspond to the optimum focal ratio of f/5.3 or thereabouts. Secondly, the coupling efficiency is not equal at all wavelengths: the highest coupling efficiency of ∼78% was at 1500/1550 nm, reducing to 55% at the shortest wavelengths. This chromatic coupling could be the result of a number of things. The first is a differential dispersion between the mode field diameter and the beam size in the focal plane. The second is residual wavefront error, which is larger at shorter wavelengths; however, this could not account for the full 30% difference in coupling between the H- and J-bands. Finally, it could be due to chromatic aberrations in the PIAA lenses. Although the lenses were fabricated from CaF_2, there may be some uncharacterised chromatic aberration. Regardless, the coupling efficiency was high across all wavelengths. This leads to the third point: the coupling efficiency almost reaches the theoretical limit of ∼91% (highlighted by the shaded red region in Fig. <ref>), taken from Table <ref>. Indeed, the data show that a coupling of 86% of the theoretical limit was achieved.
This minor loss in coupling could be attributed to residual non-common path errors between the asymmetric pupil Fourier wavefront sensor detector plane and that of the fibre injection, and/or to imperfections in the PIAA optics. Static Zernike mode profiles with varying amplitudes were applied to the DM in linear combinations by a downhill simplex algorithm, and it was determined that there were negligible non-common path errors to first order. A more careful scan with higher-order modes should be conducted; this may improve the coupling, but we place an upper limit of a few percent at best on the level of improvement that would be possible. Imperfections in the PIAA optics are believed to be the dominant source of coupling efficiency degradation. However, it is difficult to identify and quantify such imperfections from the PSF image alone, so we leave this as an item for future exploration.

Figure <ref> shows the results in the regime of a near-perfectly flat wavefront. For use on-sky, however, it is important to understand the performance of the injection as a function of the wavefront residuals after AO correction. This was tested in a laboratory setting using SCExAO's turbulence simulator. An overview is presented in section <ref>, while full details can be found in <cit.>. Data were collected with the photometer while the turbulence simulator was adjusted over a range of settings corresponding to various levels of wavefront correction. Figure <ref> shows the coupling efficiency as a function of the measured Strehl ratio. The same result, plotted as a function of the calculated RMS wavefront error, is presented in Fig. <ref> in section <ref> for completeness. It can be seen that the coupling efficiency (η) increases linearly as a function of Strehl ratio, which is an important relationship to understand for future instrument development. The equation of the line of best fit for the data at 1550 nm is given by η = Strehl ratio × 0.74 + 1.84 (%). We note that the Strehl ratio needs to be input as a percentage, and η will be expressed as one. Also, the error bars increase as the Strehl ratio decreases: at lower Strehl, the instantaneous Strehl fluctuates as a result of the varying wavefront error, reducing the stability of the coupling.

The theoretical maximum coupling efficiency for the Subaru Telescope pupil geometry, assuming no apodisation, is shown in both figures (blue horizontal line) for reference. By using the apodisation optics, we have surpassed the maximum coupling possible without the lenses, demonstrating this application of the lenses for the first time and justifying their use. Indeed, the performance of a perfectly optimised injection without the lenses would be surpassed by a system using the lenses with a beam degraded to a Strehl ratio of 75% (between 1500 and 1600 nm). With ExAO levels of wavefront correction (90% Strehl ratio), a coupling efficiency as high as 67±2% could be achieved in the range 1500–1600 nm. This is a highly efficient injection that could be justified in future instrumentation if access to an ExAO system were possible. Another useful value to keep in mind, for several reasons, is a coupling efficiency of 50%, which can be achieved with a Strehl ratio of 65%.
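As an illustration of how the linear fit above can be used deterministically, the short sketch below forecasts the expected coupling at 1550 nm for several levels of AO correction. The fit coefficients are those quoted in the text; the Strehl-ratio scenarios are illustrative.

```python
def eta_1550(strehl_percent):
    """Coupling efficiency (%) at 1550 nm from the quoted linear fit."""
    return 0.74 * strehl_percent + 1.84

for label, sr in (("ExAO", 90), ("good AO", 50), ("standard AO", 20)):
    print(f"{label:12s} SR = {sr:2d}%: eta ~ {eta_1550(sr):4.1f}%")
# ~68%, ~39% and ~17%, in line with the >67%, ~40% and ~18-20% figures
# discussed in the text.
```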
Firstly, this 50% value is within a factor of 1.8 of the theoretical maximum (91%), which would require significantly improving the wavefront error and pushing the Strehl ratio towards ∼90% on-sky (as well as addressing the unknown losses). Doing so, however, requires complex multi-layered wavefront control systems and median or better-than-median seeing, free from telescope tracking limitations/vibrations. Sacrificing a factor of ∼2 in flux in order to minimise the complexity of the AO loop control and/or to access nights without better-than-median seeing (i.e. more nights) would be an acceptable trade for most targets. Secondly, this value is within a factor of 2 of the maximum achievable with the MMF used in this experiment (much larger than those typically used for spectroscopy), and of the order of the coupling typically obtained on-sky for MMF-fed spectrographs (this depends on how the PSF, AO-corrected or not, is matched to the core size of the MMF and varies on a case-by-case basis). Taking this into account, it is clear that it would be beneficial to use a SMF instead of an MMF to feed a spectrograph, eliminating modal noise, when operating in even a 50% Strehl-ratio regime, which can be achieved by conventional AO systems in good seeing conditions or at longer wavelengths.

§.§ Coupling efficiency into a mode-converting photonic device

A photonic lantern is a mode-converting device that splits the light from a single MMF into multiple SMF outputs <cit.>. The device was embraced by astronomers to convert the seeing-limited light of a telescope into multiple diffraction-limited beams so that photonic components could be exploited <cit.>. In the context of this work, a photonic lantern would relax the requirements on mode matching, as the input is an MMF, while still delivering diffraction-limited performance. It is therefore interesting to see how it compares. To examine this, the coupling efficiency was measured for a device consisting of seven SMFs, typically referred to as a 1-by-7 photonic lantern. The device was constructed by tapering down seven SMF-28 fibres in a single fluorine-doped capillary tube. The core size of the MMF end was ∼39 μm, and based on this and the known refractive index contrast of the core, simulations predicted that the lantern supports the right number of modes to facilitate an efficient transition to the SMFs. An image of the photonic lantern used and a microscope image of the MMF end of the lantern are shown in Fig. <ref>. The SMFs at the output of the lantern were spliced to a V-groove array (an array that hosts several fibres in a linear arrangement, with the output facets of all fibres terminated in a flat polish). The coupling efficiency for the lantern at 1550 nm was measured using a power meter instead of the photometer setup, and the light was injected at the optimum focal ratio for the SMF (i.e. ∼5.3). The results are shown in grey in Figs. <ref> and <ref>. It is clear that the coupling efficiency has a near-linear dependence on Strehl ratio, as was the case for the SMF, albeit with a shallower slope. Although the peak coupling efficiency in the diffraction limit is only 60%, as compared to 80% for the SMF, the coupling drops off more slowly as the Strehl ratio is reduced, so that even at very low Strehl (10%, for example) a coupling efficiency of nearly 38% can be maintained. This is attributed to the fact that the lantern supports more than one mode, enabling more efficient capture of the light when it is not well confined to the core of the PSF.
It is important to clarify that the photonic lantern used in these experiments was not optimised for this work at the fabrication stage. The focal ratio was also not optimised; based on mode-profile calculations for the tested device, a higher focal ratio would be expected to better match the fundamental mode of the lantern. In fact, the simulations revealed that the fundamental mode is not perfectly circularly symmetric, which could place an upper limit on the maximum coupling possible into the apodised beam, but the associated loss is not expected to be more than a few percent. Finally, the effect of optimising the wavefront at the entrance of the fibre with the DM was not investigated either. The peak coupling of 60% is therefore a lower limit to what can be achieved.

§ ON-SKY RESULTS

The performance of the fibre injection was tested on-sky in March 2016 during the SCExAO engineering run. This was done to 1) validate the performance of the injection system in a real-world setting and 2) confirm that the relationship between Strehl ratio and coupling holds and can be exploited for planning observations. The PyWFS on SCExAO was undergoing commissioning at the time and was not fully operational, but did offer an improvement in Strehl ratio over AO188 alone. It has achieved a Strehl ratio as high as 80% in the H-band, but cannot currently sustain this level of performance in anything but better-than-median seeing conditions during periods free from vibrations. A preliminary data set collected during periods of strong telescope vibration on the 18^th of March can be found in section <ref>.

On the night of the 21^st of March 2016, the conditions were photometric and the seeing was 0.6" in the V-band. The fibre injection was tested on Alpha Hydrae (K3 spectral type, R-mag = 0.93, H-mag = -1.05). Before inserting the injection, an image of the PSF was recorded for 2 minutes to monitor the wavefront control performance. The average Strehl ratio during this period was 69.0±3.4% (in the H-band). After the fibre injection tests were complete, a single Strehl ratio measurement indicated that the Strehl had dropped to 57.5%. The conditions may have deteriorated slightly during the period of data collection, and we therefore take the average of these values, 63±7%, as the approximate Strehl ratio during the time of data collection. The star was steered onto the hotspot pre-registered on the internal NIR camera. The PIAA optics and mirrors were inserted to divert the light to the fibre injection, and flux was detected on the photometer. The fibre position was tweaked slightly to maximise the flux; no active tracking of the fibre position was implemented. Numerous data sets were collected on this target, and the best overall coupling across the entire spectrum at one instant in time is presented in blue in the upper panel of Fig. <ref>.

The average coupling was ∼47% between 1500 and 1600 nm and tapered off slowly at shorter wavelengths, as previously seen in the laboratory. Nonetheless, a decent level of coupling was maintained down to the shortest wavelength. From Fig. <ref>, a coupling efficiency of 47% at 1550 nm corresponds to an average Strehl ratio of 60% in the H-band, which is consistent with the value computed from the focal plane images around the time of data collection. The laboratory data for a Strehl ratio of 60%, presented in Fig. <ref>, are overlaid on Fig. <ref> with a red line.
A good agreement can be seen between the laboratory and on-sky data sets given the measured Strehl ratio of 60%, verifying that the relationship portrayed in Fig. <ref> holds on-sky. The middle panel of Fig. <ref> shows a typical time series in the 1550 nm spectral channel. Overlaid on the data is a solid line representing the average coupling over the 15 s data collection window; the red line shows the data in bins of 150 data points each. The time series reveals that the coupling mostly varies between about 40 and 55% (corresponding to a Strehl ratio range of 52 to 74%) and undergoes rapid, short-lived drops below this level. This can be seen clearly in Fig. <ref>, which shows a histogram of the time-series data. Indeed, the coupling efficiency was above 50% almost 41% of the time and above 40% for 84% of the time; at some points it even approached 60% (a Strehl ratio of 80%). A PSD of the time series is shown in the bottom panel of Fig. <ref> and reveals no specific resonance during this data set.

§ DISCUSSION

The on-sky results are consistent with the relationship shown in Fig. <ref>. This is important, as it means that one can use knowledge of the Strehl ratio to determine the level of coupling that could be obtained from the fibre injection on any given night. The relationship in Fig. <ref> only applies to the fibre injection in SCExAO and its systematics; a similar relationship should be developed for any other AO system/fibre injection of interest. Once such a relationship has been developed, it can be used deterministically to calculate expected levels of coupling, for use in future instrument design concepts or even in data reduction.

In addition, the results presented in Section <ref> highlight that the coupling efficiency on-sky is completely limited by the Strehl ratio: with a higher Strehl ratio available, higher coupling efficiencies can be achieved. These results also compare well with previous on-sky attempts at coupling light into SMFs with non-apodised beams. If the stability of the PSF in this work were improved by, for example, improving the sensitivity of the PyWFS to tip/tilt, deploying a Linear Quadratic Gaussian (LQG) control loop, which uses predictive control to drive the PyWFS to notch out any resonances <cit.>, and/or running the LOWFS, the coupling would be more stable with time. This would also make it easier to tweak and optimise the alignment of the fibre on-sky in order to maximise the flux. These are future upgrades that will no doubt improve performance, but we have shown with preliminary on-sky data that efficient coupling into a SMF from a large (8-m class) telescope with apodisation optics is possible even with an extreme AO system that is not yet fully operational.

The data presented in Fig. <ref> allow a future instrument designer to weigh the relative merits of a SMF versus a photonic lantern as an instrument feed for spectroscopy. The photonic lantern subdivides the flux, on average, by the number of SMF ports. This means that although the coupling into the lantern is relatively high (38%) at a 10% Strehl ratio, the signal in each port is a factor of 7 lower (equivalent to a coupling of ∼5% into each of the SMF ports) when the light is evenly distributed amongst the cores. This is in fact lower than the coupling directly into a SMF at the same Strehl ratio (10%), and hence the S/N for a single SMF would be higher than that for each of the ports of the lantern.
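The bookkeeping behind this comparison, and the noise considerations discussed next, can be sketched as follows. All numbers here are illustrative assumptions (a 10% Strehl ratio, even splitting across the seven lantern ports, an arbitrary photon budget and read noise), not measurements from this work.

```python
import numpy as np

F = 1.0e4                                        # photons delivered to the focal plane (illustrative)
eta_smf, eta_lantern, n_ports = 0.09, 0.38, 7    # couplings at ~10% Strehl ratio

s_smf = F * eta_smf                              # signal in the single SMF
s_port = F * eta_lantern / n_ports               # signal per lantern port (~5% of F)

# Photon-noise limited: co-adding the 7 ports is equivalent to collecting the
# full lantern flux in one fibre (no S/N penalty from the split)
snr_photon_smf = s_smf / np.sqrt(s_smf)
snr_photon_lantern = (n_ports * s_port) / np.sqrt(n_ports * s_port)

# Read-noise limited: co-adding N ports only buys sqrt(N) over a single port,
# and each port must clear the detector noise floor on its own
sigma_rn = 100.0                                 # effective read noise per extracted spectrum (illustrative)
snr_read_smf = s_smf / sigma_rn
snr_read_lantern = (n_ports * s_port) / (np.sqrt(n_ports) * sigma_rn)

print(f"photon-limited: SMF {snr_photon_smf:6.1f}, lantern (co-added) {snr_photon_lantern:6.1f}")
print(f"read-limited:   SMF {snr_read_smf:6.1f}, lantern (co-added) {snr_read_lantern:6.1f}")
# With these numbers the lantern wins in both regimes because its total
# coupling is ~4x higher; the trade reverses as the per-port signal
# approaches the detector noise floor.
```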
This raises the first important point: one must carefully consider the dominant noise source in the observations. If the star is bright and photon noise dominates, the seven ports of the lantern can be co-added in post-processing, recovering the full signal (a factor of seven in flux) with no S/N penalty relative to having all of the light in a single fibre. On the other hand, if read noise is the dominant noise source, co-adding the ports improves the S/N over a single port by only √N, or 2.6 in this case. Of course, one must also consider the dark noise and ensure that the S/N in each lantern port is sufficient to sit above it. The second important point is that a lantern, by default, requires more pixels than a SMF for an equivalent instrument specification (i.e. spectral sampling). The example of even illumination amongst the seven cores used above is also a little unfair, in that the coupling to the fundamental mode of the lantern will be greater at higher Strehl ratios (i.e. when there is a PSF core, it will have a higher overlap with the fundamental mode than with the others). The fundamental mode of the multimode end of the lantern has a near-perfect correspondence to a single SMF core at the output, based on symmetry and an optimised device design. This means that one port will have a much better S/N than the others and could potentially be on par with a single SMF. This should also be taken into consideration when choosing the optimum collection device for an instrument. Although photonic lanterns are the only efficient alternative to a SMF for getting the light into a diffraction-limited mode at Strehl ratios <10%, it is important to consider the dominant noise process and pixel availability. Moreover, new photonic lantern developments in the area of few-mode optical communications may offer a more optimised device design: for instance, mode-selective photonic lanterns <cit.> could enhance coupling into a smaller number of SMFs (1-3, compared to 7 in the current study) while maintaining the demonstrated higher coupling efficiencies at lower Strehl ratios.

§ SUMMARY

In this body of work, we outline and demonstrate for the first time how the combination of lossless pupil apodisation optics and advanced wavefront control can be used to efficiently inject light into a single-mode device, be it a fibre or a waveguide architecture. The PIAA optics used for pupil apodisation increased the theoretical coupling efficiency of the telescope beam into a SMF from 59% to 91%. In a laboratory setting, we achieved a coupling efficiency at 1550 nm of 86% of the theoretical limit. A study of the effect of Strehl ratio on coupling determined that the two are linearly correlated. This trend was reinforced by the on-sky data, allowing the expected coupling to be determined from the Strehl ratio, a powerful tool for future instrument design. For ExAO levels of Strehl ratio (90% in the H-band), the coupling efficiency should be above 67%, similar to the typical MMF injection efficiency of currently operating spectrographs. For high levels of AO correction (Strehl ratio ∼50% in the H-band), possible in better-than-median seeing conditions on most AO systems (or at longer wavelengths), a coupling of 40% can be achieved. For standard AO correction (Strehl ratio 20–40%), it should be possible to achieve >18% coupling, which is sufficient for brighter targets.
Few-port photonic lanterns become the obvious choice over SMFs for Strehl ratios <20%, but the S/N and pixel availability must be taken into consideration. These devices should not be overlooked for use in the high-Strehl-ratio regime either. The preliminary on-sky SMF results are encouraging, with a 15 s time-averaged coupling efficiency of 47% achieved around 1550 nm. In this case, the coupling was >40% at 1550 nm for 84% of the time. The results clearly show that the coupling efficiency into a SMF is no longer a barrier to exploiting diffraction-limited devices for spectroscopy or interferometry. Future instrument designs should consider the possibility of employing a SMF feed to take full advantage of the spatial filtering and temporally invariant PSF that it offers. Indeed, the next-generation high-resolution spectrograph for the LBT, iLocater, is doing just this <cit.>: it will be the first instrument purpose-built to use a SMF feed. A comprehensive overview of the advantages this presents in the context of stellar spectroscopy was recently given by <cit.>. In addition, the KPIC project at the Keck Telescope will go one step further and exploit the superior speckle-rejection properties of a SMF to feed NIRSPEC, enabling the direct characterisation of known exoplanets <cit.>. Finally, embracing a SMF feed in future instrument design will also help to minimise the footprint of the instrument through the ability to exploit advanced photonic technologies.

The authors acknowledge support from the JSPS (Grant-in-Aid for Research #23340051, #26220704, #23103002). This work was supported by the Astrobiology Center (ABC) of the National Institutes of Natural Sciences, Japan, and the director's contingency fund at Subaru Telescope. This research was also supported by the Australian Research Council Centre of Excellence for Ultrahigh bandwidth Devices for Optical Systems (project number CE110001018). The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.

§ APPENDIX

§.§ Limit of coupling an Airy beam into a SMF

To understand this limit, it is instructive to study the bottom panel of Fig. <ref>, which shows the cross-sectional line profile of an Airy pattern (red curve) and the corresponding best-overlap Gaussian (yellow curve). The profile of the optimum Gaussian is nearly 0 (<1%) around the first null of the Airy pattern. This indicates that optimum coupling is achieved when only light from the PSF core is collected by the fibre. The reason is that the light in the first ring is out of phase with that of the PSF core, as described above; if it were injected into the Gaussian mode, it would lead to destructive interference and a reduction in the coupling efficiency (first described by <cit.> and <cit.>). Therefore, optimum coupling between an Airy pattern and a Gaussian occurs when the Gaussian is well matched to the size of the Airy core but does not include light from the rings.

§.§ Eliminating losses due to spiders

A method for eliminating the presence of the spiders was proposed by <cit.>. They outlined the concept of a spider removal plate, which consists of four equally thick, flat pieces of glass that are used to shift the four illuminated regions of the pupil (between the spiders) inwards, reducing the effective size of the spiders.
In practice this is done by tilting each of the parallel pieces of glass at a constant angle with respect to the optical axis in a collimated space. However, the device they demonstrated introduced aberrations to the beam, so it was deemed unfit for use and has therefore not been used in the work presented here either. With improvements in the manufacturing process, this could be used in future. §.§ Low-order wavefront sensing The LOWFS uses the light diffracted by a mask in the focal plane downstream of the beamsplitter to monitor the changes to the shape of the PSF <cit.>. The diffracted light is reflected by a reflective Lyot stop towards the LOWFS camera, operating with light from Y- to H-band. This is primarily used for coronagraphy and has achieved an on-sky tip/tilt residual of 0.15 mas in H-band <cit.>. The ability to stabilise the tip/tilt at the wavelength of operation, close to the focal plane of injection and at a speed of up to 170 Hz, is valuable for efficient long-term coupling and will be explored in detail in the future. §.§ Absolute flux calibration The MMF was used to calibrate the absolute flux in the SMF. This was done by first collecting a data set with the SMF, then moving the MMF into the focus, and dividing one signal by the other. This also means only one fibre was in the focus at any one time (single source observations). Simulations were carried out to determine what size of fibre would be sufficiently large to capture the entire PSF, including the diffraction features of the spiders as well as light spread into the halo by, for example, bad seeing/bad AO correction. It was determined that 99% of the flux could be collected by a 365 μm core size fibre, as shown in Fig. <ref>, and using larger fibres would offer little improvement while complicating the optics in the photometer setup. Therefore this size was chosen. The fibre had an approximate extent of 2.4" on-sky. This is approximate as the fibre is used in a post-apodised plane and there is a pupil size and shape change upon remapping that needs to be accounted for accurately. §.§ Turbulence simulator and measuring the Strehl ratio The turbulence simulator uses the DM to generate a constantly moving phase screen, based on a Kolmogorov turbulence profile. The wind speed can be controlled by changing the speed of the phase map passing over the DM. The amplitude of the phase errors can be controlled by the amplitude applied to the DM. Also, the amount of low-order aberrations can be controlled with a single scaling coefficient, that is, a multiplicative factor applied to the lowest Fourier modes of the Kolmogorov turbulence profile. In this way it is possible to mimic the effect of an AO system that typically operates on the lower-spatial-frequency modes of the atmosphere. To accurately determine how much wavefront error was applied in each case, non-PIAA images were collected on the internal NIR camera for each setting of the turbulence simulator. A cube of 1000 images was first dark subtracted before being collapsed into a single frame corresponding to an equivalent single long-integration image (of ∼5 s duration). From this, a well calibrated Strehl ratio calculator was used to extract the Strehl ratio. Using the Maréchal approximation for the Strehl ratio, given by SR ≈ e^-(2πω)^2, where ω is the normalised wavefront error at a given wavelength, the amount of RMS wavefront error was determined.
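As a minimal illustration of this conversion (our own sketch, not the calibrated tool used with the internal NIR camera), the Maréchal approximation can be inverted directly:

import numpy as np

def strehl_to_rms_wfe(strehl, wavelength_nm=1550.0):
    # Invert SR ~ exp(-(2*pi*omega)^2) for the normalised wavefront
    # error omega (in waves), then scale to nm at the given wavelength.
    omega = np.sqrt(-np.log(strehl)) / (2.0 * np.pi)
    return omega * wavelength_nm

# e.g. a Strehl ratio of 0.46 corresponds to ~217 nm RMS at 1550 nm
print(strehl_to_rms_wfe(0.46))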
The RMS wavefront error was calculated at a wavelength of 1550 nm. §.§ Coupling into a SMF versus RMS wavefront error The coupling efficiency as a function of the RMS wavefront error is shown in Fig. <ref>. This way of presenting the data is useful for those who build AO systems and think in units of RMS wavefront error. Overlaid in Fig. <ref> is the Maréchal approximation to the Strehl ratio given the RMS wavefront error (black dashed line). Unsurprisingly this is a good fit to the coupling measurements, as this equation was used to convert the Strehl ratio to wavefront error in the first place, but it is nonetheless reassuring to confirm. In the ExAO regime, where the wavefront error is <80 nm RMS (i.e. a wavefront flatness of <λ/19 at 1550 nm), a coupling >67% can be achieved as outlined above. For standard AO levels of correction, where the wavefront error is <300 nm RMS (i.e. a wavefront flatness of <λ/5 at 1550 nm), a coupling >18% can be achieved. There is clearly an advantage to exploiting an ExAO system for injecting into a SMF; however, for the brightest targets it is possible to use a conventional AO system for this application as well. Indeed, in median seeing, where a conventional AO system can achieve 200 nm RMS wavefront error, the coupling could be as high as ∼40% if PIAA optics are used. This opens the possibility of employing these techniques on many more telescopes. §.§ Preliminary on-sky data Alpha Bootis (Arcturus, K0 spectral type, R-mag=-1.03, H-mag=-2.81) was observed on the night of the 18^th of March. The seeing varied between 0.6" and 1.0" in V-band throughout the night. The average Strehl ratio was measured immediately following data collection with the photometer and determined to be 46.3±2.1% in H-band. We note that the Strehl ratio could not be measured during data acquisition with the photometer. The same alignment process was used as described for the observations on the 21^st of March. The coupling efficiency as a function of wavelength is shown in the upper panel of Fig. <ref>. It can be seen that the coupling reaches 24% between 1500 and 1600 nm. The error bars are much larger than those from the laboratory measurements, indicating a large fluctuation in the coupling as a function of time. The middle panel of Fig. <ref> shows a time series of the coupling efficiency for the 1550 nm spectral channel. The coupling efficiency is indeed heavily modulated, so much so that at some points the coupling is almost zero and at others it is as high as 40%. The black line indicates the mean value of the time series, which represents the data point at 1550 nm in the top panel. The line is positioned halfway up the data set, which is consistent with a strong modulation of the flux. A power spectral density (PSD) plot is shown in the lower panel of Fig. <ref>, which clearly shows that a strong periodic modulation is present at 6.3 Hz. Otherwise the spectrum is nearly featureless. Accelerometers attached to the top ring of the telescope reveal the source of this vibration. Figure <ref> shows the PSD as a function of time for the accelerometer on the elevation axis of the telescope. A strong resonance can clearly be seen at 6.3 Hz, consistent with the results from the photometer. These are known telescope vibrations that manifest as tip/tilt jitter in the focal plane of the detectors and fibre injection, and cause the coupling to oscillate <cit.>. These vibrations are persistent despite both the AO188 and PyWFS control loops operating.
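The kind of spectral analysis used to isolate such a resonance can be sketched in a few lines; here the photometer sampling rate fs and the coupling time series are assumed inputs, and Welch's method stands in for whichever PSD estimator was actually used:

import numpy as np
from scipy.signal import welch

def vibration_peak(coupling, fs):
    # Welch PSD of the mean-subtracted coupling time series; returns
    # the frequency of the strongest component, e.g. a 6.3 Hz resonance.
    f, pxx = welch(coupling - np.mean(coupling), fs=fs, nperseg=1024)
    return f[np.argmax(pxx)]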
During this observation we did not have a working LQG or an operational LOWFS, both of which would help in the future for mitigating these vibrations. The interesting feature here is the amplitude of the vibration. The fact that the coupling was modulated by almost 100% would indicate that either the PSF was optimally aligned but had a peak-to-peak motion of the order of 3 to 4 λ/D (i.e. 1.5–2 full widths at half-maximum in radius), or that the fibre was not optimally aligned in the first instance and the peak-to-peak modulation was 1 to 2 λ/D, post AO correction. Interestingly, the measured Strehl ratio of ∼46% in H-band should deliver a coupling efficiency at 1550 nm of ∼36% from Fig. <ref>. Indeed, if one were to remove the modulation due to vibrations, the coupling efficiency seems to peak around this value (middle panel in Fig. <ref>), highlighting that the results in Fig. <ref> can be used to infer the expected coupling from the Strehl ratio. [Bechter et al. (2016)]bechter2016 Bechter, A., Crass, J., Ketterer, R., et al. 2016, , 9909, 99092X-11 [Bland-Hawthorn et al. (2011)]jbh2011 Bland-Hawthorn, J., Ellis, S. C., Leon-Saval, S. G., et al. 2011, Nat. Comms., 2, 581. [Coudé du Foresto & Ridgway(1992)]Coude1992 Coudé du Foresto, V., & Ridgway, S. T. 1992, European Southern Observatory Conference and Workshop Proceedings, 39, 731 [Coudé du Foresto(1994)]Coude1994 Coudé du Foresto, V. 1994, Very High Angular Resolution Imaging, 158, 261 [Coudé du Foresto et al.(2000)]Coude2000 Coudé du Foresto, V., Faucherre, M., Hubin, N., & Gitton, P. 2000, , 145, 305 [Chazelas et al.(2012)]cha2012 Chazelas, B., Pepe, F., & Wildi, F. 2012, , 8450, 845013 [Crepp et al.(2014)]crepp14 Crepp, J. R. 2014, Science, 346, 809. [Crepp et al.(2016)]crepp16 Crepp, J. R., Crass, J., King, D., et al. 2016, , 9908, 990819-13 [Cvetojevic et al.(2009)]Cvetojevic2009 Cvetojevic, N., Lawrence, J. S., Ellis, S. C., et al. 2009, Optics Express, 17, 18643 [Cvetojevic et al.(2012)]Cvetojevic2012 Cvetojevic, N., Jovanovic, N., Betters, C., et al. 2012, , 544, L1 [Dekany et al.(2013)]Dek2013 Dekany, R., Roberts, J., Burruss, R., et al. 2013, , 776, 130 [Feger et al.(2014)]Feger2014 Feger, T., Ireland, M. J., Bento, J., & Bacigalupo, C. 2014, , 9147, 914780 [Ghasempour et al.(2012)]ghas2012 Ghasempour, A., Kelly, J., Muterspaugh, M. W., & Williamson, M. H. 2012, Proc. of SPIE, 8450, 845045 [Guyon(2003)]guyon2003 Guyon, O. 2003, , 404, 379 [Halverson et al.(2014)]Hal2014 Halverson, S., Mahadevan, S., Ramsey, L., et al. 2014, , 9147, 91477Z [Harris et al.(2015)]Harris2015 Harris, R. J., MacLachlan, D. G., Choudhury, D., et al. 2015, , 450, 428 [Jovanovic et al.(2014)]jov2014 Jovanovic, N., Guyon, O., Martinache, F., Schwab, C., & Cvetojevic, N. 2014, Proc. of SPIE, 9147, 91477P [Jovanovic et al.(2015)]jov2015 Jovanovic, N., Martinache, F., Guyon, O., et al. 2015, , 127, 890 [Jovanovic et al.(2016a)]jov2016a Jovanovic, N., Cvetojevic, N., Schwab, C., et al. 2016a, , 9908, 99080R [Jovanovic et al.(2016b)]jov2016b Jovanovic, N., Schwab, C., Cvetojevic, N., Guyon, O., & Martinache, F. 2016b, , 128, 121001 [Leon-Saval et al. (2013)]saval2013 Leon-Saval, S. G., Argyros, A., & Bland-Hawthorn, J., 2013, Nanophot., 2, 429. [Leon-Saval et al. (2014)]saval2014 Leon-Saval, S. G., Fontaine, N. K., Salazar-Gil, J. R., et al., 2014, Opt. Express, 22, 1036 [Lozi et al.(2009)]lozi2009 Lozi, J., Martinache, F., & Guyon, O. 2009, , 121, 1232 [Lozi et al.(2016)]lozi2016 Lozi, J., Guyon, O., Jovanovic, N., et al.
2016, , 9909, 99090J[Macintosh et al.(2014)]Mac2014 Macintosh, B., Graham, J. R., Ingraham, P., et al. 2014, Proceedings of the National Academy of Science, 111, 12661[Martinache et al.(2016)]Mar2016 Martinache, F., Jovanovic, N. & Guyon, O., 2016, A&A, 593, A33. [Matijevich et al.(2016)]mat2016 Matijevich, R., Johansson, E., Johnson, L., Cavaco, J., & National Solar Observatory 2016, American Astronomical Society Meeting Abstracts, 227, 146.21[Mariën et al.(2012)]Marien2012 Mariën, G., Jovanović, N., Cvetojević, N., et al. 2012, , 421, 3641[Mawet et al.(2017)]dimitri2017 Mawet, D., Ruane, G., Xuan, W., et al. 2017, , 838, 92 [Mennesson et al.(2010)]mennesson2010 Mennesson, B., Hanot, C., Serabyn, E., et al. 2010, , 7735, 773511[Minowa et al.(2010)]Minowa2010 Minowa, Y., Hayano, Y., Oya, S., et al. 2010, , 7736, 77363N[Oppenheimer et al.(2003)]opp2003 Oppenheimer, B. R., Sivaramakrishnan, A., & Makidon, R. B. 2003, Astrophysics and Space Science Library, 289, 155[Perrin et al.(2006)]perrin2006 Perrin, G., Woillez, J., Lai, O., et al. 2006, Science, 311, 194 [Poyneer et al.(2014)]poyneer2014 Poyneer, L. A., De Rosa, R. J., Macintosh, B., et al. 2014, , 9148, 91480K[Schwab et al.(2015)]Schwab2015 Schwab, C., Stürmer, J., Gurevich, Y. V., et al. 2015, , 127, 880[Shaklan & Roddier(1988)]Shaklan1988 Shaklan, S., & Roddier, F. 1988, , 27, 2334 [Singh et al.(2014)]singh2014 Singh, G., Martinache, F., Baudoz, P., et al. 2014, , 126, 586 [Singh et al.(2015)]singh2015 Singh, G., Lozi, J., Guyon, O., et al. 2015, , 127, 857[Spaleniak et al.(2013)]Iza2013 Spaleniak, I., Jovanovic, N., Gross, S., et al. 2013, Optics Express, 21, 27197[Trinh et al.(2013)]Trinh2013 Trinh, C. Q., Ellis, S. C., Bland-Hawthorn, J., et al. 2013, , 145, 51[Vigan et al.(2016)]Vigan2016 Vigan, A., Bonnefoy, M., Ginski, C., et al. 2016, , 587, A55[Wagner & Tomlinson (1982)]wag1982 Wagner, R. E., & Tomlinson, W. J., 1982, Appl. Opt. 21, 2671 [Wang et al.(2017)]Wang2017 Wang, J., Mawet, D., Ruane, G., Hu, R., & Benneke, B. 2017, , 153, 183 [Wizinowich et al.(2000)]Wiz2000 Wizinowich, P., Acton, D. S., Shelton, C., et al. 2000, , 112, 315
1Institute for Digital Communications, University of Edinburgh, Edinburgh, EH9 3JL, UK 2Department of Oncology Physics, Edinburgh Cancer Centre, Western General Hospital, Edinburgh, EH4 2XU, UK

Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source

Mike E. Davies1

Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the X-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single-source polyenergetic modelling often impose consistency with a physical model, such as water–bone or photoelectric–Compton decompositions, which either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach to fitting the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows higher accuracy in attenuation modelling, demonstrate its superior quantitative imaging with numerical chest and metal implant data, and validate it with real cone-beam CT measurements. § INTRODUCTION Physically quantifying reconstructions from CT is highly valuable in some medical practices. In radiation therapy for example, the electron density inferred from CT images of the patient allows the dose deposition from the treatment beam to be accurately modelled <cit.>. Additionally, quantifying the bone mineral density allows osteoporosis to be characterised and the risk of bone fractures to be assessed <cit.>. Due to the non-linear energy dependence of X-ray attenuation however, mapping from a set of raw measurements to a consistent physically quantitative reconstruction is not straightforward, and requires both actively accounting for polyenergetic effects such as beam hardening, and establishing a relation to the quantity of interest. Mapping from CT to physical density is usually treated as a three-step process: linearising the measurements by correcting for scatter, taking the logarithm <cit.> and non-linearly calibrating from a polyenergetic to an approximate monoenergetic source <cit.>; inverting the linearised projections with analytic or iterative reconstruction algorithms <cit.>; then finally applying a non-linear calibration to mass or electron density <cit.>. In general, the mapping from a single polyenergetic to a monoenergetic measurement is an ill-posed problem, since the spectral information of the irradiated material is lost with standard energy-integrating detectors <cit.>. Whilst imaging the human body however, most tissues may be reasonably modelled with a few distinct materials: namely water and bone as in <cit.>. If the amount of each material in a projection is known a priori from a water–bone segmentation, then it is possible to approximately reverse the polyenergetic effects through polynomial fitting or with a look-up table.
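To make the polynomial approach concrete, the following sketch (our own illustration, not code from the cited works) fits a calibration mapping polyenergetic water log-projections to their monoenergetic equivalents; the discretised spectrum b and the water mass attenuation m_w are assumed to be sampled on a common energy grid:

import numpy as np

def water_linearisation(b, m_w, t_max=50.0, order=4):
    # Fit a polynomial mapping polyenergetic log-projections of water
    # back to the equivalent effective-energy (linear) projections.
    t = np.linspace(0.0, t_max, 256)        # water path lengths in cm
    poly = -np.log((b[None, :] * np.exp(-t[:, None] * m_w[None, :])).sum(1)
                   / b.sum())               # polyenergetic log-projections
    mono = t * np.sum(b * m_w) / np.sum(b)  # spectrum-averaged linear target
    return np.polyfit(poly, mono, order)    # calibration coefficients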
For more accuracy, such a correction may also be brought into the reconstruction model, where <cit.> demonstrate statistical quantitative imaging of mass density with the water–bone assumption. Given that the composition is itself highly correlated with density, the prior segmentation may not be necessary, but can instead be estimated during reconstruction <cit.>. Another approach is to model the attenuation explicitly in terms of physical processes, given quantitative physical parameters of interest. One such choice is to model the photoelectric and Compton effects in terms of effective atomic number and electron density, which is reasonably accurate for the relevant elements and energies <cit.>. Given measurements from two sufficiently different spectra, a technique known as dual-energy CT (DECT), a projection of the Compton attenuation can be uniquely determined <cit.>, of which electron density is an analytic function <cit.>. This DECT technique effectively bypasses the need for assumptions such as water–bone compositions, so it should be applicable to a wider range of materials, although one faces practical difficulties in generating the two spectra <cit.>. Additionally, one can expect a loss in accuracy between significantly different elemental compositions, such as soft tissues, bone and metallic implants, since the two-parameter model is not consistent over a wide range of atomic species <cit.>. In <cit.>, the authors introduce an iterative maximum-likelihood polychromatic algorithm for CT (IMPACT), which models the energy independent factors in the photoelectric–Compton model from <cit.> as piecewise-linear functions of monoenergetic attenuation, allowing reconstruction from a single source. This method does not require any prior segmentation, and results are reported on a wide range of materials including metallic implants. By imposing the energy dependence of the two-parameter model however, the physical consistency will also degrade across diverse material types, due to the inconsistency of these parameterisations in effective atomic number and electron density <cit.>. The second conversion, from either reconstructed attenuation in Hounsfield units (HU) in <cit.> or mass density in <cit.> to electron density, is also an ill-posed problem. This is because X-ray interaction depends on the environment of the electrons as well as their density, and will vary considerably with atomic number over imaging energy ranges <cit.>. Again, given that most human tissues have similar properties, a single piecewise-linear fitting is reasonably accurate in practice <cit.>, though it will not be consistent with synthetic materials such as some plastics <cit.>. In both the photoelectric–Compton model and a material decomposition such as the water–bone model, there is some degree of fitting to materials and energies of interest. Instead, one could use a purely data-centric approach. Here, given a representative set of substances, one could use a model that accurately parameterises the energy dependent attenuation in terms of the quantity of interest, without necessarily any physical justification. One such method is to model the energy dependent attenuation as a piecewise-linear function of quantitative density, which may be fit to a set of materials of interest.
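As an illustration at a single energy, a two-piece linear fit with a fixed `knee' k and zero intercept on the first segment reduces to ordinary least squares once the continuity constraint is eliminated by construction; a minimal sketch of our own, with the density samples x and attenuations mu as hypothetical training arrays:

import numpy as np

def fit_two_piece(x, mu, k):
    # Fit mu ~ a1*x for x < k and mu ~ a2*x + b2 for x >= k, with
    # continuity a1*k = a2*k + b2 built in, so the free parameters
    # are just the two slopes a1 and a2.
    lo = x < k
    A = np.column_stack([np.where(lo, x, k), np.where(lo, 0.0, x - k)])
    a1, a2 = np.linalg.lstsq(A, mu, rcond=None)[0]
    return a1, a2, (a1 - a2) * k  # b2 recovered from the constraint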
When the transitions between linear sections are independent of energy, we also get the nice property that the computational order of an iterative method is independent of the number of energies considered. In this article, we study the piecewise-linear quantitative model for CT in detail, and show how it may be incorporated in a regularised iterative reconstruction algorithm. Specifically, this provides a generalised method for directly quantifying the electron or mass density of a heterogeneous specimen, and it is also able to model hard metallic structures without any prior segmentation. In <cit.> we presented a preliminary study using this idea for the specific case of calculating electron density for radiotherapy planning. During the preparation of this manuscript, we became aware of the commercial method DirectDensity™ from Siemens Healthineers® <cit.>, which reports direct reconstruction into relative electron density from a single polyenergetic source. This is a preprocessing technique combining bone detection with a projection-based material decomposition similar to <cit.>. In this article we compare against the water–bone model of <cit.>, which has itself been shown to provide superior performance to that of <cit.>. §.§ Contributions We establish and analyse a general modelling technique that allows direct quantitative reconstruction from a single polyenergetic source, where we study the cases of electron and mass density imaging. Unlike existing approaches that fit to physical parameterisations of the attenuation, such as bone–water or photoelectric–Compton, we fit directly to the data, which we demonstrate is more accurate over a wide range of biological tissues. We show how this model may be incorporated into statistical reconstruction, and propose an algorithm for performing this that allows further convex spatial regularisation to be used. By design, the complexity of using our model does not scale with the number of discrete energies, and it has a computational cost of the order of 2.5× that of standard monoenergetic iterative algorithms. As another consequence of the fitting constraints, one could also quantify the equivalent attenuation from a monoenergetic source (known as `quasi-monoenergetic' in DECT <cit.>), though this is not evaluated in this work. We also demonstrate how this model may directly mitigate metal artefacts, without any need for segmentation. §.§ Article Structure We begin this article with background material in Section <ref> on X-ray attenuation, existing polyenergetic parameterisations, and the probabilistic measurement model we invoke for reconstruction. We then propose our generalised data-centric model in Section <ref>, and outline the cases of electron and mass density quantification for biological tissues, as well as synthetic materials and metal implants. In Section <ref> we demonstrate how to utilise this model in statistical reconstruction, and detail one such algorithm in Section <ref>. The experimentation in Section <ref> evaluates our method with: a model accuracy test in Section <ref> against other physical parameterisations; a numerical reconstruction test in Section <ref> with simulated fan-beam CT of a chest and a pelvis with metallic hip implants; and reconstruction validation on a real physical phantom scanned with cone-beam CT.
We then discuss important considerations in Section <ref>, leading to conclusions in Section <ref>. § BACKGROUND §.§ The CT measurement model In CT, one is able to observe a specimen's attenuation through the radiation intensity after transmission. The magnitude of this is found from the Beer–Lambert law, given for a monoenergetic beam as I_out = I_in exp(-∫_ℓ μ(ℓ) dℓ), where I_in is the incident intensity, ℓ is the line-of-sight path of the beam through the specimen, and I_out is the output intensity one is able to measure. Since in practice μ is energy dependent and the source is typically polyenergetic, the output intensity becomes I_out = ∫_ξ I_in(ξ) exp(-∫_ℓ μ(ℓ,ξ) dℓ) dξ. For a finite number of photons, the measured intensity will be probabilistic with an approximate Poisson distribution <cit.>. If we also move the attenuation, measurements and energy spectrum into a discretised setting, we can write the measurement process as y_i ∼ Poisson{∑_j=1^N_ξ b_i(ξ_j) exp(-[Φμ(ξ_j)]_i) + s_i}, i=1,...,N_ray, where N_ray is the number of CT measurements, N_ξ is the number of energy bins, b(ξ)∈ℝ^N_ray is a vector of incident intensities, μ(ξ)∈ℝ^N_vox is the vector of attenuation coefficients with N_vox the number of voxels, Φ∈ℝ^N_ray× N_vox is the system matrix describing the summation along the paths from the source through the specimen onto each detector, and s∈ℝ^N_ray is the expectation of the scatter or other background noise reaching the detector. §.§ Material dependent X-ray attenuation The mechanism that allows various regions in a heterogeneous specimen to be differentiated is their degree of X-ray attenuation. For biological tissues irradiated with a diagnostic X-ray source, the significant phenomena contributing to the attenuation of incident radiation are photoelectric and scattering effects, the latter consisting of Compton, Rayleigh and Thomson scatter <cit.>. The combined attenuation strength of a given material can be quantified as μ(ξ) = ρ N_g (σ_pe(ξ)+σ_incoh(ξ)+σ_coh(ξ)), where ξ is the energy of the incident radiation, σ_pe, σ_incoh and σ_coh represent the interactive cross sections (quantifying the probability of interaction) of the photoelectric, incoherent (Compton) and coherent (Rayleigh and Thomson) effects, ρ is the mass density, and N_g is the number of electrons per unit mass, defined as N_g = N_A ∑_i ω_i Z_i/A_i, where N_A is Avogadro's number, and Z_i, A_i, ω_i are the atomic number, atomic weight and relative fraction by mass of a material's constituent elements <cit.>. A convenient parameter to use is the relative electron density, which is ρ_e = ρ N_g / (ρ_water N_g,water), where N_g,water is the electron density per unit mass of water and ρ_water its mass density, which is almost exactly ρ_water = 1 g/cm^3 at room temperature. From (<ref>), (<ref>) and (<ref>), one may quantify the X-ray attenuation in terms of the energy independent mass density ρ or relative electron density ρ_e, given knowledge of the energy dependent interactive cross sections, which may be found from the existing parameterisations in Section <ref> or our proposed model in Section <ref>. A more common approach however, is simply to use calibration curves on the reconstructed images. Using the Hounsfield scale, defined as HU = 1000 (μ - μ_water)/(μ_water - μ_air), examples of calibration curves are shown in Figure <ref>. The calibration curves in Figure <ref> are generated from 52 human tissues defined in <cit.> and used in <cit.>, according to a monoenergetic X-ray source at 60 keV.
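As a short worked example of the two definitions above (our own sketch, with approximate mass fractions for water):

import numpy as np

N_A = 6.02214076e23  # Avogadro's number [1/mol]

def electrons_per_gram(composition):
    # N_g = N_A * sum_i(w_i * Z_i / A_i); composition maps an element
    # symbol to its (Z, A, mass fraction) triple.
    return N_A * sum(w * Z / A for (Z, A, w) in composition.values())

water = {'H': (1, 1.008, 0.1119), 'O': (8, 15.999, 0.8881)}
NG_WATER = electrons_per_gram(water)  # ~3.34e23 electrons per gram

def relative_electron_density(rho, composition):
    # rho_e relative to water, with rho_water = 1 g/cm^3
    return rho * electrons_per_gram(composition) / (1.0 * NG_WATER)

print(relative_electron_density(1.0, water))  # 1.0 by construction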
As in <cit.>, the calibration curves are piecewise linear, with four breakpoints at lung tissue, adipose tissue and around the soft tissues, although some of these are difficult to discern in the figure. It can be observed that most tissues have very similar relative electron density and mass density, which deviate more significantly for higher attenuating materials containing bone. In order to use the calibration curve in Figure <ref>, the attenuation should be converted to a monoenergetic equivalent, and this is often approximated prior to reconstruction as in <cit.>, which means performing non-linear calibration both before and after reconstruction to map into mass or electron density. The alternative that we consider here is to use explicit parameterisations of the attenuation. §.§ Existing Physical Parameterisations Due to each cross-section in (<ref>) being a non-linear function of energy and material, the total attenuation of a tissue is complicated and difficult to quantify exactly. One approach is to parameterise (<ref>) as a linear combination of basis functions. For example, in <cit.> a convenient choice is μ(ξ) = K_1 ρ_e Z_eff^n ξ^-3 (photoelectric) + K_2 ρ_e f_KN(ξ) (Compton scatter), where f_KN(·) is the Klein–Nishina function <cit.> describing the probability of Compton scatter, Z_eff is the effective atomic number of a composite material <cit.>, and K_1, K_2 and n are scalar parameters to fit the model to data. It should be noted that for unbound electrons, one would have σ_incoh = f_KN(·). Additionally, the coherent scattering events are not explicitly modelled, though their contribution is low at the energies of interest <cit.>. According to (<ref>), the energy dependent attenuation of any material may be parameterised by its effective atomic number and electron density. Several examples of these parameters for a range of differing material types are plotted in Figure <ref>. Although there is clearly a strong correlation between the relative electron density ρ_e and effective atomic number Z_eff, in order to unambiguously model all classes of materials, at least two spectral measurements must be taken to separate the contributions of the two terms in (<ref>), since there is no one-to-one mapping from ρ_e to Z_eff. This is the basis of the DECT technique, where from the model in (<ref>) and measurements from two distinct X-ray spectra, the attenuation can be decomposed into photoelectric and Compton components, from which ρ_e can be unambiguously calculated <cit.>. In <cit.>, the authors use a piecewise-linear fit from a monoenergetic equivalent attenuation to both of the energy independent factors in (<ref>): K_1 ρ_e Z_eff^n for photoelectric attenuation; and K_2 ρ_e for Compton scatter. This fitting allows polyenergetic reconstruction from a single source. Due to the degradation of the energy dependent modelling in (<ref>) at higher effective atomic number or higher energy however <cit.>, this will not be quantitatively consistent in ρ_e between hard and soft materials; the model is also shown not to be consistent between synthetic and biological materials <cit.>. Another idea is to use physical materials as basis functions. For example, for biological specimens, water and bone may be considered <cit.>.
The attenuation is then μ(ξ) = a_1 μ_water(ξ) + a_2 μ_bone(ξ) = ρ(a_1 m_water(ξ) + a_2 m_bone(ξ)), where ρ is the mass density, m_water(ξ) and m_bone(ξ) are the energy dependent mass attenuation coefficients, and a_1 and a_2 can be binary <cit.> or water–bone fractions <cit.>. One may calculate ρ and map into ρ_e through a non-linear calibration if desired. The relation between mass and electron density is shown in Figure <ref>. Although higher density materials deviate from the trend of water, as long as one can generate an estimate of the mass density, the trend shown in Figure <ref> can be approximated as piecewise linear and conversion to electron density is possible, similar to the HU against density plot shown in Figure <ref>. A possible weakness of this model is the inaccuracy that will occur when tissues have a mass-attenuation profile dissimilar to both water and hard bone, such as adipose tissue <cit.>. § METHODOLOGY §.§ Polyquant Attenuation Model We propose to take a data-centric rather than a physical approach to parameterising the X-ray attenuation. To motivate our choice, we have plotted the relative attenuation coefficient against electron and mass density for the 52 biological tissues in <cit.> at a number of energies in Figure <ref>. We have normalised the attenuation to the maximum for each energy (the tooth in each case) simply to allow visualisation on a single graph. It appears from Figure <ref> that the chosen biological tissues approximately follow an energy dependent two-part linear fit in both electron and mass density. We have superimposed such a fit onto each plot in Figure <ref>, where the transition point k is fixed to be the same for each energy and set to minimise the norm of the residual through all the data. Imposing a constant k throughout enables the computational efficiency outlined in the following section. One interpretation of this parameterisation is taking the inverse of the HU-to-density mapping shown in Figure <ref> and incorporating a dependency on energy. Generalising the piecewise-linear fit motivated by the data in Figure <ref> may be written as μ̂(x,ξ) = ∑_i=1^N_f f_i⊙(α_i(ξ)x+β_i(ξ)), where N_f is the number of linear fits, f_i is a class indicator function for materials belonging to the i^th class, and x∈ℝ^N_vox is the physical parameter one wishes to fit to, such as electron density or mass density. The class indicator functions are defined as f_i = 1 if k_i-1 ≤ x < k_i and 0 otherwise, for i=1,…,N_f, where k_i is the `knee' between the i^th and (i+1)^th linear fits, with k_0 = 0 and k_N_f = ∞. To ensure connected fits, we may also enforce the constraints α_i k_i + β_i = α_i+1 k_i + β_i+1. We also have β_1 = 0, which ensures that a vacuum has no attenuation. Following on from the connection to the decomposed inverse of the calibration in Figure <ref>, this general model would indeed allow several breakpoints around different tissue classes as in <cit.>, but here we focus on two or three for computational simplicity. §.§.§ Special Cases and Connections to Existing Models We now give several interesting special cases and connections to existing models: * An example of (<ref>) may be used for parameterising the attenuation of biological tissues as a function of relative electron density, which was the singular case studied in <cit.>, and allows the fitting in Figure <ref>.
For this we have N_f=2, x=ρ_e and k=1.0064, which may be written as μ̂(ρ_e,ξ) = f_1(ρ_e)⊙α_1(ξ)ρ_e + f_2(ρ_e)⊙[α_2(ξ)ρ_e+β_2(ξ)]. This is the instance we study, along with an equivalent fit for mass density. * The model above may also include highly dense materials, such as titanium metallic implants with a mass density of ρ=4.5 g/cm^3. To include this, a second knee point may be inserted between bone and metal, giving N_f=3. We also study this in our numerical experiment with hip implants. * Another interesting special case of (<ref>) is when all β_i are set to 0, x=ρ, N_f=2, α_1(ξ) and α_2(ξ) are the mass attenuation coefficients of water and bone, and the continuity constraints are ignored. This yields (<ref>) with a density dependent threshold, and is equivalent to the formulation in <cit.> without smoothing. The consequence of ignoring the continuity in (<ref>) is a step at the `knee' points. * In a similar manner to the water–bone model utilised in <cit.>, the IMPACT model in <cit.> may also be expressed as a special case of our general fitting model in (<ref>). Adopting the same notation, the model may be written as μ̂(x,ξ) = ∑_i=1^N_f f_i⊙([α_i/ξ^3+γ_i f_KN(ξ)]x+β_i/ξ^3+δ_i f_KN(ξ)), where α_i, β_i, γ_i and δ_i are the coefficients of the photoelectric and Compton linear fits, and x is a monoenergetic equivalent attenuation. A key difference between (<ref>) and (<ref>) is the energy dependence of the coefficients and the number of parameters. Whilst our model has 2N_f N_ξ-1 free parameters, (<ref>) has 4N_f, which is likely to be fewer, with ours thereby offering increased model flexibility but, as we will see in the next section, at no additional computational cost. * We could also use (<ref>) to predict the reconstruction of a monoenergetic equivalent image; in DECT this is known as `quasi-monoenergetic' reconstruction <cit.>. This is possible since the imposition of a constant `knee' position across energies allows any monoenergetic attenuation to be predicted using an equivalent piecewise-linear function. The relation to the IMPACT model in this case is that the energy dependence is fitted from the data instead of implied by the photoelectric–Compton model in (<ref>). * Although not evaluated in this article, since the mapping from HU to proton stopping power in <cit.> has the same shape as that to relative electron density, we suggest our model should also be well suited to proton interaction modelling. §.§ Direct Quantitative Density Reconstruction The Polyquant model introduced in Section <ref> describes the forward mapping from physical quantity to attenuation. We now show how this may be combined with the statistical CT measurement model in (<ref>), to allow direct statistical inference of mass or electron density. Combining (<ref>) with (<ref>) results in the relation ∑_j=1^N_ξ b_i(ξ_j)exp(-[Φμ(ξ_j)]_i) = ∑_j=1^N_ξ b_i(ξ_j)exp(-[Φμ̂(x,ξ_j)]_i), i=1,...,N_ray. If we introduce a function ψ(·,·) to simplify notation as ψ_i(x,ξ) ≡ b_i(ξ)exp(-[Φμ̂(x,ξ)]_i), i=1,...,N_ray, we can write the negative log-likelihood (NLL) for the Poisson model as NLL(x;y) = ∑_i=1^N_ray ∑_j=1^N_ξ ψ_i(x,ξ_j)+s_i - y_i log(∑_j=1^N_ξ ψ_i(x,ξ_j)+s_i), where we note that this function is non-convex, as with similar CT NLL functions in <cit.>. Reconstruction of the quantitative density map can be performed by finding an x that minimises (<ref>). We will look at gradient descent methods, for which we require an expression for the derivative of the NLL; a direct, unoptimised sketch of evaluating the likelihood itself is given first.
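This sketch (our own illustration) loops over energy bins explicitly; in practice the piecewise-linear structure is exploited so that far fewer projection operations are needed, as described next:

import numpy as np

def polyquant_nll(x, Phi, b, alpha, beta, knees, y, s):
    # Phi: (N_ray, N_vox) system matrix; b: (N_xi,) incident intensities;
    # alpha, beta: (N_f, N_xi) piecewise-linear coefficients;
    # knees: interior breakpoints [k_1, ..., k_{N_f-1}]; y, s: (N_ray,).
    edges = np.concatenate(([0.0], np.atleast_1d(knees), [np.inf]))
    ybar = s.astype(float).copy()           # expected counts, starting from scatter
    for j in range(b.size):                 # one projection per energy (unoptimised)
        mu = np.zeros_like(x)
        for i in range(alpha.shape[0]):     # piecewise-linear attenuation mu-hat(x, xi_j)
            sel = (x >= edges[i]) & (x < edges[i + 1])
            mu[sel] = alpha[i, j] * x[sel] + beta[i, j]
        ybar += b[j] * np.exp(-(Phi @ mu))  # psi_i(x, xi_j)
    return float(np.sum(ybar - y * np.log(ybar)))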
If we simplify notation with d(x) = y ⊘ (∑_j=1^N_ξ ψ(x,ξ_j)+s) - 1, where ⊘ represents component-wise division, an expression for the derivative is then ∂NLL(x;y)/∂x ≈ ∑_i=1^N_f f_i(x)⊙Φ^T[∑_j=1^N_ξ α_i(ξ_j)ψ(x,ξ_j)⊙d(x)], where Φ^T represents the transpose of the system matrix, or `backprojection', and ⊙ is component-wise multiplication. We have written this derivative as an approximation "≈" in recognition of the fact that there are discontinuities in the gradient at the `knee' positions, where the gradient is not defined. Although in <cit.> we invoked the logistic function to mitigate this effect, we have found better empirical performance by instead using (<ref>) directly. Before we proceed, we note that (<ref>) requires only a single backprojection operation Φ^T per linear fit, so the number of these operations is independent of the number of energies N_ξ. This is only possible due to the constant `knee' positions k across energies. Calculating Φμ̂(x,ξ) is also independent of N_ξ, and can be evaluated with 2N_f-1 applications of Φ, where the saving of one application is a consequence of the assumption that β_1=0. For example, applying the method for electron density reconstruction of tissues using (<ref>), one would expect a computational cost of three forward- and two backprojections. Although one may obtain a quantitative density reconstruction through maximum likelihood estimation, by iteratively minimising (<ref>) through gradient descent with (<ref>), incorporating prior regularisation will typically improve statistical performance, especially as the noise increases or when few measurements are taken. Incorporating regularisation gives the penalised log-likelihood or maximum a posteriori estimate as x̂ = argmin_x∈𝒞 NLL(x;y) + λ R(x), where R(·) is some regularisation function, and 𝒞 is a set of box constraints on x such that 0 ≤ x_i ≤ ζ for i=1,...,N_vox, where ζ is the maximum allowable density value; the constraint set thus also ensures non-negative density values. The choice of R(·) in (<ref>) will vary with the imaging application, but some possibilities are the generalised Gaussian Markov random field (GGMRF) <cit.>, total variation (TV) <cit.> or wavelet sparsity <cit.>. Without loss of generality, we opt in our experimental section for TV, since it promotes piecewise-flat images, which we expect from homogeneous slabs of tissue. §.§.§ Algorithm Design Although there are many approaches for evaluating (<ref>), we detail here the algorithm we have used in our experiments, which we give in Algorithm <ref>. We note that Algorithm <ref> is an instance of iPiano <cit.>, although we have made some slight changes; we chose this method due to its analysis with non-convex objective functions such as (<ref>). The parameter δ is the step size, which scales with a factor of L_0, which we define as L_0 = ‖Φ^T[(∑_j=1^N_ξ α_1(ξ_j)b(ξ_j))⊙Φ1]‖_∞, where ‖·‖_∞ is the maximum norm of the vector. L_0 represents the maximum of the diagonal of the Hessian of (<ref>) at the point 0, and may be considered a crude estimate of the global Lipschitz constant L. We note that this step size is likely to be very conservative in practice. The parameter γ sets the `inertia' of the method, where we used γ=0.8 as this resulted in the fastest performance in <cit.>. A minimal sketch of the resulting iteration is given below.
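In this sketch (our own simplification of Algorithm <ref>), grad_nll and prox_R are assumed to implement the gradient expression above and the proximity operator defined next, with the box constraints folded into prox_R:

import numpy as np

def ipiano(x0, grad_nll, prox_R, delta, gamma=0.8, iters=500):
    # Inertial proximal-gradient iteration: a gradient step on the NLL,
    # an inertial (heavy-ball) term, then the proximity operator of the
    # regulariser, which also projects onto the box constraints.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_new = prox_R(x - delta * grad_nll(x) + gamma * (x - x_prev), delta)
        x_prev, x = x, x_new
    return x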
Here, 𝐩𝐫𝐨𝐱_δ R is the proximity operator, defined as 𝐩𝐫𝐨𝐱_δ R(z) = argmin_ρ∈𝒞 1/2‖z-ρ‖_2^2 + δλ R(ρ). In our experimental section, we use the TV <cit.> as the regularisation function R, in which case (<ref>) may be evaluated as in <cit.>. To illustrate the convergence properties of our method, and to investigate its robustness to more aggressive step sizes, we applied Algorithm <ref> to the chest data detailed in Section <ref>, and plotted the evolution of the NLL through iterations in Figure <ref>, for different step-size multiplication factors. It follows that using δ is indeed very pessimistic, and one observes convergence in this case even with 128δ, where convergence is achieved in around 100 iterations. For factors larger than 128, we have found the method fails to converge, and it can be seen that the objective does not decrease monotonically for the more aggressive step sizes. This analysis suggests that the local Lipschitz constant of the gradient through the iterations is normally significantly less than the global maximum, or indeed the approximation with L_0. Adaptive step-size schemes could exploit this fact, such as the backtracking line-search in <cit.>, but one should be aware that evaluating the cost function in (<ref>) will cost more than half as much as calculating a new gradient, so each backtrack evaluation will increase the iteration cost significantly. § EXPERIMENTATION For our experiments, we investigate the model accuracy in Section <ref>, followed by a numerical fan-beam CT quantitative reconstruction test in Section <ref>, and finally validation with real data in Section <ref>. §.§ Polyquant Model Evaluation To investigate the accuracy of our proposed attenuation model, and to compare it to the other parameterisations given in Section <ref>, we calculated the predicted linear attenuation coefficients for a number of representative tissues from ICRP 89 <cit.>: adipose tissue, muscle tissue, spongy tissue (upper femoral spongiosa), and hard bone. For the DECT model in (<ref>), we optimised the three scalar parameters K_1, K_2 and n for the best fit to all materials in ICRP 89, shown in Figure <ref>. We also fitted our piecewise-linear Polyquant model from Section <ref> to the same data, for both relative electron density and monoenergetic attenuation at 60 keV. Similarly, we evaluated the accuracy of the IMPACT parameterisation in <cit.> from monoenergetic attenuation, again using the same fitting data from ICRP 89. These are plotted along with the water and bone attenuation models in Figure <ref>, and the residual norms are tabulated in Table <ref>. An observation that can be made from the plots in Figure <ref> and the results in Table <ref> is that both the IMPACT and our Polyquant models are very accurate over all materials. Fitting from monoenergetic attenuation as in IMPACT or `Polyquant-tissue-μ_mono' is understandably more accurate than fitting from ρ_e, due to its closer similarity to other monoenergetic attenuations, and one will inevitably lose this accuracy if one later calibrates to ρ_e or ρ using the trend in Figure <ref>. We note that using our energy dependent fitting strategy in the case of monoenergetic attenuation consistently outperforms IMPACT, and the difference is an order of magnitude in the case of hard bone. Another feature of the numerical results in Table <ref> is that explicitly using the photoelectric–Compton relation in (<ref>) as in DECT is less accurate than fitting it to biological materials as in IMPACT.
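For concreteness, fitting the three DECT parameters amounts to a small non-linear least-squares problem; the sketch below is our own, with E, rho_e, z_eff and mu_ref standing in for hypothetical tabulated energies, material parameters and reference attenuations, and uses the standard Klein–Nishina total cross-section shape:

import numpy as np
from scipy.optimize import least_squares

def f_kn(E_keV):
    # Klein-Nishina total cross-section shape (unnormalised)
    a = E_keV / 510.999  # photon energy over the electron rest energy
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a)**2)

def dect_residual(params, E, rho_e, z_eff, mu_ref):
    # mu(E) = K1 * rho_e * Z_eff^n / E^3 + K2 * rho_e * f_KN(E)
    K1, K2, n = params
    model = (K1 * rho_e[:, None] * z_eff[:, None]**n / E[None, :]**3
             + K2 * rho_e[:, None] * f_kn(E)[None, :])
    return (model - mu_ref).ravel()

# fit = least_squares(dect_residual, x0=[1.0, 1.0, 3.0],
#                     args=(E, rho_e, z_eff, mu_ref))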
The latter observation highlights that the physical photoelectric–Compton model is not very accurate over a wide range of material types, but also that one should not expect the fitted parameters from IMPACT to necessarily be physically meaningful in terms of ρ_e and Z_eff. Finally, the water model is reasonably accurate for the soft tissues and spongy bone but not hard bone, and the bone model conversely so, which suggests the component-wise model in (<ref>) is sensible. However, even if one selected the best case from bone or water, as is the essence of <cit.>, the error would still be considerably higher than with either IMPACT or our proposed approach. To demonstrate that these fitted parameters are not universal, we also ran the same models with the same fitted parameters on two synthetic plastic materials: Plexiglass®/acrylic (polymethyl methacrylate) and Teflon® (polytetrafluoroethylene). These are interesting materials since they have similar electron densities to muscle and bone respectively, but significantly different attenuation. In order to extend these models to include metallic implants, we also looked at how well they may model the attenuation of solid titanium, where both IMPACT and Polyquant included a second `knee' point to incorporate its attenuation. From the plots in Figure <ref> and the results in Table <ref>, it is apparent that our proposed model is not universal across material types. Indeed, a water model is the best performing in the specific plastic case, and DECT is the best universal model, though this is also to be expected given the data in Figure <ref>, since it uses a two-dimensional parameterisation. For the metallic implants, we note how the DECT, water and bone models are very inaccurate. IMPACT also shows significant errors, despite fitting the photoelectric–Compton model directly to titanium, which provides further evidence that its energy dependence model is not universal. On the other hand, by imposing no physical model, our Polyquant approaches are able to capture the attenuating profile of the implant material very closely. The implication from the model experiment is that although our model is able to fit very closely to both tissue materials and metallic implants, once these parameters are fitted, they are inaccurate for synthetic plastic materials. An explanation for this mismatch is that the correlation between electron density and attenuation coefficient is significantly different for tissues and synthetic materials <cit.>. With this, although parameterising the mass attenuation coefficient with water and bone gives more consistent predictions across material types, if one wishes to map into electron density for radiation therapy applications, then one must also take care to fit to the appropriate class of materials. We finally note that our model is still able to account for plastic materials, and the `Polyquant-plastic-ρ_e' entry in Table <ref> is the result of fitting to a family of polymers (we used Teflon®, Delrin®, Plexiglass®, polystyrene, LDPE (low-density polyethylene) and PMP (polymethylpentene)), and the resulting accuracy is notably very high. We also show the relationship to attenuation for these materials in the graphs in Figure <ref>, which confirms that plastics interestingly follow a similar piecewise trend of their own, though this clearly does not extend to titanium. If one wishes to quantify the attenuation of a mixture of both synthetic and biological tissues, then it seems that no single-energy parameterisation would be consistent.
One may opt for DECT measurements, but at the cost of requiring two diverse spectral sources. §.§ Low Dose Numerical Reconstruction Test The data we used for our numerical reconstruction test were derived from the Adult Reference Computational Phantom <cit.>, which is a segmented image of defined density and chemical composition to represent real tissues. To investigate the ability to image metal implants, we inserted a pair of prosthetic hip joints (marked in green) with a solid titanium pin and shell. The slices through the chest and pelvis, which are the images we selected for testing, are shown in Figure <ref>. The resolution is 299×137. The system geometry used for testing was a flat-detector fan-beam CT, which was modelled in the Monte-Carlo package Gate <cit.>. We included a focused lead strip collimator to mitigate scatter, a 0.6 mm thick CsI scintillator, a shaped source to put more photons into the centre of the specimen, and generated spectra using SpekCalc <cit.>: 6 mm aluminium filtration at 120 kVp. We simulated a total of 3×10^9 photons into 512 detection elements over 360 projections in 1^∘ increments, which represented a low dose acquisition. Although the spectrum used to generate the measurements was sampled at 1 keV increments, for reconstruction we used N_ξ=21, which was deemed a sufficient number in <cit.>, and corresponded to our sampling of the mass attenuation coefficients. §.§.§ Reconstruction Methods Under Test Most of the methods we tested are iterative reconstructions, with some degree of polyenergetic modelling. For fairness, we used TV regularisation for each <cit.>, and selected the regularisation parameter giving the highest performance in each case; curves showing the influence of these parameters are shown in Figure <ref>. All iterative methods were run for 500 iterations, which resulted in empirical convergence in each case, after which the figure of merit did not change by more than the reported precision. Details of the methods under test are: * FBP: Filtered backprojection (FBP) is a popular reconstruction algorithm that `analytically' approximates the inverse of linearised CT projections <cit.>. In our implementation we: take the logarithm of the raw measurements; correct for beam hardening with a water-only polynomial fitting <cit.>; apply a ramp filter with Hann windowing, with the cut-off frequency optimised to give maximum performance; and finally apply a single backprojection operation Φ^T. To map from attenuation coefficient to density, we use the calibration curve shown in Figure <ref> and detailed in <cit.>. We included FBP as a crude baseline and to indicate the level of noise in our system, but we expect it to perform significantly worse than the other competitive approaches under test due to the low dose. * PWLS: Penalised weighted-least-squares (PWLS) approximates the CT model as linear by taking the logarithm of the raw measurements <cit.>, calibrating them to correct for beam-hardening artefacts from the polyenergetic source <cit.>, and includes a statistical weighting to approximate the Poisson noise in (<ref>). The objective function is then μ_mono = argmin_μ (Φμ-l)^T W (Φμ-l) + λ R(μ), where W is a diagonal statistical weighting matrix with entries w_ii = (y_i-s_i)^2/y_i, and l is the collection of linearised monoenergetic projections <cit.>. Converting μ_mono to electron density is then done through a non-linear calibration according to <cit.>, as with the FBP. It should be noted that we are not actively modelling the metal implant in this case.
* Poly-SIR: Polyenergetic statistical iterative reconstruction (Poly-SIR) is the segmented water–bone model from (<ref>) <cit.>. As this requires prior knowledge of the material classes, we give it the oracle segmentation of the hard bone structures, and it treats everything else as water. In the pelvis case, we also pass it the oracle segmentation of the metal implants, with corresponding mass attenuation coefficients. Since the physical model gives the mass density, we convert to relative electron density where appropriate using the curve shown in Figure <ref>. * IMPACT: We implement the IMPACT model of <cit.> as a special case of our generalised fitting in (<ref>). This allows us to use the same algorithm as presented in Section <ref> for its minimisation. We use a three-component piecewise-linear fitting from monoenergetic attenuation at 60 keV to energy dependent attenuation, which accounts for metal implants as well as biological tissues. We then use the same post-processing calibration technique as with FBP and PWLS to convert to either mass or relative electron density. * Polyquant: In our proposed model, we use the piecewise-linear fitting in Section <ref> and the reconstruction strategy in Section <ref>. We use Algorithm <ref>, but with a step size of 10δ, which from Figure <ref> is still rather conservative and exhibited a monotonic decrease in the objective function. For the pelvis case, we extend the model with a second `knee' point and linear section to include the attenuation of solid titanium, according to the generalisation in (<ref>). We used separate fittings to electron and mass density, shown in the top and middle graphs in Figure <ref>, to demonstrate the ability to reconstruct directly into either quantity, where each mapping was a least-squares fitting to the materials in ICRP 89 <cit.>. §.§.§ Quality Assessment Metric The metric for quantifying the accuracy of the various methods under test is the root-mean-squared error (RMSE) in relative electron and mass density, calculated as √((1/N_vox)∑_i=1^N_vox (ρ̂_i-ρ_i)^2), where ρ̂∈ℝ^N_vox is the estimated mass or electron density from a tested method, and ρ is the ground truth. As an error metric, a lower score represents higher quantitative accuracy. §.§.§ Reconstruction Results The results from our reconstruction test are illustrated in Figure <ref> and Table <ref>. In terms of both electron density and mass density accuracy, our proposed method is the best performing method under test. The lower estimation accuracy when using the water–bone model with Poly-SIR is likely to be due to the deviation of adipose and spongy bone tissues from water, which was illustrated in Figure <ref>. We note that Poly-SIR is the second best performing in the pelvis case, but it was provided with oracle information on the implant and hard bone, which would be difficult to segment in practice from a preliminary reconstruction such as FBP due to its high noise. IMPACT understandably performs very similarly to our method in the chest case, as these performed similarly in the model test, but the advantage of fitting at each energy is clear in the case of the metal implant, where the IMPACT performance is similar to that of PWLS. The relationship between regularisation strength and quantitative performance of the iterative methods is shown in Figure <ref>. In both cases, this demonstrates that Polyquant not only reaches the best performance, especially in the pelvis case, but its numerical accuracy is also reasonably robust to the setting of this parameter.
The difference in the location of the minima between methods may be accounted for by the different scales of the image parameter: for example, IMPACT uses the monoenergetic attenuation at 60 keV, whereas Polyquant uses the physical density, and these have a relative difference in intensities. Another implication from these curves is that although the Polyquant chest image in Figure <ref> exhibits a higher level of intra-tissue variation than IMPACT, this can be mitigated by increasing the regularisation parameter, which can be increased by over 40% whilst maintaining the best numerical performance. Another interesting feature of the results is the difference between the electron and mass density scores. Although it may seem counter-intuitive that Poly-SIR, for example, would have a higher error before calibration than after, this is due to the RMSE being absolute and not relative. Since, from Figures <ref> and <ref>, the mass densities of materials are on average higher than their relative electron densities, this accounts for the difference. §.§ Cone-Beam CT Validation To validate our method, we also tested its ability to perform quantitative reconstruction from real X-ray cone-beam CT (CBCT) measurements. For this, we acquired a scan of a CIRS STEEV head phantom with a Varian® TrueBeam™ On-Board Imager®. The phantom consists of synthetic resins that mimic the attenuating properties of human tissues, allowing quantitative assessment of relative electron density accuracy. There was also a metal structure in the centre of the phantom, consisting of the plug section from a PTW PinPoint® ionisation chamber, allowing us to investigate the mitigation of metal-induced artefacts. §.§.§ CBCT Data Processing Our CBCT acquisition consists of 499 projections at a 100 kVp tube potential and 20 mA current for 15 ms each, which were the default settings for a head acquisition. Compared to the numerical test, the relative X-ray flux was roughly 2.5× higher, which, coupled with the smaller specimen volume and larger number of projections, implies that this test was at a significantly higher dose. The raw measurements were all pre-corrected for detector responses, and for the effect of bow-tie shifting with gantry rotation, with the default TrueBeam™ corrections to give y in (<ref>). The scatter estimate s in (<ref>) was also taken from the Varian system's default scatter correction. For testing the FBP (realised with the Feldkamp–Davis–Kress method <cit.>) and PWLS, we calculated the linearised projection vector l in (<ref>). In the case of the fully polyenergetic reconstruction methods Poly-SIR, IMPACT and Polyquant, we require explicit knowledge of the X-ray spectrum b_i(ξ) in (<ref>). Due to the variable thickness of aluminium in the bow-tie filter, this will be spatially varying, and we calculated it analytically from the appropriate spectrum in the system's calibration parameters, and from the spectral response of the various metal filters and scintillator in the beam path. As in our other experimental sections, we discretised the spectrum into 21 energies, and used the same parameters as were fitted in Section <ref> for the ICRP biological tissues, supplemented by the mass attenuation of the metal implant according to information provided by the manufacturer. §.§.§ CBCT Reconstruction For reconstruction, we mapped into a resolution of 512×512×144, and used each method as detailed in Section <ref>. We ran each iterative method for 500 iterations.
§.§.§ CBCT Reconstruction
For reconstruction, we used a resolution of 512 × 512 × 144 voxels, and applied each method as detailed in Section <ref>. We ran each iterative method for 500 iterations. For the regularisation parameter, we heuristically used 0.5λ_pelvis, where λ_pelvis denotes the TV regularisation parameters from the digital pelvis experiment; this gave good empirical performance on the CBCT data. Finally, for the bone and metal segmentations required by Poly-SIR, we obtained these by applying thresholds to the FBP and PWLS reconstructions separately. To illustrate the critical role of this step, we show both images in Figures <ref> and <ref>.

Reconstructions of the 83rd slice from the CBCT data are shown in Figure <ref>, along with a region from the 91st slice containing a higher mass of metal in Figure <ref>. Although the FBP appears to suffer strongly from the presence of the metal structure, most of the iterative methods mitigate its effect considerably, with PWLS showing a more pronounced dark region in the soft tissue. The Poly-SIR based on a segmentation from the FBP does suffer from its streaking, as shown in Figure <ref>. Although better performance is achieved by segmenting from the PWLS, as in Figure <ref>, this has a considerably higher total computational cost; the Poly-SIR region in Figure <ref> is based on this PWLS initialisation. The preservation of bone structure appears to be similar between all the iterative methods. The Polyquant method appears to have the smoothest regions around the metal implant, whilst maintaining the same level of structure in the bone and in the objects surrounding the phantom.

To evaluate the quantitative accuracy of each reconstruction, we isolated regions of interest (ROIs) in the 83rd slice located in soft tissue and spongy bone; these are shown in Figure <ref>. The RMSEs of these regions are calculated relative to the electron density of the soft tissue and bone equivalent resins, and are shown in Table <ref>. From Table <ref>, we note that our proposed model is the most accurate method under test, by at least 16% over the other approaches in all regions.
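For illustration, the thresholding used above to obtain the Poly-SIR segmentations could be sketched as follows (a minimal Python example under our own naming; the threshold values would be chosen empirically from the preliminary reconstruction):

import numpy as np

def segment_bone_metal(vol, bone_thr, metal_thr):
    # Label voxels above metal_thr as metal, those between bone_thr and
    # metal_thr as hard bone; everything else is treated as water.
    metal = vol >= metal_thr
    bone = (vol >= bone_thr) & ~metal
    return bone, metal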
§ DISCUSSION
Aspects of our method that we have not evaluated in this study are its robustness and the practicality of its computational implementation, though both are worth discussing. In the first case, we have noted that, due to the discontinuous gradient arising from the piecewise-linear fitting, there are no theoretical guarantees of convergence. Two approaches we have adopted in implementations are using a smooth function for f(x) in (<ref>), such as a generalised logistic, and connecting the two linear fits with a quadratic function over some interval around the `knee'. However, both options increase the computational cost of the gradient term in (<ref>) considerably, and we have found that neither gives any empirical advantage in convergence or accuracy over just using the non-smooth version. Since it is common in CT reconstruction to use empirically well-performing methods that have no convergence guarantees, such as pre-computed curvatures in separable paraboloid surrogates <cit.> or ordered subsets <cit.>, we believe this is reasonable. We also highlight that, from Figure <ref>, our method does empirically appear to converge even for very aggressive step-size multiplication factors.

Another potential robustness issue with any polyenergetic model is partial volume effects, where a discretised voxel contains different classes of material. Due to its linearity, if the two materials belong to the same fit interval, such as lung and fat or muscle and bone as in Figure <ref>, then our model correctly estimates the attenuation from these materials. If a voxel contains materials from different intervals, however, such as fat and muscle or metal and bone, then our model will overestimate the attenuation, according to the trends in Figure <ref>. Although we have not evaluated the degree of this effect, we note that it is common to other approaches <cit.>, and may be mitigated by increasing the resolution of the reconstruction.

The performance of iterative methods under a low-dose acquisition is an important consideration for reducing the amount of ionising radiation delivered to the patient. By performing our numerical test at a low dose, we demonstrated that the method is relatively robust in this setting. Among the iterative methods tested, PWLS is expected to suffer the most from a low dose, due to its approximation of the noise model and its linearisation of the projections, which becomes unstable at very low photon fluxes <cit.>; it has been shown to perform worse at low doses than `pre-log' methods such as Polyquant, Poly-SIR and IMPACT <cit.>. Between these three methods, whether there exists a relative change in the interplay between partial volume effects, photon flux and accuracy is yet to be determined, although Polyquant is likely to benefit from its superior attenuation modelling.

In terms of computational cost, our method is comparable to other full-gradient iterative methods. Given that the bottleneck is in calculating the forward- and back-projection operators Φ and Φ^T, we note that each gradient step in (<ref>) may be implemented with 3 forward and 2 backward evaluations, which represents a 2.5× larger cost than PWLS, and is the same as our implementation of IMPACT. Comparatively, Poly-SIR has a 2× larger cost than PWLS, which, given the consistent accuracy advantage of our model, is unlikely to be worth the slight speed advantage. For faster implementation, we suggest that algorithmic accelerations such as ordered subsets <cit.> and the use of parallel hardware <cit.> are likely suitable, and these will be investigated in future work.

§ CONCLUSIONS
We have introduced a general quantitative attenuation model, which allows direct inference of mass or electron density from raw CT measurements with a single polyenergetic source. Not only have we demonstrated that this allows more accurate modelling than explicit physical models such as water–bone or photoelectric–Compton, but we have also shown how it may be exploited in a flexible reconstruction algorithm that allows accurate quantitative medical imaging, even with metal implants and real CBCT data. As with other single-source methods, we have highlighted its inconsistency between synthetic and biological tissues, though this may not be of relevance for medical imaging, for which opting for the more general DECT model is significantly less accurate over the materials of interest. Since our method has a similar computational cost to other iterative approaches, but offers markedly higher accuracy, it is both a practical and a beneficial approach to CT imaging.

§ ACKNOWLEDGEMENTS
The authors would like to sincerely thank Dr Adam Wang from Varian Medical Systems, for preprocessing our cone-beam CT data and providing invaluable information and advice about the imaging system. This work was supported by the Maxwell Advanced Technology Fund, EPSRC DTP studentship funds and ERC project: C-SENSE (ERC-ADG-2015-694888). MD is also supported by a Royal Society Wolfson Research Merit Award.
http://arxiv.org/abs/1706.08743v2
{ "authors": [ "Jonathan H. Mason", "Alessandro Perelli", "William H. Nailon", "Mike E. Davies" ], "categories": [ "physics.med-ph" ], "primary_category": "physics.med-ph", "published": "20170627091854", "title": "Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source" }
Department of Electronic and Communication Engineering, Changsha University, Changsha, Hunan 410022, People's Republic of China
Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, and Department of Physics, Hunan Normal University, Changsha 410081, People's Republic of China
[email protected]
Department of Mathematics and Computing Science, Changsha University, Changsha, Hunan 410022, People's Republic of China
Department of Electronic and Communication Engineering, Changsha University, Changsha, Hunan 410022, People's Republic of China
[email protected]
Department of Electronic and Communication Engineering, Changsha University, Changsha, Hunan 410022, People's Republic of China

In this paper, we have investigated the preservation of quantum Fisher information (QFI) of a single-qubit system coupled to a common zero-temperature reservoir through the addition of noninteracting qubits. The results show that the QFI is completely protected in both the Markovian and non-Markovian regimes by increasing the number of additional qubits. Besides, the QFI displays either monotonic decay or non-monotonic behavior with revival oscillations, depending on the number N−1 of additional qubits in the common dissipative reservoir. If N < N_c (a critical number depending on the reservoir's parameters), monotonic decay of the QFI occurs. However, if N ≥ N_c, the QFI exhibits non-monotonic behavior with revival oscillations. Moreover, we extend this model to investigate the effect of the N−1 additional qubits and of the initial conditions of the system on the geometric phase (GP). It is found that the GP becomes robust against the dissipative reservoir as the number of additional qubits is gradually increased. Besides, the GP is sensitive to the initial parameter θ, and is symmetric over the range [0, 2π].

73.63.Nm, 03.67.Hx, 03.65.Ud, 85.35.Be

Preservation of quantum Fisher information and geometric phase of a single qubit system in a dissipative reservoir through the addition of qubits
Ke Zeng
=================================================================================================================================================

§ INTRODUCTION
Quantum Fisher information (QFI), which extends the classical Fisher information, plays a significant role in the field of quantum metrology <cit.>. The QFI characterizes the sensitivity of a state with respect to changes in a parameter <cit.>. According to quantum estimation theory, the ultimate achievable parameter estimation precision is characterized by the QFI through the Cramér-Rao inequality <cit.>. For an estimated parameter with a larger QFI value, higher accuracy can be achieved. Moreover, the QFI may serve as a new resource in quantum information tasks, witnessing entanglement <cit.>, characterizing non-Markovianity <cit.>, formulating uncertainty relations <cit.>, and so on. However, any realistic quantum system of interest is unavoidably disturbed by surrounding environments, which not only results in the loss of quantum coherence of the system of interest but also degrades the precision of parameter estimation. Hence, how to preserve and enhance the QFI becomes a key problem to be solved. In recent years, great attention has been paid to protecting and improving the QFI. Different protocols and strategies have been proposed and realized in quantum metrology. For instance, Tan et al.
investigated the improvement of the precision of parameter estimation in an atom interferometer suffering from dephasing noise by using dynamical decoupling pulses <cit.>. Zhang et al. proposed an alternative scheme to enhance the precision of quantum estimation by introducing quantum correlations in the initial states <cit.>. By employing external field driving, Abdel-Khalek studied the QFI for a two-level atom system in a dissipative environment <cit.>. Inspired by classical driving, Li et al. investigated the parameter estimation precision of a non-Markovian dissipative two-state system <cit.>. Taking non-Markovian effects into consideration, a qubit system in a non-Markovian environment becomes a good candidate for the implementation of quantum optics schemes and quantum information with high precision <cit.>. Zheng et al. analyzed the parameter precision of optimal quantum estimation of a dissipative qubit by applying quantum feedback technology <cit.>. Other schemes for protecting the QFI to enhance the parameter estimation precision make use of weak measurement and quantum measurement reversal <cit.>. In this paper, we proceed to preserve and improve the QFI in a dissipative environment using additional noninteracting qubits. The use of additional qubits to suppress decoherence was first considered in <cit.>, where bipartite entanglement is protected by the addition of a third qubit. Later, many more additional qubits were introduced to fight against decoherence <cit.>. For example, Behzadi et al. investigated quantum coherence preservation of a two-level system in a dissipative reservoir via the addition of qubits into the reservoir <cit.>.

On the other hand, the phase of a quantum state, which is a fundamental property of quantum coherence, can be measured by interfering the wave function with another coherent wave function, so that one can discern whether or not the system has undergone an evolution. The phase that depends only on the geometry of the path traversed by the system during its adiabatic evolution was first named the geometric phase (GP) by Berry <cit.>. The GP has observable consequences in a wide range of systems <cit.>. It can not only be used to encode information on systems <cit.>, but can also be treated as a quantifier to detect quantum criticality in spin-chain systems <cit.>. In recent years, the GP has been studied intensively and is drawing increasing attention. On one hand, it has been generalized from the cases of adiabatic, cyclic and unitary evolutions to the cases of non-adiabatic, non-cyclic and nonunitary evolutions <cit.>. On the other hand, it has also been generalized from the cases of pure-state or nondegenerate density operators undergoing unitary evolutions <cit.> to the cases of mixed-state or degenerate density operators in nonunitary evolutions <cit.>. Besides, another direction of generalization is to study the GP of composite systems, especially the relation between the GP of a composite system and those of its subsystems. In general, the GP of a composite system in nonlocal unitary evolution is not equal to the sum of the GPs of its subsystems <cit.>, except for the case where a composite system with an initially separable state undergoes local unitary evolution <cit.>. Among these extensions of the GP, the one based on mixed-state density operators undergoing nonunitary evolutions has been extensively studied in various contexts.
The purpose of this paper is to examine the protection of the QFI and the GP from a decohering environment. To do so, we consider an N-qubit system embedded in a common reservoir which consists of harmonic oscillators. The model under investigation is simple and has been drawing increasing attention in various contexts due to its exact analytical solution under certain assumptions <cit.>. In this paper, our attention is mainly concentrated on how the additional qubits affect the QFI of phase estimation and the GP. Specifically, we compare the behaviors of the QFI in both the weak and strong coupling regimes as noninteracting qubits are added. The results show that the dynamics of the QFI displays either monotonic decay or non-monotonic behavior with revival oscillations, depending on the number N−1 of additional qubits in the common dissipative reservoir. If N < N_c, the QFI decays monotonically: the larger the number of additional qubits, the more slowly the QFI decays. Otherwise, the QFI exhibits non-monotonic behavior with revival oscillations, and with an increasing number of additional qubits the QFI is well protected. Furthermore, we calculate the GP for a qubit undergoing nonunitary evolution according to the formula of Ref. <cit.>, and investigate the effect of the additional qubits and of the initial conditions of the system on the GP. It is found that the GP is immune to the decoherence of the environment in both the Markovian and non-Markovian regimes when the number of additional qubits is large enough, but it is sensitive to the initial parameter θ, and is symmetric over the range [0, 2π].

This paper is organized as follows. In Sec. 2, we introduce the model, based on an N-qubit system immersed in a common zero-temperature thermal reservoir, and its analytical solution. In Sec. 3, the protection of the QFI for a single-qubit system coupled to a common reservoir through the addition of qubits is investigated. In Sec. 4, the influence of the additional qubits as well as of the initial conditions on the GP is studied. Finally, we give the conclusion in Sec. 5.

§ THE MODEL AND ITS SOLUTION
We consider a single-qubit system along with N−1 similar non-interacting qubits located in a common dissipative reservoir. For simplicity, we suppose that the N non-interacting qubits are independently coupled to a common zero-temperature thermal reservoir which consists of harmonic oscillators. The Hamiltonian of the whole system is given by <cit.>

H = ω_0 ∑_i=1^N σ_i^+ σ_i^- + ∑_k ω_k b_k^† b_k + ∑_i=1^N (B^† σ_i^- + B σ_i^+),

where B = ∑_k g_k b_k, with g_k the coupling strength between the qubits and the field mode k in the reservoir, b_k^† and b_k are the creation and annihilation operators of mode k, respectively, ω_k denotes the frequency of the k-th mode of the reservoir, and σ_i^+ = |1⟩_i⟨0| and σ_i^- = |0⟩_i⟨1| are the raising and lowering operators of the i-th qubit, all qubits having the same transition frequency ω_0. For simplicity, we consider the situation of no more than one excitation in the whole system, and the reservoir is initially in the vacuum state |0⟩_E. Suppose the initial state of the N-qubit system has the form |ψ(0)⟩ = C_0(0)|0⟩_s + ∑_i=1^N C_i(0)|i⟩_s, where |0⟩_s denotes the state with all of the qubits in the ground state |0⟩, and |i⟩_s denotes the state with the i-th qubit in the excited state |1⟩ and the rest in the ground state |0⟩. Then the total state |Φ(0)⟩ = |ψ(0)⟩⊗|0⟩_E evolves after time t > 0 into the state |Φ(t)⟩ = [C_0(t)|0⟩_s + ∑_i=1^N C_i(t)|i⟩_s]⊗|0⟩_E + ∑_j=1^N C̃_j(t)|0⟩_s⊗|1_j⟩_E, where we write C̃_j(t) for the reservoir-excitation amplitudes to distinguish them from the qubit amplitudes C_i(t).
The equality |C_0(t)|^2 + ∑_i=1^N |C_i(t)|^2 + ∑_j=1^N |C̃_j(t)|^2 = 1 holds due to the normalization condition. Here, |0⟩_E denotes the vacuum state of the reservoir and |1_j⟩_E the state with only one excitation, in the j-th field mode. By solving the Schrödinger equation in the interaction picture, the reduced density matrix of the i-th qubit is obtained by tracing out both the reservoir's degrees of freedom and the other qubits,

ρ_i(t) = ( [ ρ_11^i(0)|C_i(t)|^2    ρ_10^i(0)C_i(t); ρ_01^i(0)C_i^∗(t)    ρ_00^i(0) + ρ_11^i(0)(1 − |C_i(t)|^2) ]),

where ρ_11^i = |C_i(0)|^2, ρ_00^i = |C_0(0)|^2, and ρ_10^i = ρ_01^i∗ = C_i(0)C_0^∗(0). The function C_i(t) satisfies

d/dt C_i(t) = −∫_0^t ∫_−∞^+∞ J(ω) e^i(ω_0−ω)(t−τ) ∑_j=1^N C_j(τ) dω dτ,

where J(ω) is the spectral density function, characterizing the reservoir spectrum. To solve for C_i(t), we take the reservoir to have a Lorentzian spectral density

J(ω) = (1/2π) γ_0λ^2 / [(ω_0−ω)^2 + λ^2],

where the parameter λ defines the spectral width of the coupling and is associated with the reservoir correlation time by the relation τ_E = λ^−1, and the parameter γ_0 is related to the relaxation time scale by the relation τ_R = γ_0^−1. In this sense, the boundary between the Markovian and non-Markovian regimes can be approximately specified by the reservoir parameters γ_0 and λ <cit.>. Usually, γ_0 < λ/2 (γ_0/λ < 1/2) represents the weak system-reservoir coupling regime, where the dynamics of the system is Markovian, and γ_0 > λ/2 (γ_0/λ > 1/2) corresponds to the strong system-reservoir coupling regime, where non-Markovian dynamics occurs. With the above equations in mind, using the Laplace transform and its inverse, the probability amplitude C_i(t) is easily obtained:

C_i(t) = (N−1)/N + (e^−λt/2/N)[cosh(Dt/2) + (λ/D)sinh(Dt/2)],

with D = √(λ^2 − 2Nγ_0λ). Interestingly, in the asymptotic limit t→∞ the above expression reduces to (N−1)/N. On the other hand, as N→∞, C_i(t→∞, N→∞) tends to 1. In the following, we investigate how the addition of qubits influences the QFI and the GP of a single-qubit system in a common dissipative reservoir.
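Since everything below depends on the amplitude C_i(t) obtained above, we note that it is straightforward to evaluate numerically. A minimal Python sketch (ours; complex arithmetic automatically covers the regime λ^2 < 2Nγ_0λ, where D is imaginary and cosh/sinh become cos/sin; D = 0 is assumed not to occur):

import numpy as np

def amplitude(t, N, gamma0, lam):
    # C_i(t) = (N-1)/N + e^{-lam*t/2}/N [cosh(D t/2) + (lam/D) sinh(D t/2)],
    # with D = sqrt(lam^2 - 2*N*gamma0*lam).
    D = np.sqrt(complex(lam**2 - 2 * N * gamma0 * lam))
    c = (N - 1) / N + np.exp(-lam * t / 2) / N * (
        np.cosh(D * t / 2) + (lam / D) * np.sinh(D * t / 2))
    return c.real  # the imaginary part vanishes up to rounding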
§ QFI PRESERVATION IN A COMMON DISSIPATIVE RESERVOIR THROUGH THE ADDITIONAL QUBITS
Before investigating the QFI dynamics, we first briefly review the main aspects of the QFI, which is governed by the well-known Cramér-Rao inequality <cit.>

(Δθ)^2 ≥ 1/(n QFI_θ),

where (Δθ)^2 represents the mean square error of the parameter θ, and n denotes the number of repeated independent experiments. QFI_θ characterizes the sensitivity of the state with respect to changes in the parameter, and is defined as <cit.>

QFI_θ = Tr[ρ(θ)L^2],

where ρ(θ) is the density matrix of the system and L is the symmetric logarithmic derivative, given by

∂ρ(θ)/∂θ = [Lρ(θ) + ρ(θ)L]/2.

Making use of the spectral decomposition ρ(θ) = ∑_i λ_i|ω_i⟩⟨ω_i|, the analytical expression of the QFI can be divided into two parts,

QFI_θ = ∑_i (∂_θλ_i)^2/λ_i + 2∑_i≠i' [(λ_i−λ_i')^2/(λ_i+λ_i')] |⟨ω_i|∂_θω_i'⟩|^2.

The purpose of parameter estimation for a quantum system is to determine the value of the unknown parameter as precisely as possible; to this end, one should optimize the input state to maximize the QFI. In the present model, the parameter is assumed to be imprinted by a phase gate U(θ) = |0⟩⟨0| + exp(iθ)|1⟩⟨1| acting on the system of interest. The system of interest, together with the additional qubits in the common reservoir, then evolves with time before the measurement is performed. In order to estimate the unknown parameter θ as precisely as possible, the optimal input state of the system of interest may be chosen as |Ψ_opt⟩ = (1/√2)(|0⟩+|1⟩), which maximizes the QFI_θ of the output state U(θ)|Ψ_opt⟩ <cit.>. Taking the output state U(θ)|Ψ_opt⟩ and Eq. (2), the reduced density matrix ρ(θ) of the system of interest is given by

ρ(θ) = ( [ (1/2)|C_i(t)|^2    (1/2)exp(iθ)C_i(t); (1/2)exp(−iθ)C_i^∗(t)    1 − (1/2)|C_i(t)|^2 ]).

According to Eq. (<ref>), the analytic expression of the QFI_θ of phase estimation for a single-qubit system in a reservoir in the presence of additional qubits is obtained,

QFI_θ = {(N−1)/N + (e^−λt/2/N)[cosh(Dt/2) + (λ/D)sinh(Dt/2)]}^2.

From the above equation, an interesting result for QFI_θ occurs in the asymptotic limit t→∞, namely

QFI_θ = {(N−1)/N}^2,

which depends only on the total number of qubits N in the common reservoir. In particular, as N→∞, Eq. (<ref>) reduces to the maximal initial value QFI_θ = 1. In the following, we explore the influence of the number of additional qubits on the QFI, and compare the precision of parameter estimation in both the weak and strong coupling regimes. Fig. 1 shows the QFI dynamics of a single-qubit system as a function of λt for different numbers of additional qubits in the weak coupling regime γ_0 = 0.05λ. For N = 1 (the case without additional qubits), namely a two-level (qubit) system coupled to a zero-temperature bosonic reservoir, the QFI dynamics has been studied in Refs. <cit.>, where it was found that the QFI monotonically decays to zero without revivals. However, with an increasing number of additional qubits (e.g. N = 2, N = 4, and N = 8), the QFI decays more slowly as the system evolves. Notice that the QFI does not decay to zero during the time evolution in the presence of additional qubits, but rather goes down to a steady value which depends on the number N−1 of additional qubits (or the total number of qubits N). One can easily determine from Eq. (<ref>) that in the limit t→∞ it reduces to (N−1)^2/N^2. In particular, as N→∞, the QFI approaches its maximal value of 1. In contrast to the QFI in the weak coupling regime, Fig. 2 shows the QFI dynamics in the strong coupling regime γ_0 = 10λ. It is observed that, for N = 1 (without additional qubits), the QFI decays non-monotonically, experiencing damped revival oscillations. This result is consistent with the previous analysis in Refs. <cit.>. As the number of additional qubits goes up, the QFI dynamics displays more quickly damped revival oscillations and settles at a steady value. The larger the number of additional qubits, the smaller the amplitude of the QFI oscillations. In particular, as N→∞, the QFI approaches its maximal value of 1. By comparing the QFI in the weak coupling regime (Fig. 1) with that in the strong coupling regime (Fig. 2), we find that adding qubits to the environmental reservoir can dramatically protect the QFI and thus enhance the precision of parameter estimation. This result indicates that the dissipation of the QFI can be suppressed by increasing the number of additional qubits: the more additional qubits there are, the more slowly the QFI decays, and the higher the estimation precision. It is worth pointing out that the dynamics of a two-level system coupled to N bosonic reservoirs has been reported by Man et al. <cit.>, who found that the system's dynamics strongly depends on the number of reservoirs N: the dynamics remains Markovian when 1 ≤ N < N_c (N_c = [λ_n/2γ_n + 1] ∈ Z), while N ≥ N_c corresponds to non-Markovian dynamics. However, differently from the model proposed by Man et al., here we consider N qubits immersed in a common reservoir, each with γ_n < λ_n/2, and the system's dynamics varies with N (the total number of qubits).
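Before examining this N-dependence in more detail, we note that the QFI of Eq. (<ref>) is simply the square of the amplitude sketched earlier, so the steady value (N−1)^2/N^2 can be checked numerically (illustrative parameters, ours):

for N in (1, 2, 4, 8):
    qfi_late = amplitude(200.0, N, gamma0=0.05, lam=1.0) ** 2
    print(N, qfi_late, ((N - 1) / N) ** 2)  # late-time QFI approaches (N-1)^2/N^2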
To get a better understanding of the effect of the number of additional qubits on the QFI, Fig. 3 shows the QFI dynamics in the weak coupling regime γ_0 = 0.05λ. One can find that, on the one hand, the QFI displays monotonic decay without revivals for 1 ≤ N < N_c (N_c = 11). On the other hand, the QFI dynamics displays damped revival oscillations for N ≥ 11, even in this weak coupling regime. The physical origin can be understood as follows. When N identical qubits are all independently coupled to a common dissipative reservoir in the weak coupling regime γ_n < λ_n/2 (γ_n/λ_n < 1/2), the cumulative coupling ∑_n=1^N γ_n/λ_n can exceed 1/2, which is equivalent to the strong system-reservoir coupling regime; therefore, the QFI dynamics becomes non-Markovian. This result indicates that N_c can be regarded as a critical number of qubits, regardless of the strength of the coupling between each individual qubit and the reservoir.

§ THE GP OF A SINGLE-QUBIT SYSTEM IN THE PRESENCE OF ADDITIONAL QUBITS
In this section, we study quantitatively the GP of a single-qubit system, together with N−1 qubits coupled to a common dissipative reservoir, in terms of the additional qubits. We apply the approach proposed in Ref. <cit.>, where the authors developed a kinematic approach to the GP of open quantum systems in nonunitary evolution induced by dissipative environments. The GP is defined as

Φ_GP = Arg{∑_i √(λ_i(0)λ_i(t)) ⟨ω_i(0)|ω_i(t)⟩ × exp[−∫_0^t ⟨ω_i(τ)|d/dτ|ω_i(τ)⟩ dτ]},

where λ_i(t) and ω_i(t) are the eigenvalues and eigenvectors of the matrix ρ(t) given by Eq. (<ref>), and Arg denotes the argument of a complex number. To demonstrate the influence of the additional qubits and of the initial conditions of the system on the GP, we assume that the initial state of the system of interest is chosen as |Ψ⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩. Substituting Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>), the GP of the qubit acquired after a quasicyclic evolution of period t = 2π/ω_0 can be calculated as

Φ_GP = ∫_0^t [4(|C_i(t)|^2 cos^2(θ/2) − λ_−)^2] / [|C_i(t)|^2 sin^2θ + 4(|C_i(t)|^2 cos^2(θ/2) − λ_−)^2] dt,

in which λ_− = (1/2)[1 − √(|C_i(t)|^2 sin^2θ + (2|C_i(t)|^2 cos^2(θ/2) − 1)^2)] is the smaller eigenvalue of ρ(t) (not to be confused with the spectral width λ).
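For completeness, the GP integral above can be evaluated numerically. A sketch (ours), reusing the amplitude function from the earlier snippet:

import numpy as np
from scipy.integrate import quad

def gp_integrand(t, theta, N, gamma0, lam):
    # Integrand of the GP expression; lam_minus is the smaller
    # eigenvalue of rho(t) as defined in the text.
    c2 = amplitude(t, N, gamma0, lam) ** 2
    lam_minus = 0.5 * (1 - np.sqrt(c2 * np.sin(theta)**2
                                   + (2 * c2 * np.cos(theta / 2)**2 - 1)**2))
    num = 4 * (c2 * np.cos(theta / 2)**2 - lam_minus) ** 2
    return num / (c2 * np.sin(theta)**2 + num)

def geometric_phase(theta, N, gamma0, lam, omega0=1.0):
    T = 2 * np.pi / omega0  # quasicyclic period
    val, _ = quad(gp_integrand, 0.0, T, args=(theta, N, gamma0, lam))
    return val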
First, we concentrate on the influence of the number of additional qubits on the GP. In Fig. 4, we display the GP as a function of N for the fixed initial-state parameter θ = π/4, in the Markovian regime γ_0 = 0.05λ and in the non-Markovian regime γ_0 = 10λ, respectively. It is obvious that the values of the GP first rise monotonically with the increasing number of additional qubits and finally reach their maximal values, in both the Markovian and non-Markovian regimes. The saturated maximal value is only related to the initial-state parameter θ: one can easily determine from Eq. (<ref>) that it reduces to Φ_GP^0 = π(1 + cosθ) when the number of additional qubits is large enough. It is worth noting that the saturated maximal value Φ_GP^0 is just the GP acquired by a single qubit under unitary dynamics. This is consistent with the result obtained in Ref. <cit.>. Such monotonically increasing behavior of the GP in both the Markovian and non-Markovian regimes is dramatically affected by the additional qubits, in particular when the number of additional qubits is small. However, when the number of additional qubits is large enough, the GP is immune to the decoherence of the environment in both the Markovian and non-Markovian regimes. From the above results, we may control the robustness of the GP of the system against the dissipative environment by increasing the number of qubits in the dissipative reservoir. Apart from the effects of the addition of qubits, the initial-state parameter of the system also plays an important role in the evolution of the GP. In Fig. 5, we plot the GP as a function of the initial-state parameter θ for N = 2, in the Markovian regime γ_0 = 0.05λ and in the non-Markovian regime γ_0 = 10λ, respectively. One can see that the GP first decreases and then increases, vanishing at θ = π. The curves are symmetric over the period [0, 2π]. These results reflect the fact that the GP is sensitive to the initial-state parameter.

§ CONCLUSION
Though some recent works have addressed the environmental effects on the QFI dynamics of open systems in the weak coupling regime <cit.>, there are still interesting results that need further discussion and in-depth study. In this paper, we study the QFI for a single-qubit system coupled to a common dissipative reservoir with additional qubits added to the reservoir. We mainly focus on the influence of the additional noninteracting qubits on the QFI dynamics. The results show that the QFI is completely protected in both the Markovian and non-Markovian regimes by increasing the number of additional qubits. Besides, the QFI displays either monotonic decay or non-monotonic behavior with revival oscillations, depending on the number of additional qubits in the common dissipative reservoir. If N < N_c, monotonic decay of the QFI occurs. However, if N ≥ N_c, the QFI exhibits non-monotonic behavior with revival oscillations for any system-reservoir coupling strength. Moreover, we extend this model to investigate the effect of the addition of qubits and of the initial conditions of the system on the GP. The GP for a qubit undergoing nonunitary evolution is calculated according to the formula of Ref. <cit.>. We find that the GP is unaffected by the dissipative reservoir as the number of additional qubits increases, regardless of whether the qubit system is in the Markovian or non-Markovian regime. However, the GP is sensitive to the initial parameter θ, and is symmetric over the range [0, 2π].

This research is supported by the Start-up Funds for Talent Introduction and Scientific Research of Changsha University 2015 (Grant No. SF1504), the Scientific Research Project of Hunan Province Department of Education (Grant Nos. 16C0134 and 17C0133), the Natural Science Foundation of Hunan Province (Grant No. 2017JJ3346), Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education (QSQC1403) and the Project of Science and Technology Plan of Changsha (ZD1601071).

[1] R. A. Fisher, Proc. Cambridge Phil. Soc. 22, 900 (1929).
[2] H. Cramér, Mathematical Methods of Statistics (Princeton University Press, Princeton, NJ, 1946).
[2a] C. W. Helstrom, Quantum Detection and Estimation Theory (Academic Press, New York, 1976).
[2b] A. S. Holevo, Probabilistic and Statistical Aspects of Quantum Theory (North-Holland, Amsterdam, 1982).
[3] N. Li and S. L. Luo, Phys. Rev. A 88, 014301 (2013).
[4] X. M. Lu, X. G. Wang, and C. P. Sun, Phys. Rev. A 82, 042103 (2010).
[5] S. L. Luo, Phys. Rev. Lett. 91, 180403 (2003).
[6] Y. Watanabe, T. Sagawa, and M. Ueda, Phys. Rev. A 84, 042121 (2011).
[Tan1] Q. S. Tan, Y. X. Huang, L. M. Kuang, and X. G. Wang, Phys. Rev. A 89, 063604 (2014).
[Tan2] Q. S. Tan, Y. X. Huang, X. L. Yin, L. M. Kuang, and X. G. Wang, Phys. Rev. A 87, 032102 (2013).
[Zhang] Y. M. Zhang, X. W. Li, W. Yang, and G. R. Jin, Phys. Rev. A 88, 043832 (2013).
[Li1] Y. L. Li, X. Xiao, and Y. Yao, Phys. Rev. A 91, 052105 (2015).
[Abdel-Khalek] S. Abdel-Khalek, Ann. Phys. 351, 952–959 (2014).
[Berrada1] K. Berrada, Phys. Rev. A 88, 035806 (2013).
[Berrada2] K. Berrada, J. Opt. Soc. Am. B 32, 571 (2015).
[Zheng] Q. Zheng, L. Ge, Y. Yao, and Q. J. Zhi, Phys. Rev. A 91, 033805 (2015).
[He] Z. He and C. M. Yao, Chin. Phys. B 23, 110601 (2014).
[Xiao] X. Xiao, Y. Yao, W. J. Zhong, Y. L. Li, and Y. M. Xie, Phys. Rev. A 93, 012307 (2016).
[An] N. Ba An, J. Kim, and K. Kim, Phys. Rev. A 84, 022329 (2011).
[An1] N. Ba An, Phys. Lett. A 337, 2520 (2013).
[Flores] M. M. Flores and E. A. Galapon, Ann. Phys. 354, 2130 (2015).
[Behzadi] N. Behzadi, B. Ahansaz, and E. Faizi, arXiv:1607.06507v2 [quant-ph] (2017).
[Behzadi1] N. Behzadi, B. Ahansaz, A. Ektesabi, and E. Faizi, arXiv:1702.037932v2 [quant-ph] (2017).
[Berry] M. V. Berry, Proc. R. Soc. London A 392, 45 (1984).
[Tomita] A. Tomita and R. Y. Chiao, Phys. Rev. Lett. 57, 937 (1986).
[Leek] P. J. Leek, J. M. Fink, A. Blais, R. Bianchetti, M. Göppl, J. M. Gambetta, D. I. Schuster, L. Frunzio, R. J. Schoelkopf, and A. Wallraff, Science 318, 1889 (2007).
[Chen] H. Chen, M. Hu, J. Chen, and J. Du, Phys. Rev. A 80, 054101 (2009).
[Jones] J. A. Jones, V. Vedral, A. Ekert, and G. Castagnoli, Nature (London) 403, 869 (2000).
[Yi] X. X. Yi and W. Wang, Phys. Rev. A 75, 032103 (2007).
[Zhang2] X. Zhang, A. Zhang, and L. Li, Phys. Lett. A 376, 2090 (2012).
[Aharonov] Y. Aharonov and J. Anandan, Phys. Rev. Lett. 58, 1593 (1987).
[Samuel] J. Samuel and R. Bhandari, Phys. Rev. Lett. 60, 2339 (1988).
[Tong] D. M. Tong, E. Sjöqvist, L. C. Kwek, and C. H. Oh, Phys. Rev. Lett. 93, 080405 (2004).
[Sjoqvist] E. Sjöqvist, A. K. Pati, A. Ekert, J. S. Anandan, M. Ericsson, D. K. L. Oi, and V. Vedral, Phys. Rev. Lett. 85, 2845 (2000).
[Uhlmann] A. Uhlmann, Rep. Math. Phys. 24, 229 (1986).
[Singh] K. Singh, D. M. Tong, K. Basu, J. L. Chen, and J. F. Du, Phys. Rev. A 67, 032106 (2003).
[Ericsson] M. Ericsson et al., Phys. Rev. A 67, 020101 (2003).
[Peixoto] J. G. Peixoto de Faria et al., Europhys. Lett. 62, 782 (2003).
[Tong1] D. M. Tong, E. Sjöqvist, L. C. Kwek, C. H. Oh, and M. Ericsson, Phys. Rev. A 68, 022106 (2003).
[Williamson] M. S. Williamson and V. Vedral, Phys. Rev. A 76, 032115 (2007).
[Niu] C. W. Niu, G. F. Xu, L. Liu, L. Kang, D. M. Tong, and L. C. Kwek, Phys. Rev. A 81, 012116 (2010).
[Yi1] X. X. Yi, D. M. Tong, L. C. Wang, L. C. Kwek, and C. H. Oh, Phys. Rev. A 73, 052103.
[Juan] J.-J. Chen, J.-H. An, Q.-J. Tong, H.-G. Luo, and C. H. Oh, Phys. Rev. A 81, 022120.
[Li2] J. G. Li, J. Zou, and B. Shao, Phys. Rev. A 81, 062124 (2010).
[Breuer] H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2007).
[Sun] Z. Sun, J. Ma, X. Lu, and X. G. Wang, Phys. Rev. A 82, 022306 (2010).
[Dorner] U. Dorner, R. Demkowicz, B. J. Smith, J. S. Lundeen, W. Wasilewski, K. Banaszek, and A. Walmsley, Phys. Rev. Lett. 102, 040403 (2009).
[Man] Z.-X. Man, N. Ba An, and Y.-J. Xia, Phys. Rev. A 90, 062104 (2014).
http://arxiv.org/abs/1706.08634v1
{ "authors": [ "Youneng Guo", "Qinglong Tian", "Yunfei Mo", "K Zeng" ], "categories": [ "quant-ph", "cs.IT", "math.IT" ], "primary_category": "quant-ph", "published": "20170627010531", "title": "Preservation of quantum Fisher information and geometric phase of a single qubit system in a dissipative reservoir through the addition of qubits" }
[email protected]
Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
IMDARC, Department of Mathematical Sciences, Seoul National University, Seoul 08826, Korea
Department of Mathematics and Research Institute for Basic Sciences, Kyung Hee University, Seoul 02447, Korea
School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea

We propose an extension of the quantum entropy power inequality for finite-dimensional quantum systems, and prove a conditional quantum entropy power inequality by using the majorization relation as well as the concavity of entropic functions given by Audenaert, Datta, and Ozols [J. Math. Phys. 57, 052202 (2016)]. Here, we make particular use of the fact that a specific local measurement after a partial swap operation (or partial swap quantum channel) acting only on finite-dimensional bipartite subsystems does not affect the majorization relation for the conditional output states when a separable ancillary subsystem is involved. We expect our conditional quantum entropy power inequality to be useful and applicable in bounding and analyzing several capacity problems for quantum channels.

03.67.-a, 03.67.Hk, 89.70.-a

Conditional quantum entropy power inequality for d-level quantum systems
Hyunseok Jeong
========================================================================

§ INTRODUCTION
The channel capacity of a channel (or communication system) in information theory is defined as the maximum rate at which information can be reliably transmitted through the given channel <cit.>. If we choose a communication system such as a quantum mechanical system or quantum channel, which models a quantum state transforming together with its ancillary system (or environment) and is mathematically given by a completely positive, trace-preserving (CPT) map, we can naturally classify quantum, classical and private capacities over the quantum channel according to their respective input information sources <cit.>. In general, determining the channel capacity of a quantum channel is not a simple problem in quantum information theory <cit.>. In particular, it is almost impossible to obtain a channel capacity when quantum entanglement is involved <cit.>, and most channel capacities are nonadditive <cit.>. However, one way to bound the capacity of a channel is to make use of the notion of the entropy power inequality (EPI), originally proposed by Shannon <cit.>.

First, we briefly review Shannon's statement of the entropy power inequality. The differential entropy of a (continuous) random variable X taking values x∈R^d with probability density function p_X is defined as <cit.>

H(X) := −∫_R^d p_X(x) log p_X(x) d^dx,

which is the relevant information measure for the random variable X, and plays a central role in classical information theory.
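As a quick numerical illustration of this definition (a sketch of ours, not from the paper), the differential entropy of a one-dimensional Gaussian can be estimated by Monte Carlo sampling of E[−log p_X(X)] and compared with the closed form (1/2)log(2πeσ^2):

import numpy as np

rng = np.random.default_rng(0)
sigma = 1.3
x = rng.normal(0.0, sigma, size=1_000_000)
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
H_mc = -np.mean(np.log(p))                         # E[-log p_X(X)]
H_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(H_mc, H_exact)                               # agree to ~3 decimals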
If the random variable X has a Gaussian distribution G_X, we obtain the variance (1/2πe) e^{2H(X)/d} = ν(G_X), which is usually called the entropy power or energy of the input random variable X. For convenience, we omit the factor 1/(2πe) in the definition of the entropy power. Now, suppose that two independent random variables X_1 and X_2 on R^d are combined via the scaled addition rule, or (scaled) convolution operation (*_t); then, for a given output signal X_1 *_t X_2 at the end of the channel, we have the following classical entropy power inequality (cEPI) <cit.>:

ν(X_1 *_t X_2) ≥ tν(X_1) + (1−t)ν(X_2),

where X_1 *_t X_2 = √t X_1 + √(1−t) X_2 is the output signal under the convolution operation with a mixing parameter t∈[0,1]. This expression can be restated as the following inequalities:

exp(2H(Y)/d) ≥ t exp(2H(X_1)/d) + (1−t) exp(2H(X_2)/d),

or

H(Y) ≥ tH(X_1) + (1−t)H(X_2),

where Y := X_1 *_t X_2. Details of the proof can be found in several references (see <cit.>).

Recently, a quantum (Gaussian) version of the entropy power inequality, namely the quantum entropy power inequality (qEPI), has been proved <cit.> and applied to several information-processing tasks <cit.>. The qEPI is a quantum analog (but not a direct generalization) of the cEPI, equipped with a τ-transmissivity beamsplitter (simply, a τ-BS with τ∈[0,1]), whose input sources are D-mode bosonic Gaussian quantum states ρ_X_ℓ∈Sp(2D,R), ∀ℓ∈{1,2}, on the symplectic space. If we define an entropic function as ν_κ(ρ_X) := e^{κS(ρ_X)}, where S(ϱ) = −Tr(ϱ log ϱ) is the von Neumann entropy of a quantum state ϱ, then we have

ν_κ(ρ_X_1 ⊞_τ ρ_X_2) ≥ τν_κ(ρ_X_1) + (1−τ)ν_κ(ρ_X_2),

where ρ_X_1 ⊞_τ ρ_X_2 ∈ Sp(2D,R) is the output signal of the τ-BS, known as the (Gaussian) quantum addition rule, and the constant is κ = 1/D in the Gaussian case. Generally, the beamsplitter transformation with parameter τ can be interpreted as a CPT map 𝐆_τ over two bosonic modes ρ_X_ℓ such that

𝐆_τ: ρ_X_1 ⊗ ρ_X_2 ↦ ρ_X_1 ⊞_τ ρ_X_2 = Tr_X_2 V_τ(ρ_X_1 ⊗ ρ_X_2)V_τ^†,

where the beamsplitting operation is explicitly given by V_τ := √τ 𝟙 + ι√(1−τ)σ_x <cit.>, with the complex unit ι = √(−1). We note that 𝟙 is the identity matrix and σ_x is the Pauli x-matrix; the τ-BS operation generally interpolates between these two operators. Now, if we define ρ_Y := ρ_X_1 ⊞_τ ρ_X_2, then the qEPI, Eq. (<ref>), has the entropic form S(ρ_Y) ≥ τS(ρ_X_1) + (1−τ)S(ρ_X_2) for two independent inputs ρ_X_ℓ and the τ-BS. By employing the quantum de Bruijn inequality and the entropy-scaling property known as `Gaussification', one obtains this entropic inequality <cit.>: the entropy of the channel's mixed output is always increased.

A qEPI for d-dimensional quantum states (qudits) has also been proposed <cit.>, and is given by the form of Eq. (<ref>), but it is known to hold when the constant κ is restricted to 0 ≤ κ ≤ 1/(log d)^2, where d ≃ 2D. In the proof, the symmetry and the concavity of the entropic function ν_κ(ρ) in this region of κ, via the majorization relation on a quantum state ρ, were used. Furthermore, it is important to note that the independent input quantum states for the quantum channel are represented by ρ_X_ℓ∈D(C^d) with ℓ∈{1,2}, where D(C^d) := {ρ∈B(C^d): Tr ρ = 1, ρ = ρ^† ≥ 0} is the class of density matrices on the bounded linear operators B(C^d) (over the d-dimensional Hilbert space), and the mixing operations with parameter τ are given by a partial swap as follows. We now review the partial swap operation (p-Swap), denoted by ⊞_τ, which is also known as the qudit addition rule <cit.>.
For any τ∈[0,1] and any density matrices ρ_X_ℓ∈D(C^d), the output of the quantum channel via the p-Swap is

ρ_X_1 ⊞_τ ρ_X_2 = 𝐍_τ(ρ_X_1 ⊗ ρ_X_2) = Tr_X_2[U_τ(ρ_X_1 ⊗ ρ_X_2)U_τ^†] = τρ_X_1 + (1−τ)ρ_X_2 − ι√(τ(1−τ))[ρ_X_1, ρ_X_2],

where [A,B] = AB − BA is the commutator, the resulting state is also a d-level quantum state, and U_τ := √τ 𝟙 + ι√(1−τ)W, where W is the swap operator such that Wρ_ABW^† = ρ_BA on two d-level quantum systems. We call the map 𝐍_τ(·) the partial swap channel on d-level quantum systems.

In this study, we prove a conditional version of the qEPI (CqEPI) for arbitrary d-level quantum states in Sec. <ref> through a conditional majorization relation (see Sec. <ref>). We discuss our results and outline our future plans in Sec. <ref>.

§ CONDITIONAL EIGENVALUES AND MAJORIZATION RELATION FOR QUANTUM STATES
It was conjectured that, for any quantum state ρ_X_1X_2E and any mixing parameter τ∈[0,1],

S(ρ_X_1 ⊞_τ ρ_X_2|ρ_E) ≥ τS(ρ_X_1|ρ_E) + (1−τ)S(ρ_X_2|ρ_E),

where the beamsplitter operation with τ acts on any two quantum systems <cit.>. However, for Gaussian product states, specifically those of the form ρ_X_1E_1 ⊗ ρ_X_2E_2, Koenig proved that S(ρ_Y|ρ_E) ≥ τS(ρ_X_1|ρ_E_1) + (1−τ)S(ρ_X_2|ρ_E_2), where ρ_Y = ρ_X_1 ⊞_τ ρ_X_2 and ρ_E = ρ_E_1 ⊗ ρ_E_2 is the (separable) ancillary system. Koenig referred to this inequality as the conditional quantum EPI, or CqEPI, in the Gaussian regime. In his proof, Koenig exploits the quantum version of the “scaling property for the conditional entropy” (Lemma 6.2 in Ref. <cit.>) and the “conditional de Bruijn identity” (Theorem 7.3 in Ref. <cit.>) in the Gaussian regime. Recently, a similar result for the Gaussian CqEPI was introduced by de Palma and Trevisan <cit.>. In their papers, they used the quantum conditional entropy notation S(ρ_A|ρ_B) = S(A|B)_ρ_AB := S(AB)_ρ_AB − S(B)_ρ_B, which denotes the von Neumann entropy of system A conditioned on system B. However, in this paper, we use the different notation of a set of conditional eigenvalues, such as λ(ρ_A|_B), obtained from a quantum measurement performed on the subsystem B, so as to show another version of the CqEPI based on local measurements, which is not the same as the CqEPI with respect to the quantum conditional entropy.

Our approach is related to quantum discord, which represents a type of quantum correlation different from entanglement <cit.>. The Gaussian CqEPI comes from the fact that, if a quantum state ρ_X_1X_2E has a conditionally independent form, i.e., ρ_X_1X_2E = ρ_X_1E_1 ⊗ ρ_X_2E_2, then it can be decomposed as a direct sum of tensor products <cit.> such that

ρ_X_1X_2E = ⊕_j p_j ρ_X_1E_1^j ⊗ ρ_X_2E_2^j,

and the von Neumann entropy of the state ρ_X_1X_2E satisfies S(⊕_j p_j ρ_j) = ∑_j p_j S(ρ_j) + H({p_j}_j), where H(·) is the Shannon entropy <cit.>. Instead of Gaussian product states, we give a similar proof of the qEPI for any d-level product states ρ_X_1E_1 ⊗ ρ_X_2E_2, conditioned through a quantum measurement on the environments E_1 and E_2, respectively. For the d-level CqEPI, we use the majorization relation for the eigenvalues of ρ_X_ℓ|_E_ℓ∈D(C^d) for all ℓ = 1,2, instead of the quantum conditional entropy. Before the main proof, we briefly review the majorization condition for quantum states. Let 𝐦 = (m_1, m_2, …, m_d) and 𝐧 = (n_1, n_2, …, n_d) ∈ R^d, with components arranged in decreasing order, m_1^↓ ≥ m_2^↓ ≥ ⋯ ≥ m_d^↓ and n_1^↓ ≥ n_2^↓ ≥ ⋯ ≥ n_d^↓. Then, for any 𝐦 and 𝐧 ∈ R^d, 𝐦 is said to be majorized by 𝐧, written 𝐦 ≺ 𝐧, if ∑_j=1^k m_j^↓ ≤ ∑_j=1^k n_j^↓ for all k∈{1,…,d}, with equality at k = d.
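In code, this majorization test reads as follows (a minimal Python sketch of ours; function and variable names are illustrative):

import numpy as np

def majorized_by(m, n, tol=1e-10):
    # True if m ≺ n: partial sums of the decreasingly sorted n dominate
    # those of m, with equality of the full sums at k = d.
    m_down, n_down = np.sort(m)[::-1], np.sort(n)[::-1]
    return (np.all(np.cumsum(m_down) <= np.cumsum(n_down) + tol)
            and abs(m_down.sum() - n_down.sum()) <= tol)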
In addition, a function f: R^d → R is called Schur concave if f(𝐦) ≥ f(𝐧) whenever 𝐦 ≺ 𝐧 <cit.>. The majorization technique explained above carries over directly to the density operator formalism of the quantum regime <cit.>. By using the definition of the majorization condition above and the partial swap channel in Eq. (<ref>), it was proved in Refs. <cit.> that, for any quantum states ρ_X_1, ρ_X_2 ∈ D(C^d),

λ(ρ_X_1 ⊞_τ ρ_X_2) ≺ τλ(ρ_X_1) + (1−τ)λ(ρ_X_2),

where λ(ρ) denotes the set of eigenvalues of a quantum state ρ, and ⊞_τ the p-Swap operation with mixing parameter τ∈[0,1]. This point is crucial. Our main goal in this study is to extend Eq. (<ref>) to the (measurement-based) conditional version for d-level quantum states.
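As a numerical illustration of the relation above (our sketch; it reuses majorized_by from the earlier snippet), one can draw random qudit states and verify the majorization directly:

import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    # Random full-rank density matrix from a Ginibre-type construction.
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def partial_swap(rho1, rho2, tau):
    # tau*rho1 + (1-tau)*rho2 - 1j*sqrt(tau*(1-tau))*[rho1, rho2]
    comm = rho1 @ rho2 - rho2 @ rho1
    return tau * rho1 + (1 - tau) * rho2 - 1j * np.sqrt(tau * (1 - tau)) * comm

d, tau = 3, 0.4
rho1, rho2 = random_state(d), random_state(d)
lam_out = np.linalg.eigvalsh(partial_swap(rho1, rho2, tau))
lam_mix = (tau * np.sort(np.linalg.eigvalsh(rho1))[::-1]
           + (1 - tau) * np.sort(np.linalg.eigvalsh(rho2))[::-1])
print(majorized_by(lam_out, lam_mix))  # expected: True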
§ CQEPI: MAIN RESULTS
We now show that the p-Swap and its identity (Theorem 1.1 in Ref. <cit.>) can be extended to a conditional version of the entropy power inequality. Here, we make use of the fact that local measurements (LMs) after the partial swap operation do not change the majorization condition when the separable environments E_1 and E_2 are measured locally (see Fig. <ref> and Lemma <ref> below). Note that, if ρ_E ≠ ρ_E_1 ⊗ ρ_E_2, the CqEPI is still open, as in Eq. (<ref>). First, we briefly review the output states of the quantum channel under the partial swap operation. Let ρ_X_1X_2E_1E_2 := ρ_X_1E_1 ⊗ ρ_X_2E_2 be the total quantum state. Then we have

ρ_YE_1E_2 = (𝐍_τ ⊗ 𝟙_E_1E_2)(ρ_X_1X_2E_1E_2) = Tr_X_2[(U_τ ⊗ 𝟙_E_1E_2)(ρ_X_1E_1 ⊗ ρ_X_2E_2)(U_τ^† ⊗ 𝟙_E_1E_2)],

and recall that ρ_Y = ρ_X_1 ⊞_τ ρ_X_2 = 𝐍_τ(ρ_X_1 ⊗ ρ_X_2) = τρ_X_1 + (1−τ)ρ_X_2 − ι√(τ(1−τ))[ρ_X_1, ρ_X_2]. We now introduce a new set of eigenvalues of ρ_Y, induced by ρ_YE_1E_2 after local measurements on the separable environment ρ_E_1 ⊗ ρ_E_2, for which we use the notation λ(ρ_Y|_E_1E_2). Notice that the notation ρ_X|_E does not mean the conditional quantum state introduced in Ref. <cit.>, but (as mentioned above) is just the quantum state ρ_X after a local measurement performed on the subsystem E of ρ_XE. For example, if we choose a set of local measurements described by {M_j}_E on the subsystem ρ_E (1 ≤ j ≤ d_E), then we define

ρ_X|_E(j) = (1/p_j) Tr_E[(𝟙_X ⊗ M_j)ρ_XE(𝟙_X ⊗ M_j^†)],

where p_j = Tr(M_j^†M_jρ_E) is the normalization factor. Thus, we can naturally define the set of conditional eigenvalues after a specific local measurement on E as follows:

(∀ρ_XE)  λ(ρ_X|_E(j)) := λ( Tr_E[(𝟙_X ⊗ M_j)ρ_XE(𝟙_X ⊗ M_j^†)]/p_j ).

As a subsidiary example, let us consider ρ_YE_1E_2 = (𝐍_τ ⊗ 𝟙_E_1E_2)(ρ_X_1E_1 ⊗ ρ_X_2E_2) and a situation in which local projective measurements are involved. Let {|ψ_j⟩⟨ψ_j|_E_1}_j=1^d_E_1 and {|ϕ_k⟩⟨ϕ_k|_E_2}_k=1^d_E_2 be the local measurements on the environmental subsystems ρ_E_1 and ρ_E_2, respectively. To find the conditional eigenvalues, we define the final states (conditional outputs) after the local measurements on the subsystems E_1 and E_2 as σ_X_1^(j) = (1/q_j^(1)) _E_1⟨ψ_j|ρ_X_1E_1|ψ_j⟩_E_1 and σ_X_2^(k) = (1/q_k^(2)) _E_2⟨ϕ_k|ρ_X_2E_2|ϕ_k⟩_E_2, where q_j^(1) = _E_1⟨ψ_j|ρ_E_1|ψ_j⟩_E_1 and q_k^(2) = _E_2⟨ϕ_k|ρ_E_2|ϕ_k⟩_E_2. Then

σ_Y^(j,k) := (1/p_j,k^(1,2)) _E_1E_2⟨ψ_j,ϕ_k|ρ_YE_1E_2|ψ_j,ϕ_k⟩_E_1E_2 = 𝐍_τ(σ_X_1^(j) ⊗ σ_X_2^(k)) = σ_X_1^(j) ⊞_τ σ_X_2^(k).

Note that p_j,k^(1,2) := _E_1E_2⟨ψ_j,ϕ_k|ρ_E_1E_2|ψ_j,ϕ_k⟩_E_1E_2 = _E_1⟨ψ_j|ρ_E_1|ψ_j⟩_E_1 · _E_2⟨ϕ_k|ρ_E_2|ϕ_k⟩_E_2 = q_j^(1) · q_k^(2), since ρ_E_1E_2 = ρ_E_1 ⊗ ρ_E_2 is separable. By using Theorem 1.1 in Ref. <cit.>, we can naturally obtain that

λ(σ_X_1^(j) ⊞_τ σ_X_2^(k)) ≺ τλ(σ_X_1^(j)) + (1−τ)λ(σ_X_2^(k)).

This relation directly implies that specific local measurements after the p-Swap operation do not affect the majorization relation for the conditional output states. Without loss of generality, we can generalize the (local) projective measurements to a (local) general measurement formalism. For the main proof, we need the following definition, which is a natural extension of Eq. (<ref>) (see also Fig. <ref>).

[Output state of the p-Swap operation] For any quantum state of the form ρ_X_1X_2E_1E_2 := ρ_X_1E_1 ⊗ ρ_X_2E_2, the output state under the partial swap operation with τ∈[0,1] on the subsystems X_1 and X_2 is given by

ρ_YE_1E_2 = τρ_X_1E_1 + (1−τ)ρ_X_2E_2 − ι√(τ(1−τ))[ρ_X_1E_1, ρ_X_2E_2].

By using Definition <ref> and Eq. (<ref>), we can derive the following crucial lemma, namely the `conditional majorization relation', for our product d-level quantum states. First, we define ρ_X_1|_E_1(j) := (1/q_j^(1)) Tr_E_1(M_j^(1)ρ_X_1E_1M_j^†(1)) and ρ_X_2|_E_2(k) := (1/q_k^(2)) Tr_E_2(M_k^(2)ρ_X_2E_2M_k^†(2)), i.e., the outcome states after local measurements given by {M_j^(1)}_E_1 and {M_k^(2)}_E_2, where q_j^(1) = Tr(M_j^†(1)M_j^(1)ρ_E_1) and q_k^(2) = Tr(M_k^†(2)M_k^(2)ρ_E_2), on the environmental subsystems ρ_E_1 and ρ_E_2, respectively. Note that, for each set, the measurement elements satisfy ∑_j=1^d M_j^†M_j = 𝟙.

[Conditional majorization relation] For any pair of density matrices ρ_X_1E_1, ρ_X_2E_2 ∈ D(C^d×d_E_ℓ), any τ∈[0,1] and all j, k, if we take local measurements {M_j^(1)}_E_1 and {M_k^(2)}_E_2 on the subsystems ρ_E_1 and ρ_E_2, respectively, then we have

ρ_Y|_E_1(j)E_2(k) = ρ_X_1|_E_1(j) ⊞_τ ρ_X_2|_E_2(k).

This fact directly implies that, for each measurement outcome j and k,

λ(ρ_Y|_E_1(j)E_2(k)) ≺ τλ(ρ_X_1|_E_1(j)) + (1−τ)λ(ρ_X_2|_E_2(k)).

Here, the environmental subsystem is given by the separable state ρ_E_1E_2 = ρ_E_1 ⊗ ρ_E_2.

Proof. It is sufficient to prove that, for each j and k, ρ_Y|_E_1(j)E_2(k) = ρ_X_1|_E_1(j) ⊞_τ ρ_X_2|_E_2(k). That is,

ρ_Y|_E_1(j)E_2(k) := (1/p_j,k^(1,2)) Tr_E_1E_2[(M_j^(1) ⊗ M_k^(2))ρ_YE_1E_2(M_j^†(1) ⊗ M_k^†(2))] = 𝐍_τ(ρ_X_1|_E_1(j) ⊗ ρ_X_2|_E_2(k)) = ρ_X_1|_E_1(j) ⊞_τ ρ_X_2|_E_2(k),

where we again use the fact that the probability p_j,k^(1,2) := Tr_E_1E_2[(M_j^†(1)M_j^(1) ⊗ M_k^†(2)M_k^(2))ρ_E_1E_2] = Tr(M_j^†(1)M_j^(1)ρ_E_1) · Tr(M_k^†(2)M_k^(2)ρ_E_2) = q_j^(1) · q_k^(2) for the (separable) environmental system ρ_E = ρ_E_1 ⊗ ρ_E_2. This completes the proof. ▪

Following the proof of Lemma <ref>, for any Schur concave function f, we define the function values as

f(ρ_Y|_E_1E_2) = min_{M_j^(1)},{M_k^(2)} ∑_j,k q_j^(1)q_k^(2) f(ρ_Y|_E_1(j)E_2(k)),
f(ρ_X_1|_E_1) = min_{M_j^(1)} ∑_j q_j^(1) f(ρ_X_1|_E_1(j)),
f(ρ_X_2|_E_2) = min_{M_k^(2)} ∑_k q_k^(2) f(ρ_X_2|_E_2(k)).

Then, by exploiting Lemma <ref>, we can prove the following theorem, which is our main result.

[Conditional qudit EPI (CqEPI)] Let ρ_X_1E_1 and ρ_X_2E_2 be any discrete d×d_E_ℓ-level quantum states with a separable environment ρ_E_1 ⊗ ρ_E_2 and ℓ∈{1,2}. For any concave and symmetric function ν_κ with 0 ≤ κ ≤ 1/(log d)^2, and for any τ∈[0,1], we have

ν_κ(ρ_Y|_E_1E_2) ≥ τν_κ(ρ_X_1|_E_1) + (1−τ)ν_κ(ρ_X_2|_E_2).

Proof. For each measurement outcome j and k, let ρ_X_1'|_E_1, ρ_X_2'|_E_2 ∈ D(C^d) be diagonal states whose entries are the eigenvalues of ρ_X_1|_E_1 and ρ_X_2|_E_2, respectively, arranged in decreasing order.
Since λ(ρ_X_1'|_E_1) = λ(ρ_X_1|_E_1) and λ(ρ_X_2'|_E_2) = λ(ρ_X_2|_E_2), we then have, from Eq. (<ref>),

λ(ρ_Y|_E_1E_2) ≺ τλ(ρ'_X_1|_E_1) + (1−τ)λ(ρ'_X_2|_E_2) = λ(τρ'_X_1|_E_1 + (1−τ)ρ'_X_2|_E_2).

For any entropic function ν_κ(·) that is symmetric and concave in the eigenvalues of density matrices, we have

ν_κ(ρ_X_1|_E_1 ⊞_τ ρ_X_2|_E_2) ≥ ν_κ(τρ'_X_1|_E_1 + (1−τ)ρ'_X_2|_E_2) ≥ τν_κ(ρ_X_1'|_E_1) + (1−τ)ν_κ(ρ_X_2'|_E_2) = τν_κ(ρ_X_1|_E_1) + (1−τ)ν_κ(ρ_X_2|_E_2),

where the first inequality follows from the Schur concavity, the second from the concavity of the entropic function, and the final equality from the symmetry. It follows that

∑_j,k q_j^(1)q_k^(2) ν_κ(ρ_Y|_E_1(j)E_2(k)) ≥ τ∑_j q_j^(1) ν_κ(ρ_X_1|_E_1(j)) + (1−τ)∑_k q_k^(2) ν_κ(ρ_X_2|_E_2(k)) ≥ τν_κ(ρ_X_1|_E_1) + (1−τ)ν_κ(ρ_X_2|_E_2).

This completes the proof. ▪

§ DISCUSSION
In summary, we have investigated a conditional entropy power inequality for d-dimensional quantum systems under the assumption that the ancillary environmental subsystems are separable. In the proof, we considered the post-measurement property of quantum states under a local quantum operation (specifically, a measurement) after the p-Swap on d-level quantum states (i.e., qudits), and applied the well-known majorization technique to the (nonincreasingly ordered) eigenvalues of the quantum states. Our CqEPI construction might be useful for characterizing entanglement-assisted capacities, such as for a thermal (white) noise Gaussian channel, or in quantum superdense coding.

We here discuss what is known about the entropy power inequality so far; a summary is provided in Table <ref>. Let us denote the entropy photon-number inequality as EPnI and the continuous variable (CV) regime by ⋆. The CV EPnI proposed by Guha et al., with an average photon number, is an important open question in quantum Shannon theory; recently some progress has been reported on this topic <cit.>, but it is still unsolved in its original form. Furthermore, whether or not κ = 1/D in the EPnI (⋆) is also an important conjecture. For the qEPI and CqEPI in their qudit versions, the entropy power inequality is still unknown for the value κ = 1/d or κ > κ_1. Also, for the qudit EPnI with κ = 1/d or κ > κ_2, the entropy power inequality is open; we do not have any strong evidence for its concavity property.

Finally, we have open questions of several different kinds. For example, dual relations for the EPI and qEPI (and also for conditional versions of the EPI) in the sense of a complementary quantum channel might be intriguing; moreover, certain inequalities of EPIs for different (or hybrid) inputs also seem to be important. It would also be interesting to study whether or not a (conditional) quantum entropy power inequality holds for quantum conditional states <cit.>, as well as for general multipartite quantum systems.

§ ACKNOWLEDGMENTS
This work was supported by the National Research Foundation of Korea (NRF) through a grant funded by the Korean government (MSIP) (Grant No. 2010-0018295) and by the KIST Institutional Program (Project No. 2E26680-16-P025). In addition, K.J. acknowledges financial support by the National Research Foundation of Korea (NRF) through a grant funded by the Korean government (Ministry of Science and ICT) (NRF-2017R1E1A1A03070510 & NRF-2017R1A5A1015626). S.L. acknowledges financial support by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1A2B4014928).
[S48] C. E. Shannon, Bell Syst. Tech. J. 27, 379–423, 623–656 (1948), http://dx.doi.org/10.1002/j.1538-7305.1948.tb01338.x
[NC00] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000).
[W13] M. M. Wilde, Quantum Information Theory (Cambridge University Press, 2013).
[H06] A. S. Holevo, Proc. of the Int. Congress of Mathematicians (Madrid, Spain, 2006).
[CEM+15] T. Cubitt, D. Elkouss, W. Matthews, M. Ozols, D. Pérez-García, and S. Strelchuk, Nat. Commun. 6, 6739 (2015), http://dx.doi.org/10.1038/ncomms7739
[H09] M. B. Hastings, Nat. Phys. 5, 255–257 (2009), http://dx.doi.org/10.1038/nphys1224
[SY08] G. Smith and J. Yard, Science 321, 1812–1815 (2008), http://dx.doi.org/10.1126/science.1162242
[LWZG09] K. Li, A. Winter, X. Zou, and G. Guo, Phys. Rev. Lett. 103, 120501 (2009), https://doi.org/10.1103/PhysRevLett.103.120501
[KS13] R. König and G. Smith, Nat. Photon. 7, 142–146 (2013), http://dx.doi.org/10.1038/nphoton.2012.342
[KS13+] R. König and G. Smith, Phys. Rev. Lett. 110, 040501 (2013), http://dx.doi.org/10.1103/PhysRevLett.110.040501
[BW14] B. R. Bardhan and M. M. Wilde, Phys. Rev. A 89, 022302 (2014), http://dx.doi.org/10.1103/PhysRevA.89.022302
[DPR16] N. Datta, Y. Pautrat, and C. Rouzé, J. Math. Phys. 58, 012205 (2017), http://dx.doi.org/10.1063/1.4974223
[L78] E. H. Lieb, Commun. Math. Phys. 62, 35–41 (1978), http://dx.doi.org/10.1007/BF01940328
[DCT91] A. Dembo, T. Cover, and J. A. Thomas, IEEE Trans. Inf. Theory 37, 1501–1518 (1991), http://dx.doi.org/10.1109/18.104312
[B75] W. Beckner, Ann. Math. 102, 159–182 (1975), http://dx.doi.org/10.2307/1970980
[BL76] H. J. Brascamp and E. H. Lieb, Adv. Math. 102, 151–172 (1976), http://dx.doi.org/10.1016/0001-8708(76)90184-5
[R11] O. Rioul, IEEE Trans. Inf. Theory 57, 33–55 (2011), http://dx.doi.org/10.1109/TIT.2010.2090193
[S59] A. J. Stam, Inf. Control 2, 101–112 (1959), http://dx.doi.org/10.1016/S0019-9958(59)90348-1
[B65] N. M. Blachman, IEEE Trans. Inf. Theory 11, 267–271 (1965), http://dx.doi.org/10.1109/TIT.1965.1053768
[KS14] R. König and G. Smith, IEEE Trans. Inf. Theory 60, 1536–1548 (2014), http://dx.doi.org/10.1109/TIT.2014.2298436
[PMG14] G. de Palma, A. Mari, and V. Giovannetti, Nat. Photon. 8, 958–964 (2014), http://dx.doi.org/10.1038/nphoton.2014.252
[GES08] S. Guha, B. I. Erkmen, and J. H. Shapiro, Information Theory and Applications Workshop, pp. 128–130 (2008), http://dx.doi.org/10.1109/ITA.2008.4601037; arXiv:0710.5666.
[PMLG15] G. de Palma, A. Mari, S. Lloyd, and V. Giovannetti, Phys. Rev. A 91, 032320 (2015), http://dx.doi.org/10.1103/PhysRevA.91.032320
[ADO16] K. Audenaert, N. Datta, and M. Ozols, J. Math. Phys. 57, 052202 (2016), http://dx.doi.org/10.1063/1.4950785
[BCR] If the two input states are D-mode bosonic field quadratures with annihilation operators α̂_1,…,α̂_D and β̂_1,…,β̂_D, respectively, then we obtain the D-mode output quadratures as γ̂_j = √τ α̂_j + ι√(1−τ) β̂_j for all j∈{1,…,D}.
[K15] R. Koenig, J. Math. Phys. 56, 022201 (2015), http://dx.doi.org/10.1063/1.4906925
[PT17] G. de Palma and D. Trevisan, arXiv:1706.00440v2 (2017), https://arxiv.org/abs/1706.00440; first online in Commun. Math. Phys. (2018), https://link.springer.com/article/10.1007/s00220-017-3082-8
[OZ01] H. Ollivier and W. H. Zurek, Phys. Rev. Lett. 88, 017901 (2001), http://dx.doi.org/10.1103/PhysRevLett.88.017901
[HV01] L. Henderson and V. Vedral, J. Phys. A: Math. Gen. 34, 6899 (2001), http://dx.doi.org/10.1088/0305-4470/34/35/315
[DSC08] A. Datta, A. Shaji, and C. M. Caves, Phys. Rev. Lett. 100, 050502 (2008), http://dx.doi.org/10.1103/PhysRevLett.100.050502
[AD10] G. Adesso and A. Datta, Phys. Rev. Lett. 105, 030501 (2010), http://dx.doi.org/10.1103/PhysRevLett.105.030501
[MBC+12] K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, Rev. Mod. Phys. 84, 1655 (2012), http://dx.doi.org/10.1103/RevModPhys.84.1655
[GTA13] D. Girolami, T. Tufarelli, and G. Adesso, Phys. Rev. Lett. 110, 240402 (2013). http://dx.doi.org/10.1103/PhysRevLett.110.240402
[HJPW04] P. Hayden, R. Jozsa, D. Petz, and A. Winter, Commun. Math. Phys. 246, 359-374 (2004). http://dx.doi.org/10.1007/s00220-004-1049-z
[SSA] The product quantum state introduced in Refs. <cit.> also satisfies the strong subadditivity with equality.
[B97] R. Bhatia, Matrix Analysis (Graduate Texts in Mathematics, Springer, New York, 1997).
[CLL16] E. A. Carlen, E. H. Lieb, and M. Loss, J. Math. Phys. 57, 062203 (2016). http://dx.doi.org/10.1063/1.4953638
[LS13] M. S. Leifer and R. W. Spekkens, Phys. Rev. A 88, 052130 (2013). http://dx.doi.org/10.1103/PhysRevA.88.052130
[GSG16] S. Guha, J. H. Shapiro, and R. García-Patrón Sanchez, IEEE Int. Symp. on Information Theory (ISIT) 2016. http://dx.doi.org/10.1109/ISIT.2016.7541390
[PTG16] G. de Palma, D. Trevisan, and V. Giovannetti, IEEE Trans. Inf. Theory 63, 728-737 (2016). http://dx.doi.org/10.1109/TIT.2016.2621748
[PTG17] G. de Palma, D. Trevisan, and V. Giovannetti, Phys. Rev. Lett. 118, 160503 (2017). http://dx.doi.org/10.1103/PhysRevLett.118.160503
arXiv:1706.08742v3 [quant-ph]: Kabgyun Jeong, Soojoon Lee, and Hyunseok Jeong, "Conditional quantum entropy power inequality for d-level quantum systems" (2017).
On Bose-Einstein condensation and superfluidity of trapped photons with coordinate-dependent mass and interactions

Oleg L. Berman^1,2, Roman Ya. Kezerashvili^1,2, and Yurii E. Lozovik^3,4

Brooklyn, NY 11201, USA
New York, NY 10016, USA

The condensate density profile of a trapped two-dimensional gas of photons in an optical microcavity, filled by a dye solution, is analyzed taking into account the coordinate-dependent effective mass of cavity photons and the photon-photon coupling parameter. The profiles for the densities of the superfluid and normal phases of trapped photons in the different regions of the system at a fixed temperature are analyzed. The radial dependencies of the local mean-field phase transition temperature T_c^0 (r) and the local Kosterlitz-Thouless transition temperature T_c (r) for trapped microcavity photons are obtained. The coordinate dependence of the cavity photon effective mass and the photon-photon coupling parameter is important for mirrors of smaller radius with high trapping frequency, which provide BEC and superfluidity for a smaller critical number of photons at the same temperature. We discuss a possibility of an experimental study of the density profiles for the normal and superfluid components in the system under consideration.

Key words: Photons in a microcavity; Bose-Einstein condensation of photons; superfluidity of photons.
PACS numbers: 03.75.Hh, 42.55.Mv, 67.85.Bc, 67.85.Hj

§ INTRODUCTION

When a system of bosons is cooled to low temperatures, a substantial fraction of the particles spontaneously occupies the single lowest-energy quantum state. This phenomenon is known as Bose-Einstein condensation (BEC), and it occurs in many-particle systems of bosons with mass m and temperature T when the de Broglie wavelength of the Bose particle exceeds the mean interparticle distance <cit.>. The most remarkable consequence of BEC is that there should be a temperature below which a finite fraction of all the bosons "condense" into the same one-particle state with macroscopic properties described by a single condensate wavefunction, promoting quantum physics to classical time- and length scales.

Most recently, the observations at room temperature of the BEC of a two-dimensional photon gas confined in an optical microcavity, formed by spherical mirrors and filled by a dye solution, were reported <cit.>. The interaction between microcavity photons is achieved through the interaction of the photons with the nonlinear medium of the microcavity, filled by a dye solution. While the main contribution to the interaction in the experiment, reported in Ref. Klaers_Nature, is thermooptic, it is not a contact interaction. It is known that BEC of bosons can exist without particle-particle interactions <cit.> (see Ref. Pitaevskii for the details), but at least the interactions with the surrounding media are necessary to achieve thermodynamical equilibrium. For a photon BEC, this can be achieved by the interaction with incoherent phonons <cit.>. The influence of interactions on condensate-number fluctuations in a BEC of microcavity photons was studied in Ref. Stoof. The kinetics of photon thermalization and condensation was analyzed in Refs. Kirton1,Kirton2,Kirton3.
The kinetics of a trapped photon gas in a microcavity, filled by a dye solution, was studied, and a crossover between the driven-dissipative laser dynamics and a thermalized Bose-Einstein condensation of photons was observed <cit.>. In previous theoretical studies, the equation of motion for a BEC of photons confined by an axially symmetrical trap in a microcavity was obtained. It was assumed that the changes of the cavity width are much smaller than the width of the trap <cit.>. This assumption results in a coordinate-independent effective photon mass m_ph and photon-photon coupling parameter g. In this Paper, we study the local superfluid and normal density profiles for a trapped two-dimensional gas of photons with a coordinate-dependent effective mass and photon-photon coupling parameter in an optical microcavity, filled by a dye solution. We propose an approach to study the local BEC and local superfluidity of the cavity photon gas in the framework of the local density approximation (LDA) in traps of larger size, without the assumption that the total changes of the cavity width are much smaller than the size of the trap. In this case, we study the effects of the coordinate-dependent effective mass and photon-photon coupling parameter on the superfluid and normal density profiles, as well as on the profiles of the local temperature of the phase transition for trapped cavity photons. Such an approach is useful for mirrors of smaller radius with high trapping frequency, which provide BEC and superfluidity for a smaller critical number of photons at the same temperature.

The paper is organized in the following way. In Sec. <ref>, we obtain the condensate density profile for the trapped microcavity photon BEC with locally variable mass and interactions. The expression for the number of particles in the condensate is analyzed in Sec. <ref>. In Sec. <ref>, the dependence of the condensate parameters on the geometry of the trap is discussed. In Sec. <ref>, we study the collective excitation spectrum and superfluidity of the 2D weakly interacting Bose gas of cavity photons. The results of our calculations are discussed in Sec. <ref>. The proposed experiment for measuring the distribution of the local density of a photon BEC is described in Sec. <ref>. The conclusions follow in Sec. <ref>.

§ THE CONDENSATE DENSITY PROFILE

While at finite temperatures there is no true BEC in any infinite untrapped two-dimensional (2D) system, a true 2D BEC quantum phase transition can be obtained in the presence of a confining potential <cit.>. In an infinite translationally invariant two-dimensional system, without a trap, superfluidity occurs via a Kosterlitz-Thouless superfluid (KTS) phase transition <cit.>. While the KTS phase transition occurs in systems characterized by thermal equilibrium, it survives in a dissipative highly nonequilibrium system driven into a steady state <cit.>. The trap for the cavity photons can be formed by the concave spherical mirrors of the microcavity, which provide the axial symmetry for the trapped gas of photons. Thus, the transverse (along the xy plane of the cavity) confinement of photons can be achieved by using an optical microcavity with a variable width. Let us introduce the frame of reference, where the z-axis is directed along the axis of the cavity mirrors, and the (x,y) plane is perpendicular to this axis.
The energy spectrum E(k) for small wave vectors k of photons, confined in the z direction in an ideal microcavity with the coordinate-dependent width L(𝐫), is given by <cit.>

E(k) = (ħ c/√(ε)) √(k_z^2 + k_⊥^2) ≈ πħ c ñ/(√(ε) L(𝐫)) + ħ^2 k_⊥^2/(2 m_ph(𝐫)),

where m_ph(𝐫) = πħñ√(ε)/[L(𝐫)c] is the transverse (along the xy plane of the cavity) effective coordinate-dependent photon mass, k_z is the wave vector component in the z direction along the axis of the cavity, k_⊥ is the wave vector component in the (x,y) plane, perpendicular to the axis of the cavity, c is the speed of light in vacuum, ε is the dielectric constant of the microcavity, and ñ = 1, 2, …. Below we consider the lowest mode ñ = 1. Assuming that the energy difference between the quantization levels ñ, caused by the boundaries of the cavity, is much larger than the second term in Eq. (<ref>) for the photon energy, corresponding to a single quantum level, for small k_⊥ we treat our system as a quasi-two-dimensional system in the (x,y) plane. Let us mention that Eq. (<ref>) is valid only if the radius of the curvature of the mirrors is much larger than all other length scales in the system under consideration, i.e., the healing length ξ, etc., which is true for the parameters used for our calculations, as can be seen from the estimations below.

The Gross-Pitaevskii equation for the wave function of the photon condensate in a weakly interacting (through a dye solution) photon gas can be obtained by following the standard procedure for the derivation of the Gross-Pitaevskii equation for any weakly interacting Bose gas <cit.> (see also Ref. Elistratov). Note that the Gross-Pitaevskii-like equation for the cavity photons can also be obtained from the Maxwell equations in a non-linear medium <cit.>. We generalize the Gross-Pitaevskii equation for cavity photons with a coordinate-dependent mass and photon-photon interaction. The energy functional of the trapped 2D photon BEC in an optical microcavity, filled by a dye solution, can be represented as

E[ψ] = ∫ [ -(ħ c/(2π√(ε))) ψ^*(𝐫) ∇_𝐫·( L(𝐫) ∇_𝐫 ψ(𝐫) ) + ( πħ c/(√(ε) L(𝐫)) - μ ) |ψ(𝐫)|^2 + (g(𝐫)/2) |ψ(𝐫)|^4 ] d𝐫,

where ψ(𝐫) is the wave function of the photon condensate, μ is the chemical potential of the cavity photons, determined by the laser pumping, and g(𝐫) is the photon-photon coupling parameter, corresponding to the photon-photon interaction through the dye molecules. The form of the functional, presented in Eq. (<ref>), implies an essential dependence of the microcavity width L(𝐫) as a function of 𝐫. This leads to the effect of a spatially variable effective mass m_ph(𝐫) = πħñ√(ε)/[L(𝐫)c]. Besides, the dependence L(𝐫) leads to the confining potential V(𝐫) = πħ c/(√(ε) L(𝐫)), and also to the spatial dependence of the photon-photon coupling parameter g(𝐫). An energy functional similar to Eq. (<ref>) was used for a particle with a coordinate-dependent mass in Ref. Gevorkyan. Let us mention that the energy functional, presented by Eq. (<ref>), contains the coordinate-dependent coupling parameter g(𝐫), which was not under consideration in Ref.
Gevorkyan.

Let us mention that, taking into account pumping and losses of photons in a microcavity, and assuming that they are not very large, it is reasonable to expect that the spatial distribution of photons will not differ much (at least, qualitatively) from the spatial distribution of photons obtained from the minimum of the energy functional (<ref>) for the system of photons in thermodynamical equilibrium. As is shown by the numerical solution of the generalized Gross-Pitaevskii equation with pumping and losses in Refs. Berloff,BKKL, small pumping and losses for condensates of bosons lead to only a non-essential quantitative change of the distribution of the condensate profile in the trap. In the opposite case, large pumping and losses exceeding some critical values result in quantum turbulence phenomena in the system, which manifest themselves in breaking the continuous condensate density distribution and in pattern formation <cit.>.

The variation of the energy functional (<ref>) with respect to ψ^*(𝐫) gives the following equation for the wave function ψ(𝐫) of the 2D photon condensate in microcavity traps:

-(ħ c/(2π√(ε))) ∇_𝐫·( L(𝐫) ∇_𝐫 ψ(𝐫) ) + ( πħ c/(√(ε) L(𝐫)) - μ ) ψ(𝐫) + (g(𝐫)/2) |ψ(𝐫)|^2 ψ(𝐫) = 0.

Following the procedure of the derivation of the Gross-Pitaevskii equation for a cavity photon BEC, one obtains the photon-photon coupling parameter g(𝐫). The coordinate-dependent photon-photon coupling parameter g(𝐫) is given by

g(𝐫) = πħ c A/(√(ε) L(𝐫)),

where the parameter A determines the strength of the photon-photon coupling and depends on the properties of the medium through which the photons interact (see also Ref. Szymanska). Due to the mirrors' axial symmetry, the wave function ψ(𝐫) of the 2D photon condensate has the axial symmetry. Since in the stationary nonrotating BEC the angular momentum equals zero, one can rewrite Eq. (<ref>) in polar coordinates as

-(ħ c/(2π√(ε))) ( L(r) d^2/dr^2 + (dL(r)/dr) d/dr + (L(r)/r) d/dr ) φ(r) + ( πħ c/(√(ε) L(r)) - μ ) φ(r) + (g(r)/2) |φ(r)|^2 φ(r) = 0,

where φ(r) is the radial component of the condensate wave function ψ(𝐫). We consider a trapped Bose gas with a fixed number of particles in the condensate. In the general case, the density profile of the photon BEC, n(r) = |φ(r)|^2, can be obtained by the numerical solution of Eq. (<ref>) for a given function L(r). In this paper, we focus on obtaining an analytical expression for the profile of the microcavity photon BEC. For a large number of condensate photons <cit.>, one can use for this profile the Thomas-Fermi approximation <cit.>, analogously to the applicability of the Thomas-Fermi approximation for other physical realizations of bosons. The conditions of applicability of the Thomas-Fermi approximation are discussed in Appendix <ref>. In the Thomas-Fermi approximation <cit.>, neglecting the gradient terms acting on the condensate wave function in Eq. (<ref>), and assuming a slowly-varying width of the cavity, for a chemical potential μ satisfying the condition

μ > πħ c/(√(ε) L(r)),

one gets

πħ c/(√(ε) L(r)) - μ + (g(r)/2) n(r) = 0,

where n(r) = |ψ(𝐫)|^2 is the condensate density. As follows from Eq. (<ref>), n(r) is a slowly varying function, defined by the slowly varying function L(r):

n(r) = 2(μ - πħ c/[√(ε) L(r)])/g(r) = (2μ - 2 m_ph(r) c^2/ε)/g(r).

Therefore, the photonic BEC exists only if the laser pumping provides a chemical potential μ that satisfies the condition given by Eq. (<ref>). For an axially symmetrical trap, the maximal radius of the BEC spot r_0 is defined by μ = πħ c/(√(ε) L(r_0)).
The corresponding value ofr_0 is valid at r_0<R, where R is the radius of thetrap, which is defined by the shape of the mirrors. § THE NUMBER OF PARTICLES IN A CONDENSATEWe assume that the axially symmetrical trap has a harmonic shape. As it is shown inRef. Klaers_Nf, when the distance from the axis of the microcavity to the mirror is essentially smaller than the radius of the spherical mirror, the harmonic approximation for the shape of mirrors forming a microcavity is valid. Let us derive the expression for the chemical potential μ for the harmonic trap. For the harmonic trap V_0 + γ r^2/2 = πħ c/[√(ε)L(r)] we have L(r) = πħ c/[√(ε)(V_0 + γ r^2/2)], where V_0 = πħ c/[√(ε)L(r=0)],and γ istheconstant, determining the curvature of the harmonic trap.In the approximation, applied for theharmonic trap,γ isdefined as γ = m_1Ω^2, where m_1 = 6.7 × 10^-36 kg is the cavity effective photon massat r=0, givenin Ref. natcom, andΩ is thefrequency of the harmonic trap, defined as Ω =c √(2/(L(r=0) R_m)) <cit.>.The radius of the curvature of the mirrors R_m is related to the trapping frequency Ω as <cit.> R_m = 2c^2/L(r=0)Ω^2 .Therefore, for the harmonic trap, one obtains m_ph (r) = ε/c^2(V_0 + γ r^2/2) , and g(r) =A (V_0 + γ r^2/2). For the density profile of the photonic BEC at r<r_0 from Eq. (<ref>) one gets:n(r) = 2/A(μ/V_0 + γ r^2/2 - 1 ). The normalization condition for the densityprofile of the photonic BEC N_BEC = ∫ n(r) d 𝐫 and Eq. (<ref>) lead to N_BEC = 4πμ/Aγln[ 1+ γ r_0^2/2V_0] - 2π r_0^2/A. We find the equation for r_0 using Eq. (<ref>) from the condition n(r_0)=0: μ = V_0 + γ r_0^2/2 , which results in r_0 = √(2(μ - V_0)/γ). For the harmonic trap, the chemical potential μ, corresponding to the existence of the BEC, has to satisfy to the condition, which follows from Eq. (<ref>): μ≥V_0 + γ r^2/2 , where r≤ r_0 (see Eq. (<ref>)). Substituting Eq. (<ref>) into Eq. (<ref>), we getthe following expression, which connects the number of photons in BEC N_BEC withthe chemical potential μ and other parameters of the system: N_BEC = 4πμ/Aγln[μ/V_0]- 4π(μ - V_0)/Aγ = 4πμ/Aγ[ln[μ/V_0]+ V_0/μ - 1 ].Eq. (<ref>) will be used to calculate the spatial condensate density profile, applying Eq. (<ref>) (see Sec. <ref>). § DEPENDENCE OF THE CONDENSATE PARAMETERS ON THE GEOMETRY OF THE TRAPThe radius of the BEC spot r_0is determined by the number of photons in BEC N_BEC and the parameter of strength ofphoton-photon coupling A. However, since the photon-photon interaction strength is currently not well known, we obtain the parameter of strength of photon-photon coupling A, using the experimental results <cit.>. The experiment <cit.> was maintained at the finite temperature T= 300K, when the radius of the photon spot is different from the radius of BEC spot, while at T =0 K, assuming almost all photons belong to BEC, the radii of the photon spot and BEC spot are equal.For our calculations in the framework of the Thomas-Fermi approximation at T= 0 K, we assume that the radius of the photon spot r_0 corresponds by the order of magnitude tothe one, reported in the experiment <cit.>. The parameter of strength of photon-photon coupling Awas estimated by substituting r_0 = 20 μ m, andL(r=0) =1.7 × 10^-6 m, γ = 7.929 × 10^-13 J/m^2, N_BEC = 1.7 × 10^5, ε = 2.045 from Ref. natcom into Eqs. 
Then, one obtains A = 2.87 × 10^-5 μm^2. According to the experiment, at T = 300 K the BEC exists at N_BEC > N_c, where N_c = 8.5 × 10^4 <cit.>. The experiments have been performed for N_BEC in the range from 3 × 10^4 up to 5.5 × 10^5 <cit.>. For our calculations we use N_BEC = 1.7 × 10^5. While for the experimental parameters <cit.>, implying Ω = 2π × 36.5 GHz and γ = 3.524 × 10^-13 J/m^2, the Thomas-Fermi approximation is not applicable, we use for our calculations Ω = 2π × 54.75 GHz and γ = 7.929 × 10^-13 J/m^2, where the Thomas-Fermi approximation is valid, as demonstrated in Appendix <ref>. Let us mention that using Ω = 2π × 54.75 GHz, larger than the value used in Ref. Klaers_Nature, corresponds to a slightly smaller radius R_m of the curvature of the mirrors than in Ref. Klaers_Nature. Thus, for the parameters used for our calculations, we have R_m = 0.444 m. In Ref. natcom, mirrors of the radius of curvature R_m = 1 m have been used. The advantage of using mirrors of smaller radius with higher trapping frequencies is the increase of the constant γ, which results in a higher critical temperature of BEC for the same number of photons, because the critical temperature T_BEC^(0) of BEC for a non-interacting Bose gas can be qualitatively estimated as (see, e.g., Ref. Bagnato)

T_BEC^(0) ∼ (ħ/(π k_B)) [6 γ N_BEC/m_ph(r=0)]^(1/2),

where k_B is the Boltzmann constant. Thus, at a fixed temperature T, the critical number of photons corresponding to the BEC transition is inversely proportional to the constant γ. Using Eq. (<ref>), one obtains at T = 300 K, for the experimental constant γ = 3.524 × 10^-13 J/m^2, the estimate for the critical number of photons for the BEC transition as N_c ∼ 6.694 × 10^3. At the same parameters, for the constant γ = 7.929 × 10^-13 J/m^2 used in our calculations, the critical number of photons for the BEC transition can be estimated as N_c ∼ 2.975 × 10^3. The latter demonstrates that for the microcavity with a smaller radius of the mirrors, implying, according to Eq. (<ref>), larger Ω and, therefore, larger constant γ, the BEC can be achieved for a smaller critical number of photons at the same temperature. Let us mention that using mirrors of the radius R_m = 0.444 m, which corresponds to the constant γ = 7.929 × 10^-13 J/m^2 used in our calculations, does not break the validity of Eq. (<ref>), since this radius of the curvature of the mirrors is much larger than all other length scales in the system under consideration.

Let us mention that the effect of taking into account the spatial dependence of the cavity effective photon mass m_ph(r) and the photon-photon coupling parameter g(r) can be illustrated by the following ratios, calculated with the parameters introduced above: m_ph(r=r_0)/m_ph(r=0) = g(r=r_0)/g(r=0) = 1.004. At the location of the condensate, the change of the cavity width can be illustrated by the ratio L(r=r_0)/L(r=0) = 0.996. For the relatively small radius of the mirror R_m = 0.035 m, implying Ω = 2π × 277.9 GHz and γ = 2.043 × 10^-11 J/m^2, one obtains m_ph(r=r_0)/m_ph(r=0) = g(r=r_0)/g(r=0) = 1.1. In this case of mirrors of such small radius, at the location of the condensate the change of the cavity width can be illustrated by the ratio L(r=r_0)/L(r=0) = 0.909. Therefore, for mirrors of smaller radius, the coordinate dependence of the effective photon mass and the photon-photon coupling parameter is stronger. Note that the formation of traps for the cavity photons, located in convexities of this small radius on a plane mirror, seems to be possible.
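As a numerical cross-check of Eqs. (<ref>) and (<ref>), the following minimal Python sketch (our construction, not from the original paper; SI units; parameter values as quoted above, with V_0 = πħc/(√(ε)L(r=0))) inverts the relation N_BEC(μ), computes the Thomas-Fermi profile, and verifies its normalization. It reproduces a condensate radius r_0 ≈ 20 μm.

```python
import numpy as np
from scipy.optimize import brentq

hbar, c = 1.0546e-34, 2.998e8                 # SI units
eps, L0 = 2.045, 1.7e-6                       # dielectric constant, L(r=0) in m
V0 = np.pi * hbar * c / (np.sqrt(eps) * L0)   # trap bottom V_0 (J)
gamma = 7.929e-13                             # trap curvature gamma (J/m^2)
A = 2.87e-5 * 1e-12                           # coupling A = 2.87e-5 um^2, in m^2
N_BEC = 1.7e5                                 # number of condensed photons

def N_of_mu(mu):
    """Number of condensed photons as a function of mu, Eq. (N_BEC(mu))."""
    return 4 * np.pi * mu / (A * gamma) * (np.log(mu / V0) + V0 / mu - 1.0)

# invert N_BEC(mu) numerically; the BEC requires mu > V0
mu = brentq(lambda m: N_of_mu(m) - N_BEC, V0 * (1 + 1e-12), 2 * V0)
r0 = np.sqrt(2 * (mu - V0) / gamma)           # radius of the BEC spot

# Thomas-Fermi profile and a numerical check of its normalization
r = np.linspace(0.0, r0, 4000)
n = (2.0 / A) * (mu / (V0 + gamma * r**2 / 2) - 1.0)
print(f"r0 = {r0 * 1e6:.1f} um")                                # ~20 um
print(f"2*pi*Int n(r) r dr = {2 * np.pi * np.trapz(n * r, r):.3e}")  # ~1.7e5
```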
Besides, in this paper we are interested in a small radius of the mirrors, because it corresponds to a smaller critical number of photons for the BEC at a fixed temperature, and microscopic traps can be used for quantum technology applications.

According to Eq. (<ref>), the radius of the BEC spot r_0 depends on the parameter of strength of photon-photon coupling A and the number of photons in the BEC, N_BEC. We study the photon-photon interaction strength, assuming the substitution of the experimental values of r_0 and N_BEC into Eq. (<ref>). The parameter of strength of photon-photon coupling A, required to achieve a certain radius of the BEC spot r_0 at a defined total number of photons in the BEC, N_BEC, is shown in Fig. <ref> for various r_0 and N_BEC. According to Fig. <ref>, at fixed N_BEC, larger A is required to achieve larger r_0, and for larger N_BEC, smaller A is required to achieve a fixed spot radius r_0. The justification of the Thomas-Fermi approximation and the slowly varying cavity width approximation for the parameters used in our calculations is discussed in Appendix <ref>.

§ SUPERFLUIDITY OF THE MICROCAVITY PHOTONS

Below we study the collective excitation spectrum and superfluidity of the 2D weakly interacting Bose gas of cavity photons. While at zero temperature the entire system is superfluid, at non-zero temperatures below the KTS phase transition temperature, in a 2D superfluid the normal component appears in the cores of the vortices, with the superfluid circulating around these cores <cit.>. Below we consider the axially symmetrical trap, where the size of the condensate is essentially larger than the average distance between two vortices. The maximal density of the vortices is estimated as n_v^(max) ≲ r_v^-2, where r_v is the size of the core of a vortex. The average distance ξ_av between the vortices cannot be smaller than the size of the core of a vortex r_v: ξ_av ≳ r_v (see, e.g., Ref. Larkin). Since the size of the core of a vortex is of the order of the magnitude of the healing length, r_v ≃ ξ <cit.>, the size of the condensate is larger than the average distance between two vortices when the inequality ξ < r_0 holds, which does not contradict the condition of validity of the Thomas-Fermi approximation, presented by Eq. (<ref>). Therefore, one can estimate the local temperature of the Kosterlitz-Thouless phase transition using the parameters obtained from the Thomas-Fermi approximation.

Now we will analyze the spectrum of the collective excitations in the superfluid of microcavity photons. For small momenta (P = ħ k_⊥), P ≪ √(2 m_ph(r) g(r) n(r)), and small temperatures, the energy spectrum of the quasiparticles, ϵ(P,r), is given by <cit.>

ϵ(P,r) ≈ c_s(r) P,

where c_s(r) is the sound velocity in the Popov approximation <cit.>:

c_s(r) = √(g(r) n(r)/m_ph(r)).

For the harmonic trap, substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), one obtains

c_s(r) = c √(A n(r)/ε) = (c/√(ε)) √( 2 ( μ/(V_0 + γ r^2/2) - 1 ) ).

The dilute photon gas in an optical microcavity, filled by a dye solution, forms a 2D weakly interacting gas of bosons with a pair short-range repulsion, caused by the photon-photon interaction through the dye molecules. Since the spectrum of the weakly interacting gas of cavity photons is a linear sound spectrum, satisfying the Landau criterion of superfluidity <cit.>, superfluidity of the cavity photons can be observed in the trap. Therefore, at small temperatures there are two components in the trapped gas of cavity photons: the normal component and the superfluid component.
We obtain the number of photons in the superfluid component as a function of temperature, applying a procedure similar to the one used for the microcavity exciton polaritons in a 2D trap <cit.>. We define the total number of particles in the superfluid component as N_s ≡ N - N_n, where N_n is the total number of particles in the normal component. N_n is defined analogously to the procedure applied for the definition of the density of the normal component in the infinite system, n_n <cit.>, using the isotropy of the trapped cavity photon gas instead of the translational symmetry of an infinite system. According to the Landau theory of quasiparticles, at finite temperatures the non-interacting quasiparticles contributing to the normal component are characterized by the same energy spectrum as the weakly-interacting particles at zero temperature <cit.>. The Landau theory of quasiparticles is valid at low temperatures, when the number of particles in the normal component is much less than the total number of particles: N_n ≪ N. The temperatures where our approach is applicable must be much smaller than the critical phase transition temperatures; therefore, the Landau theory of quasiparticles is valid at these temperatures. Our estimations using Eq. (<ref>) show that the possible transition temperatures can even exceed room temperature for realistic experimental parameters.

Assuming an axially symmetric 2D trap for the microcavity photons, we imagine that a "gas of quasiparticles" rotates in the liquid in the plane perpendicular to the axis of the trap with some small macroscopic angular velocity ν. In this case, the distribution function of the gas of quasiparticles can be obtained from the distribution function of the gas at rest by substituting for the energy spectrum of the quasiparticles ϵ(P) - 𝐌·ν, where 𝐌 = 𝐫 × 𝐏 is the angular momentum of the particle. Assuming Pr/ħ ≫ 1, we apply the quasiclassical approximation for the angular momentum: M ≈ Pr and ϵ(M,r) = c_s P = r^-1 c_s(r) M. The total angular momentum in the trap per unit area, 𝐌_tot(r), is given by

𝐌_tot(r) = ∫ (d^2M/(2πħ r)^2) 𝐌 n_B(ϵ(r,M) - 𝐌·ν),

where we assume that at small temperatures the quasiparticles are noninteracting and are described by the Bose-Einstein distribution function n_B(ϵ) = (exp[ϵ/(k_B T)] - 1)^-1. For small angular velocities, n_B(ϵ - 𝐌·ν) can be expanded in terms of 𝐌·ν. Then, in the linear approximation, we get

𝐌_tot(r) = -∫ (d^2M/(2πħ r)^2) 𝐌 (𝐌·ν) ∂n_B(ϵ)/∂ϵ.

Assuming that only quasiparticles contribute to the total angular momentum, we define the density of the normal component n_n(r,T) by M_tot(r) = n_n(r,T) M_0(r), where M_0(r) = m_ph(r) r ν is the angular momentum of one quasiparticle. From Eq. (<ref>), the local coordinate-dependent density of the normal component is obtained as

n_n(r,T) = 3ζ(3) k_B^3 T^3/(2πħ^2 c_s^4(r) m_ph(r)) = 3ζ(3) k_B^3 L(r) c T^3/(4π^2 ħ^3 √(ε) c_s^4(r)).

Let us mention that the density of the normal component n_n(r,T) does not depend on the angular velocity of rotation ν, because n_n(r,T) is a linear response of the total angular momentum in the trap per unit area to the external angular velocity. Hence, n_n(r,T) is determined only by the equilibrium properties of the system. The temperature dependence of the local density of the superfluid component n_s(r,T) is given by n_s(r,T) = n(r) - n_n(r,T), where n(r) is the profile of the total photon density, which almost does not change at low temperatures. Assuming that at low temperatures the majority of photons belong to the BEC, and substituting Eqs.
(<ref>) and (<ref>) into Eq. (<ref>), we obtain the temperature dependence of the local coordinate-dependent density of the superfluid component:

n_s(r,T) = 2(μ - πħ c/(√(ε) L(r)))/g(r) - 3ζ(3) k_B^3 c L(r) T^3/(4π^2 ħ^3 √(ε) c_s^4(r)).

For the total number of photons in the normal component we obtain

N_n(T) = 2π ∫_0^r_0 n_n(r,T) r dr = ∫_0^r_0 (3ζ(3) k_B^3 T^3/(ħ^2 c_s^4(r) m_ph(r))) r dr,

where ζ(z) is the Riemann zeta function (ζ(3) ≃ 1.202), k_B is the Boltzmann constant, and we assume that at low temperatures almost all photons are in the condensate. For the total number of photons in the superfluid component we get

N_s(T) = N - N_n(T) = N - ∫_0^r_0 (3ζ(3) k_B^3 T^3/(ħ^2 c_s^4(r) m_ph(r))) r dr.

We assume that the width of the cavity L(r) depends on the coordinate very slowly on scales of the order of the mean separation between the vortices (but the total change of L(r) in the trap is essential). The superfluid-normal phase transition in the 2D system is the Kosterlitz-Thouless transition <cit.>, and the local coordinate-dependent temperature of this transition, T_c, in a two-dimensional microcavity photon system is determined by the equation <cit.>:

T_c(r) = πħ^2 n_s(r,T_c(r))/(2 k_B m_ph(r)).

We can use Eq. (<ref>) only in the framework of the quasilocal approximation, assuming very slow changes of L(r), such that the characteristic length of the changes in L(r) is much less than the average distance between the vortices in the superfluid. Substituting Eq. (<ref>) for the density n_s(r,T) of the superfluid component into Eq. (<ref>), we obtain an equation for the local Kosterlitz-Thouless transition temperature T_c(r). The solution of this equation is

T_c(r) = [ ( 1 + √( (32/27)(m_ph(r) k_B T_c^0(r)/(πħ^2 n(r)))^3 + 1 ) )^(1/3) - ( √( (32/27)(m_ph(r) k_B T_c^0(r)/(πħ^2 n(r)))^3 + 1 ) - 1 )^(1/3) ] T_c^0(r)/2^(1/3),

where T_c^0(r) is the local temperature at which the superfluid density vanishes in the mean-field approximation at the points with the coordinate vector r (i.e., n_s(r,T_c^0(r)) = 0),

T_c^0(r) = (1/k_B) ( 2πħ^2 n(r) c_s^4(r) m_ph(r)/(3ζ(3)) )^(1/3).

Eqs. (<ref>) and (<ref>) generalize the results of Ref. BKZ for a coordinate-dependent particle mass and photon-photon interaction.

§ RESULTS AND DISCUSSION

Since T_c(r) depends on the coordinate r, at fixed finite temperatures T above the minimal possible critical temperature T_c^min = 0 K at the edge of the BEC and below the maximal possible critical temperature T_c^max, i.e., 0 K < T < T_c^max, there is the superfluid (S) phase in the region of the system where T < T_c(r) (in the S phase the superfluid component coexists with the normal component), and the normal (N) phase in the other regions of the system, where T > T_c(r) (with only the normal component). At zero temperature, the entire system is superfluid, and at temperatures above T_c^max the entire system is normal. Since T_c(r) is a decreasing function of m_ph(r), and m_ph(r) is a decreasing function of the width of the microcavity L(r), T_c(r) increases if L(r) increases. If we consider the axially symmetrical trap, where L(r) is a decreasing function of r, then T_c(r) decreases when r increases. Therefore, for the axially symmetrical trap we have T_c^max = T_c(0) ≡ T_c(r=0). If we consider a temperature T_1 in the range 0 K < T_1 < T_c(r=0), then we have T_c(r_1) = T_1, where T_c(r_1) is the critical temperature corresponding to the width of the cavity L(r_1), which can be found from the solution of Eq. (<ref>) with respect to L(r), substituting T_c = T_1.
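A short numerical sketch of Eqs. (<ref>)-(<ref>), continuing the script above (so mu, r0, and the constants are reused; the helper-function names are ours), evaluates the local sound velocity, the mean-field temperature T_c^0(r), and the local Kosterlitz-Thouless temperature T_c(r). For the parameter values used here, T_c(r=0) comes out well above 300 K, so at room temperature a superfluid core of radius r_1 < r_0 exists.

```python
from scipy.special import zeta
kB = 1.381e-23
z3 = zeta(3)

def V(r):    return V0 + gamma * r**2 / 2         # harmonic trap potential
def n(r):    return (2.0 / A) * (mu / V(r) - 1)   # TF condensate density
def m_ph(r): return eps * V(r) / c**2             # local effective photon mass
def cs(r):   return c * np.sqrt(A * n(r) / eps)   # local sound velocity

def Tc0(r):  # mean-field temperature at which n_s vanishes
    return (2 * np.pi * hbar**2 * n(r) * cs(r)**4 * m_ph(r) / (3 * z3))**(1/3) / kB

def Tc(r):   # local Kosterlitz-Thouless temperature
    y = m_ph(r) * kB * Tc0(r) / (np.pi * hbar**2 * n(r))
    s = np.sqrt(32/27 * y**3 + 1)
    return ((1 + s)**(1/3) - (s - 1)**(1/3)) * Tc0(r) / 2**(1/3)

# radius r1 of the superfluid spot at T = 300 K: solve Tc(r1) = 300 K
r1 = brentq(lambda r: Tc(r) - 300.0, 1e-9, r0 * (1 - 1e-9))
print(f"Tc0(0) = {Tc0(0):.0f} K, Tc(0) = {Tc(0):.0f} K, r1 = {r1 * 1e6:.1f} um")
```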
The corresponding r_1 is the radius of the spot with the superfluid and normal components inside and only the normal component outside, filling the ring of width r_0 - r_1 (see the insert in Fig. <ref>). While the total density and the superfluid density are monotonously decreasing functions of r, due to the increase of the effective photon mass with the increase of r, the normal density is a non-monotonous function of r. At 0 < r < r_1, n_n(r) = n(r) - n_s(r) increases with the increase of r due to the decrease of n_s(r), but at r_1 ≤ r ≤ r_0, n_n(r) = n(r) is a decreasing function of r. The profiles of the total concentration n(r) and the concentrations of the normal, n_n(r), and superfluid, n_s(r), components at the temperature T = 300 K are shown in Fig. <ref>. The spatial distributions of the superfluid and normal components in the trap are demonstrated in the insert in Fig. <ref>, where the gradient of the colors reflects the local concentrations of the superfluid and normal components. According to Eqs. (<ref>) and (<ref>), the local mean-field phase transition temperature T_c^0(r) and the local Kosterlitz-Thouless transition temperature T_c(r) decrease with the increase of the distance r from the center of the trap, and both T_c^0(r) and T_c(r) vanish at the edge of the trap. Everywhere inside the trap, the local mean-field phase transition temperature T_c^0(r) is greater than the local Kosterlitz-Thouless transition temperature T_c(r), and the difference between T_c^0(r) and T_c(r) decreases with the increase of the distance r from the center of the trap.

The superfluidity of cavity photons can be observed experimentally analogously to the system of microcavity polaritons: (1) by observing the photon condensate flow, induced by an initial gradient of density, past obstacles (impurities), where the superfluid flow does not experience any scattering at the obstacles, and (2) by observing the quantized vortices in the system of cavity photons. The experimental evidence for superfluid motion of exciton polaritons in a semiconductor microcavity was reported in Ref. Amo. The superfluidity of microcavity exciton polaritons was studied in terms of the Landau criterion and manifested itself as the suppression of scattering from defects when the flow velocity was slower than the speed of sound in the fluid <cit.>. We suggest generalizing the methods used to observe the superfluidity of microcavity polaritons to observe the superfluidity of the photons in an optical cavity, filled with a molecular medium, that are excited by laser light.

§ PROPOSED EXPERIMENT FOR MEASURING THE DISTRIBUTION OF THE LOCAL DENSITY OF A PHOTON BEC

We propose the following experiment relevant to the BEC and superfluidity of trapped microcavity photons. This experiment is based on the observation of the local distribution of the photons escaping the optical microcavity. A fiber photodetector, formed by a single fiber probe attached to a piezo scanner, can be moved above different regions of the mirrors at a distance from the mirrors of about several microns, which is much less than the size of the BEC. Such photodetectors can register the local intensity of the lines of the angular distribution of light, which is proportional to the number of photons escaping from the region of the microcavity nearest to the detector with a given angle α between the momentum of the photons escaping the optical microcavity and the normal to the microcavity.
In the absence of a photon flow, the average angle between the momentum of the photons escaping the optical microcavity and the normal to the microcavity is α̅ = 0, because the angular distribution is symmetrical. The photons escaping from the BEC inside the circle of radius r_1 form a sharp bright spot with a very narrow line, registered by the fiber photodetector, because the photons from the BEC are characterized by an (almost) zero momentum component, P = 0, in the (x,y) plane, normal to the axis of the cavity. Hence, these photons from the BEC escape in the direction normal to the plane of the microcavity. The photons escape from the non-condensate with various P, characterized by various angles α between the momentum and the normal to the microcavity. Hence, the photons escaping from the non-condensate form a broadened line. Therefore, the photons escaping from the BEC region inside the circle of radius r_1 will form a very narrow line of very high intensity, corresponding to the BEC, and this narrow line will be surrounded by a broad lower-intensity line, corresponding to the non-condensate. The photons escaping from the ring of the inner radius r_1 and the outer radius r_0 form only the broad lower-intensity line, because there is no BEC inside this ring. The scheme of this possible experiment is presented in Fig. <ref>. A fiber-based detector located near the mirror can be scanned along the surface to register the spatial distribution of the photons escaping the microcavity, or the detectors can also be located inside the microcavity if the corresponding change of the microcavity quality factor is negligible.

The quasiparticles, forming the local normal component, contribute to the local line broadening, which affects the local average deviation of the tangent of the angle between the path of the escaping photon and the normal to the microcavity, defined as

Δtan α(r) = √(⟨P^2(r)⟩)/p_z(r).

In Eq. (<ref>), p_z(r) = πħ/L(r) is the momentum component in the z direction along the axis of the cavity, and the average squared momentum component in the (x,y) plane, ⟨P^2⟩, is given by

⟨P^2(r)⟩ = (1/n(r)) ∫ P^2 n_B(ϵ) d^2P/(2πħ)^2,

where ϵ = c_s(r)P for 0 ≤ r < r_1. In Eq. (<ref>) it was assumed that the broadening of the photon angle distribution for a weakly interacting photon gas is formed only by the contribution of the quasiparticles. For the region of the trap with the superfluid phase (0 ≤ r < r_1), after the integration in Eq. (<ref>) one obtains:

⟨P^2(r)⟩ = Γ(4) ζ(4) k_B^4 T^4/(2πħ^2 c_s^4(r) n(r)),

where Γ(n) is the gamma function (Γ(4) = 6) and ζ(z) is the Riemann zeta function (ζ(4) ≃ 1.0823). The broadening of the photon angle distribution for the region 0 ≤ r < r_1 (where the superfluid component exists) is formed by the contribution of the quasiparticles according to Eq. (<ref>). Substituting Eq. (<ref>) into Eq. (<ref>), one obtains the following expression:

Δtan α(r,T) = (3ζ(4)/(π n(r)))^(1/2) k_B^2 T^2/(ħ c_s^2(r) p_z(r)).

Substituting Eq. (<ref>) into Eq. (<ref>), we get

n_n(r,T) = (ζ(3)/4) (3 n(r)/(ζ(4) π ε))^(1/2) (c k_B T/(ħ c_s^2(r))) Δtan α(r,T).

Therefore, one obtains the profile of the density of the normal component, n_n(r), through an experimental measurement of the profile of Δtan α(r,T). The profiles of Δtan α in the trap for the total number of photons N = 10^5 at T = 300 K, for the region where the superfluid component exists, are presented in Fig. <ref> for different parameters of strength of photon-photon coupling A and radii of the photon spot r_0.
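Continuing the numerical sketches above (mu, r0, r1, and the helper functions n(r), cs(r), and V(r) are reused, together with the imported zeta), the broadening profile of Eq. (<ref>) can be tabulated directly; for example:

```python
z4 = zeta(4)

def L_cav(r):  return np.pi * hbar * c / (np.sqrt(eps) * V(r))  # cavity width L(r)
def p_z(r):    return np.pi * hbar / L_cav(r)                   # p_z = pi*hbar/L(r)

def dtan_alpha(r, T):
    """Delta tan(alpha) in the superfluid region, 0 <= r < r1."""
    return np.sqrt(3 * z4 / (np.pi * n(r))) * (kB * T)**2 / (hbar * cs(r)**2 * p_z(r))

for x in [0.0, 0.25, 0.5]:
    print(f"r = {x * r1 * 1e6:5.2f} um : dtan(alpha) = {dtan_alpha(x * r1, 300.0):.3f}")
```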
According to Fig. <ref>, Δtan α increases up to the edge of the superfluid spot r_1, and the latter increases with the increase of A and r_0. The dependence of the profile of Δtan α on the total number of photons N and the radius of the photon spot r_0 at T = 300 K is presented in Fig. <ref>. In Fig. <ref>, N is the total number of photons, which equals the number of photons in the BEC, N_BEC, at T = 0 K, given by Eq. (<ref>). According to Fig. <ref>, Δtan α decreases with the increase of N at a fixed radius of the photon spot r_0. According to Figs. <ref> and <ref>, Δtan α always increases with r at fixed r_0, N, A, and T, because the concentration of the normal component always increases with r in the region 0 ≤ r < r_1 according to Fig. <ref>, and only the quasiparticles from the normal component contribute to Δtan α. It follows from Figs. <ref> and <ref> that, by measuring Δtan α experimentally, one can obtain the parameter of strength of photon-photon coupling A and, therefore, one can study the photon-photon interaction.

§ CONCLUSIONS

In conclusion, we considered the BEC of a trapped two-dimensional gas of photons with a coordinate-dependent effective mass and photon-photon coupling parameter in an optical microcavity, filled by a dye solution, with the photons being confined due to the coordinate-dependent width of the optical microcavity. The coordinate dependence of the cavity photon effective mass and the photon-photon coupling parameter describes the photons in a cavity with mirrors of smaller radius with higher trapping frequency, which provides BEC and superfluidity for a smaller critical number of photons at the same temperature. The photon condensate density profile was obtained in the Thomas-Fermi approximation. The condition for the chemical potential, corresponding to the trapped photonic BEC, was formulated. The local coordinate-dependent densities of the superfluid and normal components of the trapped photon system were obtained at a fixed temperature. The profiles of the superfluid and normal regions were presented at a fixed temperature. The profiles of the local mean-field phase transition temperature T_c^0(r) and the local Kosterlitz-Thouless transition temperature T_c(r) for trapped microcavity photons were derived. The experiments to measure the density profiles for the normal and superfluid components were suggested.

The authors are thankful to V. Menon for the useful discussion. The work was supported by PSC CUNY under Grant No. 67577-00 45. Yu. E. L. was supported by the Program of Basic Research of the National Research University HSE.

§ VALIDITY OF THE THOMAS-FERMI APPROXIMATION FOR THE PARAMETERS OF THE CALCULATIONS

In this appendix, we discuss the validity of the Thomas-Fermi approximation and the condition of negligibility of the derivative dL(r)/dr for the parameters of the calculations. We assume that the photon-photon interaction is so weak that the mean-field approximation is valid. The local density approximation (LDA) is applicable when the characteristic condensate inhomogeneity length, which is the characteristic size of the condensate r_0, is much larger than all parameters of the problem with the dimensionality of length, such as the healing length ξ. When the mean-field approximation and the local density approximation are applicable, the Thomas-Fermi approximation, which we are using, is valid. Let us justify the validity of the Thomas-Fermi approximation that is used above. The condition of applicability of the Thomas-Fermi approximation implies neglecting all terms with derivatives in Eq. (<ref>).
Negligibility of these terms, except the term with dL(r)/dr, can be achieved if the radius of the photon BEC spot r_0 is larger than the characteristic length of the decrease of the condensate wave function φ(r), which is the healing length ξ. The healing length ξ corresponds to the characteristic length of the changes of the condensate wave function in the Gross-Pitaevskii equation and is defined as <cit.>

ξ = ħ/√(2 m_ph g n̅),

where n̅ = N_BEC/(π r_0^2) is the average 2D concentration of the photons. Besides, negligibility of the term containing dL(r)/dr can be achieved under the additional condition that the change of the microcavity width, ΔL = L(r=0) - L(r=R), is sufficiently smaller than the transverse size of the microcavity R. The Thomas-Fermi approximation is applicable if the size of the condensate r_0 is much larger than the healing length ξ. Using N_BEC = π n̅ r_0^2, the inequality ξ < r_0 turns into the inequality

N_BEC > πħ^2/(2 m_ph g).

Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), and assuming r=0 (which increases the r.h.s. of Eq. (<ref>)), one obtains the following estimate for N_BEC when the Thomas-Fermi approximation is applicable:

N_BEC > L^2(r=0)/A.

For the parameters used for our calculations, from Eq. (<ref>) one obtains N_BEC > 10^5. Therefore, for the used value N_BEC = 1.7 × 10^5, the Thomas-Fermi approximation can be applied. Substituting at r=0 Eqs. (<ref>) and (<ref>) into Eq. (<ref>), and assuming n̅ = N_BEC/(π r_0^2), one obtains

ξ = r_0 L(r=0)/√(π A N_BEC),

which, for the parameters used in our calculations, results in ξ = 8.691 × 10^-6 m. Therefore, for the system under consideration, the inequality ξ < r_0 holds.

Another method to check the validity of the Thomas-Fermi approximation is to substitute the condensate wave function in the form φ(r) = φ_0(r) + δ(r) into Eq. (<ref>), where φ_0(r) is the condensate wave function in the Thomas-Fermi approximation, satisfying Eq. (<ref>), and δ(r) is the small perturbation to the condensate wave function, caused by the deviation from the Thomas-Fermi approximation. Assuming that the derivatives of δ(r) vanish, one obtains

F(r) = -(ħ c/(2π√(ε))) ( L(r) d^2/dr^2 + (dL(r)/dr) d/dr + (L(r)/r) d/dr ) φ_0(r) = -(ħ c/(2π√(ε))) D(r),

where φ_0(r) = √(n(r)), n(r) is given by Eq. (<ref>), and

D(r) = D_1(r) + D_2(r) + D_3(r),

where

D_1(r) = (L(r)/(2φ_0(r))) [ d^2n(r)/dr^2 - (1/(2n(r))) (dn(r)/dr)^2 ],

with

dn(r)/dr = -2μγ r/(A (V_0 + γ r^2/2)^2),
d^2n(r)/dr^2 = 2μγ (3γ r^2/2 - V_0)/(A (V_0 + γ r^2/2)^3),

and

D_2(r) = μ L(r) γ^2 r^2/(A φ_0(r) (V_0 + γ r^2/2)^3),
D_3(r) = -μ L(r) γ/(2A φ_0(r) (V_0 + γ r^2/2)^2).

Substituting the expansion φ(r) = φ_0(r) + δ(r) and Eq. (<ref>) into Eq. (<ref>), applying Eq. (<ref>) for φ_0(r), and keeping only the terms linear in δ(r), one obtains δ(r) in the following form:

δ(r) = -F(r)/( πħ c/(√(ε) L(r)) - μ + (3/2) g(r) n(r) ).

Substituting the parameters used for our calculations into Eq. (<ref>), one obtains |δ(r=0)/φ_0(r=0)| = 0.071 ≪ 1, so we conclude that the Thomas-Fermi approximation is valid for the parameters used for our calculations. At the distance r = 7 μm from the center of the trap, this ratio becomes |δ(r)/φ_0(r)| = 0.1.
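The two validity conditions above can be evaluated directly, again reusing A, N_BEC, L0, and r0 from the first sketch:

```python
# Thomas-Fermi condition: N_BEC > L(0)^2 / A
print(f"N_BEC = {N_BEC:.2e}  >  L0^2/A = {L0**2 / A:.2e}")   # 1.7e5 > ~1.0e5

# healing length xi = r0 * L(0) / sqrt(pi * A * N_BEC)
xi = r0 * L0 / np.sqrt(np.pi * A * N_BEC)
print(f"xi = {xi * 1e6:.2f} um  <  r0 = {r0 * 1e6:.1f} um")  # ~8.7 um < ~20 um
```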
Let us mention that we present the ratio δ/φ_0 at the center of the trap, since closer to the edges of the spot it becomes close to one, due to the well-known fact that near the edges of the spot the Thomas-Fermi approximation is not valid, and the Gross-Pitaevskii equation has to be solved <cit.>. While in the framework of the Thomas-Fermi approximation the condensate density vanishes at the edges of the spot, the solution of the Gross-Pitaevskii equation demonstrates an asymptotic decrease of the condensate profile <cit.>, due to the essential role of the spatial derivatives of the condensate wave function.

Our assumption about the slowly-varying width of the cavity corresponds to the inequality ΔL ≪ R, where ΔL is the change of the width of the cavity, defined as

ΔL = L(r=0) - L(r=R) = πħ c/(√(ε) V_0) - πħ c/(√(ε)(V_0 + γ R^2/2)).

In Eq. (<ref>), R is the radius of the microcavity, which is R = 0.5 mm <cit.>. For the parameters used for our calculations, Eq. (<ref>) results in ΔL = 1.204 × 10^-6 m.

§ REFERENCES

[Pitaevskii] L. Pitaevskii and S. Stringari, Bose-Einstein Condensation (Clarendon Press, Oxford, 2003).
[Klaers_Nature] J. Klaers, J. Schmitt, F. Vewinger, and M. Weitz, "Bose-Einstein condensation of photons in an optical microcavity," Nature 468, 545 (2010).
[Klaers_Nf] J. Klaers, F. Vewinger, and M. Weitz, "Thermalization of a two-dimensional photonic gas in a white wall photon box," Nature Physics 6, 512 (2010).
[Nyman] J. Marelic and R. A. Nyman, "Experimental evidence for inhomogeneous pumping and energy-dependent effects in photon Bose-Einstein condensation," Phys. Rev. A 91, 033813 (2015).
[natcom] T. Damm, J. Schmitt, Q. Liang, D. Dung, F. Vewinger, M. Weitz, and J. Klaers, "Calorimetry of a Bose-Einstein condensed photon gas," Nat. Commun. 7, 11340 (2016).
[Einstein] A. Einstein, "Quantentheorie des einatomigen idealen Gases," Sitzungsberich. Preussisch. Akad. Wissenschaft. 1, 3 (1925).
[Snoke_Girvin] D. W. Snoke and S. M. Girvin, "Dynamics of Phase Coherence Onset in Bose Condensates of Photons by Incoherent Phonon Emission," J. Low. Temp. Phys. 171, 1 (2013).
[Stoof] E. C. I. van der Wurff, A. W. de Leeuw, R. A. Duine, and H. T. C. Stoof, "Interaction Effects on Number Fluctuations in a Bose-Einstein Condensate of Light," Phys. Rev. Lett. 113, 135301 (2014).
[Kirton1] P. Kirton and J. Keeling, "Nonequilibrium Model of Photon Condensation," Phys. Rev. Lett. 111, 100404 (2013).
[Kirton2] P. Kirton and J. Keeling, "Thermalization and breakdown of thermalization in photon condensates," Phys. Rev. A 91, 033826 (2015).
[Kirton3] J. Keeling and P. Kirton, "Spatial dynamics, thermalization, and gain clamping in a photon condensate," Phys. Rev. A 93, 013829 (2016).
[Klaers_2015] J. Schmitt, T. Damm, D. Dung, F. Vewinger, J. Klaers, and M. Weitz, "Thermalization kinetics of light: From laser dynamics to equilibrium condensation of photons," Phys. Rev. A 92, 011602(R) (2015).
[Szymanska] R. A. Nyman and M. H. Szymańska, "Interactions in dye-microcavity photon condensates and the prospects for their observation," Phys. Rev. A 89, 033844 (2014).
[Bagnato] V. Bagnato and D. Kleppner, "Bose-Einstein condensation in low-dimensional traps," Phys. Rev. A 44, 7439 (1991).
[Nozieres] P. Nozières, "Some Comments on Bose-Einstein Condensation," in Bose-Einstein Condensation, A. Griffin, D. W. Snoke, and S. Stringari, Eds. (Cambridge Univ. Press, Cambridge, 1995), pp. 15-30.
[Kosterlitz] J. M. Kosterlitz and D. J. Thouless, "Ordering, metastability and phase transitions in two-dimensional systems," J. Phys. C 6, 1181 (1973); D. R. Nelson and J. M. Kosterlitz, "Universal Jump in the Superfluid Density of Two-Dimensional Superfluids," Phys. Rev. Lett. 39, 1201 (1977).
[Dagvadorj] G. Dagvadorj, J. M. Fellows, S. Matyjaśkiewicz, F. M. Marchetti, I. Carusotto, and M. H. Szymańska, "Nonequilibrium Phase Transition in a Two-Dimensional Driven Open Quantum System," Phys. Rev. X 5, 041028 (2015).
[Elistratov] A. A. Elistratov and Yu. E. Lozovik, "Coupled exciton-photon Bose condensate in path integral formalism," Phys. Rev. B 93, 104530 (2016).
[Gevorkyan] Zh. S. Gevorkyan and Yu. E. Lozovik, "Systems of particles with random mass," J. Phys. C 21, 2181 (1988).
[Berloff] J. Keeling and N. G. Berloff, "Spontaneous rotating vortex lattices in a pumped decaying condensate," Phys. Rev. Lett. 100, 250401 (2008).
[BKKL] O. L. Berman, R. Ya. Kezerashvili, G. V. Kolmakov, and Yu. E. Lozovik, "Turbulence in a Bose-Einstein condensate of dipolar excitons in coupled quantum wells," Phys. Rev. B 86, 045108 (2012).
[arxiv] J. Schmitt, T. Damm, D. Dung, F. Vewinger, J. Klaers, and M. Weitz, "Bose-Einstein Condensation of Photons versus Lasing and Hanbury Brown-Twiss Measurements with a Condensate of Light," Laser Spectroscopy: Proceedings of the XXII International Conference (World Scientific, 2016); arXiv:1606.09015.
[Onsager] L. Onsager, "Statistical Hydrodynamics," Nuovo Cimento Suppl. 6, 279 (1949).
[Feynman] R. P. Feynman, "Application of Quantum Mechanics to Liquid Helium," Prog. Low Temp. Phys. 1, 17 (1955).
[Hohenberg] P. C. Hohenberg and P. C. Martin, "Microscopic Theory of Superfluid Helium," Ann. Phys. 34, 291 (1965).
[Larkin] G. Blatter, M. Y. Feigel'man, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur, "Vortices in high-temperature superconductors," Rev. Mod. Phys. 66, 1125 (1994).
[Voronova] N. S. Voronova and Yu. E. Lozovik, "Excitons in cores of exciton-polariton vortices," Phys. Rev. B 86, 195305 (2012); N. S. Voronova, A. A. Elistratov, and Yu. E. Lozovik, "Detuning-Controlled Internal Oscillations in an Exciton-Polariton Condensate," Phys. Rev. Lett. 115, 186402 (2015).
[Griffin] A. Griffin, "Conserving and gapless approximations for an inhomogeneous Bose gas at finite temperatures," Phys. Rev. B 53, 9341 (1996).
[Abrikosov] A. A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (Prentice-Hall, Englewood Cliffs, N.J., 1963).
[BLS] O. L. Berman, Yu. E. Lozovik, and D. W. Snoke, "Theory of Bose-Einstein condensation and superfluidity of two-dimensional polaritons in an in-plane harmonic potential," Phys. Rev. B 77, 155317 (2008).
[BKZ] O. L. Berman, R. Ya. Kezerashvili, and K. Ziegler, "Superfluidity and collective properties of excitonic polaritons in gapped graphene in a microcavity," Phys. Rev. B 86, 235404 (2012).
[Amo] A. Amo, J. Lefrère, S. Pigeon, C. Adrados, C. Ciuti, I. Carusotto, R. Houdré, E. Giacobino, and A. Bramati, "Superfluidity of polaritons in semiconductor microcavities," Nature Physics 5, 805 (2009).
[Mullin] J. P. Fernández and W. J. Mullin, "The Two-Dimensional Bose-Einstein Condensate," J. Low. Temp. Phys. 128, 233 (2002).
arXiv:1706.09062v1 [cond-mat.mes-hall]: Oleg L. Berman, Roman Ya. Kezerashvili, and Yurii E. Lozovik, "On Bose-Einstein condensation and superfluidity of trapped photons with coordinate-dependent mass and interactions" (2017).
Ligand-concentration sensitivity of a multi-state receptor

Takashi Okada

^1 Theoretical Biology Laboratory, RIKEN, Wako 351-0198, Japan

Biological sensory systems generally operate out of equilibrium, which often leads to their improved performance. Here, we study the sensitivity of ligand concentration for a general receptor model, which is generally in a non-equilibrium stationary state, in the framework of a stochastic diffusion equation. We derive a general formula for the maximum sensitivity. Specifically, the sensitivity is limited universally by the Berg-Purcell limit [Biophys. J., 1977], regardless of whether the receptor is in an equilibrium or non-equilibrium state.

Signal detection in biological sensory systems operates with great accuracy. A major concern regarding biomolecular sensory systems is the fundamental limitation on sensitivity according to the laws of physics. The seminal work by Berg and Purcell <cit.> proved that the sensitivity of receptors detecting diffusing ligands is limited due to fluctuations in diffusional processes. Bialek and Setayeshgar <cit.> improved the argument of the Berg-Purcell (BP) limit more precisely by explicitly including ligand-dissociation/binding processes. Following their work, and in conjunction with experimental progress, the physical limitations of sensitivity have attracted increased attention in the field of biophysics <cit.>.

In this Letter, we study the sensitivity of ligand concentrations for completely general receptor dynamics. In previous studies <cit.>, the system of a receptor was assumed to be in thermal equilibrium, and the essential theoretical tool used for the arguments was the fluctuation-dissipation theorem (FDT) <cit.>. However, biological systems are generally out of equilibrium, and many sensory systems utilize free-energy dissipation to improve their performance <cit.>. Here, we do not assume thermal equilibrium and reexamine the physical limitation of sensitivity for general receptor dynamics, which generally admits a non-equilibrium steady state. By explicitly including all relevant noises in the dynamics, we derive a formula for the receptor sensitivity for any single-receptor dynamics. Specifically, we find that no non-equilibrium receptor dynamics improves the sensitivity beyond the BP limit, which complements the results of the previous studies based on the FDT.

We consider a receptor with multiple ligand-binding sites and label the receptor states as m = 1,…,M and the reactions (transitions among receptor states) as r = 1,…,R (Fig. <ref>). We assume that the receptor state jumps from m = α(r) to m = β(r) under the r-th reaction. We introduce the stoichiometric matrix, ν, which is an M × R matrix whose components are given by

ν_m,r = -δ_m,α(r) + δ_m,β(r).

The (deterministic) dynamics of the coupled system of the receptor (at 𝐫 = 0) and the ligand molecules is described by

∂c/∂t = D ∇^2_3d c - δ^(3d)(𝐫) (d/dt) ∑_m l_m n_m,
dn_m/dt = ∑_r ν_m,r k_r n_α(r),

where n_m(t) is the fraction of the m-th receptor state (0 ≤ n_m(t) ≤ 1), and k_r is the rate constant of the r-th reaction. k_r depends on the ligand concentration, c(𝐫=0,t), at the receptor site if r is a ligand-binding reaction (i.e., l_β(r) - l_α(r) > 0). δ^(3d)(𝐫) represents the three-dimensional Dirac delta function. Suppose that the system is in a steady state specified by c(𝐫,t) = c̅ and n_m = n̅_m.
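To make the bookkeeping concrete, the following minimal Python sketch (our construction, not from the paper; 0-based state indices) builds the stoichiometric matrix ν and the steady state n̅ for an arbitrary reaction list (α(r), β(r), k_r), supplementing the rank-deficient rate equations with the normalization Σ_m n_m = 1. It is checked here on the two-state receptor discussed later in the text.

```python
import numpy as np

def steady_state(alpha, beta, k, M):
    """Steady state of dn_m/dt = sum_r nu[m,r] * k[r] * n[alpha[r]],
    with nu[m,r] = -delta(m, alpha[r]) + delta(m, beta[r])."""
    R = len(k)
    nu = np.zeros((M, R))
    for r in range(R):
        nu[alpha[r], r] -= 1.0
        nu[beta[r],  r] += 1.0
    # Jacobian (A)_{m,m'} = sum_r nu[m,r] * k[r] * delta(alpha[r], m')
    A = np.zeros((M, M))
    for r in range(R):
        A[:, alpha[r]] += nu[:, r] * k[r]
    # A is singular (conservation law); replace one row by sum_m n_m = 1
    B, b = A.copy(), np.zeros(M)
    B[-1, :], b[-1] = 1.0, 1.0
    return np.linalg.solve(B, b), nu, A

# two-state receptor: reaction 0 is binding (state 0 -> 1, rate k1 = k1' * c),
# reaction 1 is unbinding (state 1 -> 0, rate k_{-1}); illustrative rate values
k1, km1 = 2.0, 3.0
nbar, nu, A = steady_state(alpha=[0, 1], beta=[1, 0], k=[k1, km1], M=2)
print(nbar)   # [k_{-1}, k1] / (k1 + k_{-1}) = [0.6, 0.4]
```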
n̅_m is determined explicitly as a function of the rate constants, n̅_m({k̅_r}), by solving (<ref>), where the ligand-concentration dependence enters implicitly through k̅_r = k_r|_c=c̅. By linearizing the system around the steady state and including stochastic fluctuations <cit.>, we obtain the following Langevin equations:

∂δc/∂t = D ∇^2_3d δc - δ^(3d)(𝐫) (d/dt) ∑_m l_m δn_m - ∇_3d·𝐉,
dδn_m/dt = ∑_r ν_m,r ( k̅_r δn_α(r) + k̅'_r n̅_α(r) δc(𝐫=0,t) ) + ∑_r ν_m,r ξ_r.

Here, k̅'_r ≡ ∂k_r(c)/∂c|_c=c̅ is nonzero only when the r-th reaction is a ligand-binding reaction. ξ_r represents the noise associated with the r-th reaction, satisfying

⟨ξ_r(t) ξ_r'(t')⟩ = δ_r,r' k̅_r n̅_α(r) δ(t-t'),

and 𝐉(t,𝐫) = (J_x, J_y, J_z) is the diffusional noise, satisfying

⟨J_i(t,𝐫) J_j(t',𝐫')⟩ = 2D c̅ δ_i,j δ(t-t') δ^(3)(𝐫-𝐫').

The term -∇·𝐉 in (<ref>) and (<ref>) can be derived by regarding the diffusional process as a special type of "reaction", where a molecule at a site (in the three-dimensional space) is "produced" from one located at a neighboring site, and by using van Kampen's size expansion <cit.> (see also <cit.>). By applying the Fourier transform to Eqs. (<ref>) and (<ref>), we obtain

-iω ∑_m' ( δ_m,m' - l_m' τ_c ) δn_m' - ∑_r ν_m,r k̅_r δn_α(r) = ∑_r ν_m,r [ k̅'_r n̅_α(r) 𝒥 + ξ_r ],

where

τ_c(ω) ≡ (1/c̅) ∫ (d^3k/(2π)^3) 1/(-iω + D𝐤^2) ≈ Λ/(2π^2 D c̅),
𝒥(ω) ≡ ∫ (d^3k/(2π)^3) (-i𝐤·𝐉)/(-iω + D𝐤^2).

In (<ref>), we have evaluated the integral at low frequency (ω ≪ DΛ^2) by introducing a UV cutoff, Λ, corresponding to the inverse of the receptor size, as in <cit.>. τ_c represents the time scale associated with ligand molecules diffusing around the receptor. 𝒥 represents the effective diffusional noise "felt" by the receptor, satisfying, when ω ≪ DΛ^2,

⟨𝒥(ω) 𝒥(ω')⟩ ≈ 2π (2τ_c c̅^2) δ(ω+ω'),

where we used (<ref>) and (<ref>). For the ligand-concentration sensitivity, the relevant object is the spectral density, S_mm'(ω), defined as ⟨δn_m(ω) δn_m'(ω')⟩ = 2π S_mm'(ω) δ(ω+ω'). Although we can straightforwardly compute this from (<ref>), the analytic computation is difficult for general receptor dynamics. For our purpose, we need only the long-term behavior (i.e., S_mm'(ω=0)), which can be determined indirectly, as shown below.

In the low-frequency region, by dropping the terms proportional to ω, (<ref>) can be simplified as

-∑_r ν_m,r k̅_r δn_α(r) ≈ ∑_r ν_m,r ( k̅'_r n̅_α(r) 𝒥 + ξ_r ).

In contrast to (<ref>), it is no longer possible to invert the left-hand side of (<ref>), because the coefficient matrix on the left-hand side is rank-deficient due to the conservation law ∑_m dn_m/dt = 0. One naive way to avoid this difficulty is to eliminate one of the M variables by using δn_m = -∑_m'≠m δn_m' and to express (<ref>) in terms of the remaining M-1 variables. However, this asymmetric treatment of the variables is inconvenient for the derivation of general formulas. A key step in our approach is to make use of the following relationships satisfied by ∂n̅_m/∂c̅ and ∂n̅_m/∂k̅_r,

-∑_r ν_m,r k̅_r ∂n̅_α(r)/∂c̅ = ∑_r ν_m,r k̅'_r n̅_α(r),
-∑_r ν_m,r k̅_r ∂n̅_α(r)/∂k̅_r' = ν_m,r' n̅_α(r'),

which can be easily obtained from (<ref>). The comparison of the coefficients in (<ref>) and (<ref>) implies that (<ref>) can be expressed as

δn_m ≈ (∂n̅_m/∂c̅) 𝒥 + ∑_r (1/n̅_α(r)) (∂n̅_m/∂k̅_r) ξ_r.
See the Appendix for a more rigorous derivation of (<ref>). The physical meaning of the step from (<ref>) to (<ref>) is that the low-frequency fluctuations δn_m(ω ≈ 0) can be determined from the dependence of the steady state on the external parameters, c̅ and k̅_r. We call the derivatives ∂n̅_m/∂c̅ and ∂n̅_m/∂k̅_r the susceptibilities of the steady state to c̅ and k̅_r, respectively. Finally, from (<ref>), (<ref>), and (<ref>), we obtain S_m,m'(ω=0) = 2τ_c c̅^2 (∂n̅_m/∂c̅)(∂n̅_m'/∂c̅) + S^reac_m,m', where S^reac_m,m' ≡ ∑_r (k̅_r/n̅_α(r)) (∂n̅_m/∂k̅_r)(∂n̅_m'/∂k̅_r) represents the contribution from the reaction noises, ξ_r. We note that, as in <cit.>, S^reac is usually computed by solving the continuous Lyapunov equation, A S^reac + S^reac A^T + Q = 0, where A is the Jacobian matrix defined as (A)_m,m' = ∑_r ν_m,r k̅_r δ_α(r),m', and Q is given by (Q)_m,m' = ∑_r ν_m,r ν_m',r k̅_r n̅_α(r). For the receptor dynamics (<ref>), however, S^reac can be obtained more easily and directly by solving (<ref>) for n̅({k̅_r}) and using (<ref>); a minimal numerical sketch of this route is given below. Similar to <cit.>, we assume that the cell “averages” the receptor states over a long period, T, and we quantify the sensitivity to ligand concentration, Δc, through the signal-to-noise ratio (SNR). Therefore, we analyze the time-averaged fluctuations δN_m ≡ (1/T) ∫ dt δn_m(t) and their covariances C_m,m' ≡ ⟨δN_m δN_m'⟩ = (1/T) S_mm'(ω=0). Suppose that a subset of receptor states (the active states), ℳ_a ⊂ {1,⋯,M}, generates the signals indicating the ligand concentration. The maximum SNR is then given by SNR = ∑_m,m'∈ℳ_a (∂n̅_m/∂c̅)(C^-1)_m,m' (∂n̅_m'/∂c̅) (Δc)^2. The maximum sensitivity (or resolution) can be estimated from the point at which the SNR equals one, which leads to (Δc/c̅)^2 = (1/T)(1/c̅^2) 1/[∑_m,m'∈ℳ_a (∂n̅_m/∂c̅) S^-1_m,m' (∂n̅_m'/∂c̅)]. By plugging (<ref>) into (<ref>), with some matrix manipulation the maximum sensitivity becomes (Δc/c̅)^2 = 2τ_c/T + (1/(T c̅^2)) 1/[∑_m,m'∈ℳ_a (∂n̅_m/∂c̅)(S^reac)^-1_m,m' (∂n̅_m'/∂c̅)]. The first term is the same as the BP limit, and the receptor kinetics enters only the second term, which is positive because S^reac is a covariance matrix. Therefore, we have proven that the sensitivity is bounded by the BP limit, regardless of whether the receptor dynamics is in an equilibrium or a non-equilibrium state. If, as is usually assumed, all ligand-binding rates are proportional to c̅, the second term in (<ref>) can be written as (1/T) 1/[∑_r,r'∈l.b. ∑_m,m'∈ℳ_a k̅_r (∂n̅_m/∂k̅_r)(S^reac)^-1_m,m' k̅_r' (∂n̅_m'/∂k̅_r')], where the summation over the reactions, r, r', runs over all ligand-binding reactions (l.b.). By utilizing a technique developed in <cit.>, the denominator in (<ref>) can be determined from the state-transition network of the receptor dynamics and expressed as a rational function of the rate constants, k̅_r (see the Appendix for details). Such an explicit formula for arbitrary single-receptor dynamics does not exist in the literature. This enables us to evaluate the sensitivity systematically, even for receptors with complex dynamics. As for the second term in (<ref>), we can separate out the information associated with ligand regulation (i.e., which reactions represent ligand-binding processes) by using ∂n̅_m/∂c̅ = ∑_r (∂n̅_m/∂k̅_r)(∂k̅_r/∂c̅).
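As a sketch of how S^reac can be evaluated in practice, here is the susceptibility route with the derivatives ∂n̅_m/∂k̅_r taken by central finite differences rather than analytically. The network and rate values are the hypothetical ones of the previous sketch; `steady_state` repeats the earlier construction so the snippet is self-contained.

```python
import numpy as np

def steady_state(k, alpha, beta, M):
    """Normalized steady state nbar({k_r}) of the jump network (toy helper)."""
    W = np.zeros((M, M))
    for r in range(len(k)):
        W[alpha[r], alpha[r]] -= k[r]
        W[beta[r],  alpha[r]] += k[r]
    A = np.vstack([W, np.ones(M)])
    b = np.zeros(M + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def S_reac(k, alpha, beta, M, eps=1e-6):
    """S^reac_{m,m'} = sum_r (k_r / nbar_{alpha(r)}) (dnbar_m/dk_r)(dnbar_m'/dk_r),
    with the susceptibilities approximated by central finite differences."""
    nbar = steady_state(k, alpha, beta, M)
    S = np.zeros((M, M))
    for r in range(len(k)):
        dk = np.zeros_like(k); dk[r] = eps * k[r]
        dn = (steady_state(k + dk, alpha, beta, M)
              - steady_state(k - dk, alpha, beta, M)) / (2 * eps * k[r])
        S += (k[r] / nbar[alpha[r]]) * np.outer(dn, dn)
    return S
```

For simple networks the result can be cross-checked against the Lyapunov-equation route quoted above, solved on the reduced (M-1)-dimensional system.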
In matrix notation, (<ref>) can be expressed as (Δc/c̅)^2 = 2τ_c/T + (1/(T c̅^2)) 1/[𝐤^T 𝐧^T (𝐧𝐃𝐧^T)^-1 𝐧𝐤], where 𝐤 is an R-dimensional vector with (𝐤)_r ≡ ∂k̅_r/∂c̅, 𝐧 is a |ℳ_a|×R matrix with (𝐧)_m,r ≡ ∂n̅_m/∂k̅_r, and 𝐃 is an R×R diagonal matrix with (𝐃)_r,r = k̅_r/n̅_α(r). The combination 𝐧^T (𝐧𝐃𝐧^T)^-1 𝐧 in the denominator is independent of ∂k̅_r/∂c̅ and is, therefore, determined without the information regarding which reactions are ligand-binding reactions. The information associated with ligand regulation is encoded in 𝐤. As an illustration, we first examine the simple receptor model studied by Bialek and Setayeshgar in <cit.>. In this model, the receptor has two states: a ligand-unbound state (m=1) and a ligand-bound state (m=2). The receptor dynamics is described by d/dt ( [ n_1; n_2 ] ) = ( [ -k_1 k_-1; k_1 -k_-1 ] ) ( [ n_1; n_2 ] ), with k_1 = k'_1 c. We assume that the cell “estimates” the ligand concentration from n_2 (i.e., ℳ_a = {2}). Note that the resulting sensitivity is the same for ℳ_a = {1}, because δn_1 = -δn_2. The maximum sensitivity (<ref>) becomes (Δc/c̅)^2 = 2τ_c/T + (1/T) · 2(k̅_1+k_-1)/(k̅_1 k_-1), which agrees exactly with the result derived from the FDT in <cit.>. We note that, although the approach based on the FDT gives only the sum of the two terms in (<ref>), our method determines them separately, which makes clear the physical origins of the two terms: the contribution from the effective diffusional noise, 𝒥, and that from the reaction noises, ξ_r, respectively. For more nontrivial and biologically relevant receptor dynamics, we consider a kinetic proofreading model <cit.> and compare it with its reversible-reaction analogue (Fig. <ref>). The kinetic proofreading model was originally proposed to explain the ability of T-cell receptors to discriminate foreign antigens from self-antigens based on relatively small differences in ligand affinities. Similar to the kinetic proofreading model of DNA synthesis <cit.>, this model utilizes multiple irreversible steps, resulting in large differences in the production of active states depending on the affinity. We remark that here we examine the sensitivity to a single ligand concentration; for a receptor model interacting with spurious ligands, see <cit.>. In the kinetic proofreading model, the bare receptor binds a ligand molecule (with rate k_1 = k'c), and the ligand-bound state is then phosphorylated up to M-2 times (with rate k_p for each modification). The phosphorylated states revert to the unbound state with transition rate k_-1. By contrast, the reversible model consists of a ligand-binding reaction (with rate k_1 = k'c), M-2 forward reactions (with rate k_p), and M-1 backward reactions (with rate k_-1). We assume that only the final state is active and sends the signals indicating the ligand concentration (i.e., ℳ_a = {M}). Introducing the dimensionless parameters κ_1, κ_-1 via k_1 = κ_1 k_p, k_-1 = κ_-1 k_p, we can express the maximum sensitivity, (<ref>), in the following form: (Δc/c̅)^2 = 2τ_c/T + F_M(κ_1,κ_-1)/(k_p T), where F_M is a dimensionless factor that depends on κ_1, κ_-1 (see the Appendix for the explicit expression of F_M). Before presenting the numerical results, we estimate the two terms in (<ref>) required for acceptably accurate sensing. Thus far, we have considered a single receptor. When a cell has many independent receptors, the sensing accuracy of the entire cell is estimated by dividing (<ref>) by the total number of receptors expressed on the cell surface, which we assume to be ∼ 10^4.
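A rough numerical check of these orders of magnitude is sketched below. The parameter values are single representatives picked from the ranges quoted in the next paragraph, F_M is left as a model input, and the sketch simply evaluates the two terms of the sensitivity formula literally — it is an illustration, not a calibrated estimate.

```python
import numpy as np

# Representative values; the ranges themselves are quoted in the next paragraph.
D     = 1.0          # ligand diffusion constant [um^2/s]   (assumed value)
a     = 1e-2         # receptor linear size [um], a = Lambda/pi
cbar  = 1e3          # ligand concentration [1/um^3]        (assumed value)
k_p   = 1e-2         # modification rate [1/s]              (assumed value)
T     = 1e3          # integration time [s]
N_rec = 1e4          # independent receptors on the cell surface

Lambda = np.pi / a                            # UV cutoff
tau_c  = Lambda / (2 * np.pi**2 * D * cbar)   # diffusional correlation time

F_M = 1e2                                     # receptor-dependent factor (model input)
term_BP   = 2 * tau_c / T                     # Berg-Purcell term, per receptor
term_reac = F_M / (k_p * T)                   # reaction-noise term, per receptor

# Cell-level (dc/c)^2: divide the single-receptor result by N_rec.
dc_over_c2 = (term_BP + term_reac) / N_rec
print(tau_c, term_BP, term_reac, np.sqrt(dc_over_c2))
```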
We estimate τ_c = 10^-1 - 10^3 sec (we used D = 10^-1 - 10^1 μm^2/sec, a linear receptor dimension a ≡ Λ/π ∼ 10^-2 μm, and c̅ = 10^2 - 10^4 /μm^3), and the rate constant k_p = 10^-3 - 10^-1 sec^-1 (see <cit.> for this estimate). Using these values, while the first term in (<ref>) is acceptably small for integration times T ∼ 10^3 sec, the second term remains below 𝒪(1) only if F_M < 10^6. Therefore, in the following, we focus on the receptor-dependent part of (<ref>), F_M. Fig. <ref> shows the numerical results for F_M(κ_1,κ_-1) at M = 8 in the two models. In the region κ_-1 > 1 (the upper-half region of Fig. <ref>), corresponding to rapid dissociation, the sensitivities in the two models behave in a qualitatively similar way: F_M is large, except for κ_-1 ∼ 1, and, as κ_-1 increases, F_M becomes larger (i.e., the sensitivity becomes worse) rapidly. By contrast, in the region κ_-1 < 1, corresponding to slow dissociation, the behaviors of the two models differ qualitatively. While F_M is large in the reversible model, in the kinetic proofreading model F_M does not depend significantly on κ_-1 and remains at a lower level. Therefore, when κ_-1 < 1 in the kinetic proofreading model, accurate sensing is possible over a wide range of κ_1 or, equivalently, of ligand concentration, because κ_1 = k'c̅/k_p. Next, we examine the dependence of F_M on the length of the reaction chain, M (see Fig. <ref> (Left)). For simplicity of the analysis, we set κ_1 = 1. From the analytical expression for F_M in the Appendix, we can show that in both models F_M asymptotically approaches F_M ∼ 2κ_-1^M-2 when κ_-1 ≫ 1, so that the sensitivity deteriorates exponentially as M becomes large. However, when κ_-1 ≪ 1, while F_M ∼ 2/κ_-1^M-3 in the reversible model, which is again exponential in M, F_M ∼ M(M-1)/κ_-1 in the kinetic proofreading model, which depends on M only algebraically. Therefore, when κ_-1 < 1 and M is large, the sensitivity is much higher in the kinetic proofreading model than in the reversible model. Note that in either model, for fixed κ_-1, the sensitivity declines monotonically as M increases. From where does the discrepancy in performance between the two models originate? The sensitivity is determined from the ratio between the (squared) susceptibility, (k̅_1 ∂n̅_M/∂k̅_1)^2, and the fluctuation, S^reac_M,M (see (<ref>)). As shown in Fig. <ref> (right), the value of S^reac_M,M does not differ significantly between the two models. Therefore, the higher accuracy of the kinetic proofreading model essentially derives from its higher susceptibility, which can be understood as follows. In the reversible model, n̅_i/n̅_i-1 = 1/κ_-1 for i=3,…,M. Therefore, when κ_-1 < 1, the dependence of n̅_M on k̅_1 diminishes along the long reaction chain, because a large factor, 1/κ_-1, is multiplied in at each step toward the active state. By contrast, in the kinetic proofreading model, n̅_i/n̅_i-1 = 1/(1+κ_-1) for i=3,…,M-1, which is not large when κ_-1 < 1; therefore, the dependence on k̅_1 is maintained along the reaction chain. We note that, in the study of T-cell receptors in <cit.>, it is the susceptibility to the dissociation constant, ∂n̅_M/∂k_-1, that leads to T-cell receptor selectivity. What we discuss here, by contrast, is the susceptibility to the ligand concentration, ∂n̅_M/∂k̅_1, which is the quantity relevant for the concentration sensitivity. We now provide a quantitative analysis of the susceptibility, ∂n̅_M/∂c̅. In the kinetic proofreading model, we have n̅_M = k_1/[(k_1+k_m)(1+κ_-1)^M-2].
Therefore, when κ_-1 = k_m/k_p > 1, n̅_M is highly suppressed for large M, as is ∂n̅_M/∂c̅, which appears in the denominator of (<ref>). For κ_-1 < 1, there is no such suppression. In the reversible model, n̅_M = k_1/(k_p κ_-1^M-1 + k_1 ∑_i=0^M-2 κ_-1^i). This approaches a ligand-independent constant when κ_-1 < 1 and becomes small for κ_-1 > 1. Therefore, the derivative ∂n̅_M/∂c̅ is small, except for κ_-1 ∼ 1. In fact, in the case k_1 = k_p, ∂n̅_M/∂c̅ ≃ 1/∑_i=-(M-1)/2^(M-1)/2 κ_-1^i, which is symmetric under κ_-1 → κ_-1^-1 and tiny for either κ_-1 > 1 or κ_-1 < 1 when M is large. In summary, for precise sensing, the receptor cannot afford many intermediate modification steps over the broad range of κ_-1 in the reversible model. In the kinetic proofreading model, however, precise sensing is compatible with many internal states, as long as κ_-1 < 1. Explicitly, the maximum sensitivity reads (Δc/c̅)^2 = 2τ_c/T + 2(k_m+k_p)(∑_i=0^M-2 [ M; i ] k_m^M-2-i k_p^i)/(k_m k_p^M-1 T) = 2τ_c/T + 2(1+κ_-1){(1+κ_-1)^M - Mκ_-1 - 1}/(k_p T κ_-1^3) for the proofreading model, and (Δc/c̅)^2 = 2τ_c/T + 2(∑_i=1^M k_m^M-i k_p^i-1)(∑_i=0^2M-4 b_i k_p^2M-4-i k_m^i)/(k_m^2M-3 k_p^M-1 T) = 2τ_c/T + 2(∑_i=0^M-1 κ_-1^i)(∑_i=0^2M-4 b_i κ_-1^i)/(k_p T κ_-1^M-3) for the reversible model, where the coefficients b_i are b_i = (1/2)(i+1)(i+2) for i ≤ M-2 and b_i = b_2M-4-i for i > M-2. In this Letter, we have derived a general formula for the sensitivity, (<ref>), by explicitly accounting for the diffusional and reaction noises and utilizing a method similar to that developed in <cit.>. The sensitivity formula (<ref>) consists of the BP limit and a term determined by the network topology of the receptor dynamics. Our result is novel in that the assumption of thermal equilibrium is not required, and the formula is applicable to any instance of receptor dynamics. The framework of stochastic diffusion equations can serve as the basis for further research into more complex, realistic ligand-receptor dynamics. For example, a potential generalization is the case where, in addition to the ligand whose concentration the receptor estimates, the receptor is regulated by other (freely diffusing) ligand species. In this case, as shown in the Appendix, S^reac in (<ref>) is replaced by S^reac_m,m' → S^reac_m,m' + ∑_i 2τ_i c̅_i^2 (∂n̅_m/∂c̅_i)(∂n̅_m'/∂c̅_i), where i labels the other ligand species, with concentration c̅_i and diffusion constant D_i, and τ_i ≡ Λ/(2π^2 D_i c̅_i). We can also investigate reacting ligands by replacing (<ref>) with reaction-diffusion equations. Another biologically relevant and theoretically challenging extension involves dynamically interacting receptors, for example through ligand-regulated oligomerization, as in the epidermal growth factor (EGF) receptors <cit.>. We hope to report progress in these directions in the near future. This work was partially supported by CREST, Japan Science and Technology Agency. We also express our appreciation to Michio Hitoshima, Atsushi Mochizuki, Alan D. Rendall, and Yasushi Sako for their inspiring discussions related to this work. *99BP H. C. Berg and E. M. Purcell, Biophysical Journal 20.2 (1977): 193-219. WS1 W. Bialek and S. Setayeshgar, Proceedings of the National Academy of Sciences of the United States of America 102.29 (2005): 10040-10045. Kubo R. Kubo, Reports on Progress in Physics 29.1 (1966): 255. WS2 W. Bialek and S. Setayeshgar, Physical Review Letters 100.25 (2008): 258101. Kaizu K. Kaizu et al., Biophysical Journal 106.4 (2014): 976-985. Wingreen1 R. G. Endres and N. S. Wingreen, Proceedings of the National Academy of Sciences 105.41 (2008): 15749-15754. Wingreen2 R. G. Endres and N. S.
Wingreen, Physical Review Letters 103.15 (2009): 158101. Wingreen3 T. Mora and N. S. Wingreen, Physical Review Letters 104.24 (2010): 248101. Wingreen6 M. Skoge, Y. Meir, and N. S. Wingreen, Physical Review Letters 107.17 (2011): 178101. Wingreen4 V. Sourjik and N. S. Wingreen, Current Opinion in Cell Biology 24.2 (2012): 262-268. Wingreen5 M. Skoge et al., Physical Review Letters 110.24 (2013): 248102. Endres1 G. Aquino and R. G. Endres, Physical Review E 81.2 (2010): 021909. Endres2 G. Aquino and R. G. Endres, Physical Review E 82.4 (2010): 041902. Mora T. Mora, Physical Review Letters 115.3 (2015): 038102. Levine1 W. J. Rappel and H. Levine, Proceedings of the National Academy of Sciences 105.49 (2008): 19270-19275. Levine2 W. J. Rappel and H. Levine, Physical Review Letters 100.22 (2008): 228101. Levine3 B. Hu, W. Chen, W. J. Rappel, and H. Levine, Physical Review Letters 105.4 (2010): 048104. Govern C. C. Govern and P. R. ten Wolde, Physical Review Letters 113.25 (2014): 258102. Lang A. H. Lang, C. K. Fisher, and T. Mora, Physical Review Letters 113.14 (2014): 148103. Fancher S. Fancher and A. Mugler, Physical Review Letters 118.7 (2017): 078101. HF J. J. Hopfield, Proceedings of the National Academy of Sciences 71.10 (1974): 4135-4139. VK N. G. van Kampen, Canadian Journal of Physics 39.4 (1961): 551-567. ZOS J. M. Ortiz de Zarate and J. V. Sengers, Hydrodynamic Fluctuations in Fluids and Fluid Mixtures. Elsevier Science, Amsterdam, Netherlands, 2006. Gardiner C. W. Gardiner, Stochastic Methods. Springer-Verlag, Berlin-Heidelberg-New York-Tokyo, 1985. Mochizuki_main A. Mochizuki and B. Fiedler, Journal of Theoretical Biology 367 (2015): 189-202. monomolecular_main B. Fiedler and A. Mochizuki, Mathematical Methods in the Applied Sciences 38 (2015): 3381-3600. OM_main T. Okada and A. Mochizuki, Physical Review Letters 117.4 (2016): 048101. Mc T. W. McKeithan, Proceedings of the National Academy of Sciences 92.11 (1995): 5042-5046. para1 J. D. Stone, A. S. Chervin, and D. M. Kranz, Immunology 126.2 (2009): 165-176. para2 M. Hsieh et al., BMC Systems Biology 4.1 (2010): 57. para3 H. Shankaran, H. S. Wiley, and H. Resat, Biophysical Journal 90.11 (2006): 3993-4009. HH C. H. Heldin, Cell 80.2 (1995): 213-223. Supplementary Material Ligand-concentration sensitivity of a multi-state receptor Takashi Okada Theoretical Biology Laboratory, RIKEN, Wako 351-0198, Japan § DERIVATION OF EQ. (<REF>) In the main text, Eq. (<ref>) is obtained by comparing the coefficients of (<ref>) and (<ref>). Here, we derive (<ref>) more rigorously by explicitly calculating the receptor susceptibility ∂_r n̅_m ≡ ∂n̅_m/∂k̅_r, which we first determine from (<ref>). As mentioned in the main text, the coefficient matrix on the right-hand side of (<ref>) is rank-deficient, and we cannot invert it straightforwardly. A systematic way to solve this problem is to combine the receptor susceptibility with the flux susceptibility, as studied in <cit.>. We write the r-th transition rate at steady state as f_r({k̅_r}) ≡ k̅_r n̅_α(r), where {k̅_r} denotes the set of all reaction rate constants. Note that f_r depends not only on k̅_r but also on k̅_r' with r' ≠ r, through n̅_α(r). The flux susceptibility is given by ∂f_r/∂k̅_r'. We take a basis of the right-kernel space of the stoichiometric matrix ν in (<ref>) as {V^1,…,V^K}, where K is the dimension of the null space. By definition, the vector f = (f_1,…,f_R) is an element of the null space, i.e., f = ∑_κ μ_κ V^κ with coefficients μ_κ({k̅_r}).
Therefore, the flux susceptibility can be written as ∂f_r/∂k̅_r' = ∑_κ=1^K (∂_r' μ_κ) V^κ_r, where ∂_r' μ_κ ≡ ∂μ_κ/∂k̅_r' is determined below, and V^κ_r is the r-th component of V^κ, i.e., (V^κ)_r = V^κ_r. By using the definition f_r ≡ k̅_r n̅_α(r) and rearranging the terms in (<ref>), we obtain -k̅_r ∂_r' n̅_α(r) + ∑_κ=1^K (∂_r' μ_κ) V^κ_r = δ_r,r' n̅_α(r). In addition, we have ∑_m=1^M ∂_r n̅_m = 0, because the total receptor number is fixed to unity. (<ref>) and (<ref>) can be summarized in a single matrix equation as A ( [ ∂_1 n̅ … ∂_R n̅; ∂_1 μ … ∂_R μ ] ) = ( [ e_1 … e_R; 0 … 0 ] ). Here, the left factor stacks the M-dimensional column vectors ∂_r n̅ ≡ (∂_r n̅_1,…,∂_r n̅_M)^T on top of the K-dimensional column vectors ∂_r μ ≡ (∂_r μ_1,…,∂_r μ_K)^T, and e_r is the R-dimensional column vector whose components are given by (e_r)_r' = δ_r,r' n̅_α(r). Finally, A is the matrix defined by A ≡ ( [ -k_R×M V^1 … V^K; 1 … 1 0 … 0 ] ), whose upper-left R×M block is -k_R×M, with (k_R×M)_r,m = k̅_r δ_α(r),m, whose upper-right R×K block collects the kernel vectors V^κ as columns, and whose last row consists of M ones followed by K zeros. Note that A is square, that is, R+1 = M+K. The whole set of receptor susceptibilities ∂n̅_m/∂k̅_r is obtained from (<ref>) all at once by inverting the matrix A: ∂_r n̅_m = (A^-1)_m,r n̅_α(r). Although we do not use it, the flux susceptibility is also determined by substituting into (<ref>) the coefficients ∂_r' μ_κ obtained from (<ref>). We emphasize that the matrix A is determined by the deterministic receptor dynamics (<ref>), and so are the receptor susceptibilities. The same method can be applied to (<ref>). Since the matrix ν multiplies both sides of (<ref>), we can express (<ref>) as -k̅_r δn_α(r) + ∑_κ=1^K δμ_κ V^κ_r = k̅'_r n̅_α(r) 𝒥 + ξ_r, where the δμ_κ(ω) are the coefficients of the null vectors, determined below. In addition, since ∑_m=1^M δn_m(t) = 0, we also have ∑_m=1^M δn_m(ω) = 0. As in (<ref>), (<ref>) and (<ref>) can be collected into the single equation A ( [ δn; δμ ] ) = ( [ e; 0 ] ), where δn = (δn_1,…,δn_M)^T, δμ = (δμ_1,…,δμ_K)^T, and e is the R-dimensional vector with (e)_r = k̅'_r n̅_α(r) 𝒥 + ξ_r. From (<ref>) and (<ref>), we obtain δn_m = ∑_r=1^R (A^-1)_m,r (e)_r = ∑_r=1^R (∂_r n̅_m/n̅_α(r)) (k̅'_r n̅_α(r) 𝒥 + ξ_r) = ∑_r=1^R (k̅'_r ∂_r n̅_m 𝒥 + (∂_r n̅_m/n̅_α(r)) ξ_r) = (∂n̅_m/∂c̅) 𝒥 + ∑_r=1^R (∂_r n̅_m/n̅_α(r)) ξ_r. § CALCULATING SENSITIVITY FROM MATRIX ALGEBRA As can be seen from (<ref>) and (<ref>), n̅_m and the derivatives ∂n̅_m/∂k̅_r are needed for the calculation of the ligand sensitivity. Fortunately, however, no differential computation is required. As we prove below, all we need to do is compute A^-1; the steady state is then given by n̅_m = A^-1_m,R+1, and, from (<ref>), the susceptibility becomes ∂n̅_m/∂k̅_r = A^-1_m,r A^-1_α(r),R+1. Before proving (<ref>), we state the formula for the sensitivity by expressing (<ref>) in terms of A. By using the above two equations, (<ref>) is expressed as (1/T) (∑_r,r'∈l.b. ∑_m,m'∈ℳ_a k̅_r A^-1_m,r A^-1_α(r),R+1 (S^reac)^-1_m,m' k̅_r' A^-1_m',r' A^-1_α(r'),R+1)^-1, where (S^reac)^-1 is the inverse of the matrix (<ref>), which is expressed as S^reac_m,m' = ∑_r=1^R A^-1_m,r k̅_r A^-1_α(r),R+1 A^-1_m',r, with m, m' ∈ ℳ_a. In this way, the sensitivity can be computed by constructing A from the receptor dynamics (<ref>) and computing its inverse A^-1. (<ref>) can be proved as follows.
The steady state n̅_m is determined by the conditions ∑_r ν_m,r k̅_r n̅_α(r) = 0 and ∑_m n̅_m = 1. As in (<ref>), expressing the first condition in terms of the kernel basis of ν as k̅_r n̅_α(r) = ∑_κ=1^K μ_κ V^κ_r, the two conditions (<ref>) can be summarized as A ( [ n̅; μ ] ) = ( [ 0; 1 ] ), where n̅ ≡ (n̅_1,…,n̅_M)^T, μ ≡ (μ_1,…,μ_K)^T, and 0 is the R-dimensional zero vector. This proves n̅_m = A^-1_m,R+1. § ILLUSTRATION OF SENSITIVITY COMPUTATIONS We first illustrate the computational method of the previous section for the simplest model (<ref>). The stoichiometric matrix is given by ν = ( [ -1 1; 1 -1 ] ), which has the kernel vector V^1 = (1,1)^T; its normalization is irrelevant. A is given by A = ( [ -k̅_1 0 1; 0 -k_-1 1; 1 1 0 ] ). The inverse becomes A^-1 = ( [ -1/(k̅_1+k_-1) 1/(k̅_1+k_-1) k_-1/(k̅_1+k_-1); 1/(k̅_1+k_-1) -1/(k̅_1+k_-1) k̅_1/(k̅_1+k_-1); -k_-1/(k̅_1+k_-1) -k̅_1/(k̅_1+k_-1) -k̅_1 k_-1/(k̅_1+k_-1) ] ). Note that the upper-right block of A^-1 gives the steady-state fractions (n̅_1, n̅_2), and the susceptibilities ∂n̅_m/∂k̅_r are obtained from its upper-left block using (<ref>). We assume that the active state is the second state, i.e., ℳ_a = {2}. Then, from (<ref>), we obtain S^reac_2,2 = 2k̅_1 k_-1/(k̅_1+k_-1)^3. Noting that the first reaction is a ligand-binding reaction and α(1) = 1, the sensitivity (<ref>) is computed as (1/T) S^reac_2,2/(k̅_1 A^-1_2,1 A^-1_1,3)^2 = 2(k̅_1+k_-1)/(k̅_1 k_-1 T). This is the second term of (<ref>) in the main text. Next, we illustrate the method for the kinetic proofreading model with M = 4 and R = 6. In this case, the matrix ν is given by ν = ( [ -1 0 0 1 1 1; 1 -1 0 -1 0 0; 0 1 -1 0 -1 0; 0 0 1 0 0 -1 ] ). The six columns represent the ligand-binding reaction, two phosphorylation reactions, and three dissociation reactions, respectively. The matrix A becomes A = ( [ -k̅_1 0 0 0 -1 -1 -1; 0 -k_p 0 0 -1 -1 0; 0 0 -k_p 0 -1 0 0; 0 -k_-1 0 0 0 0 -1; 0 0 -k_-1 0 0 -1 0; 0 0 0 -k_-1 -1 0 0; 1 1 1 1 0 0 0 ] ). Setting k̅_1 = k_p (i.e., κ_1 = 1) and k_-1 = κ_-1 k_p, and writing α ≡ κ_-1 for brevity, the inverse becomes A^-1 = ( [ -1/(k_p(1+α)) 0 0 1/(k_p(1+α)) 1/(k_p(1+α)) 1/(k_p(1+α)) α/(α+1); α/(k_p(α+1)^2) -1/(k_p(1+α)) 0 -α/(k_p(α+1)^2) 1/(k_p(α+1)^2) 1/(k_p(α+1)^2) α/(α+1)^2; α/(k_p(α+1)^3) α/(k_p(α+1)^2) -1/(k_p(1+α)) -α/(k_p(α+1)^3) -α(α+2)/(k_p(α+1)^3) 1/(k_p(α+1)^3) α/(α+1)^3; 1/(k_p(α+1)^3) 1/(k_p(α+1)^2) 1/(k_p(1+α)) -1/(k_p(α+1)^3) -(α+2)/(k_p(α+1)^3) -(α(α+3)+3)/(k_p(α+1)^3) 1/(α+1)^3; -α/(α+1)^3 -α/(α+1)^2 -α/(α+1) α/(α+1)^3 α(α+2)/(α+1)^3 -1/(α+1)^3 -k_p α/(α+1)^3; -α^2/(α+1)^3 -α^2/(α+1)^2 α/(α+1) α^2/(α+1)^3 -(α(α+3)+1)/(α+1)^3 -α/(α+1)^3 -k_p α^2/(α+1)^3; -α^2/(α+1)^2 α/(α+1) 0 -(2α+1)/(α+1)^2 -α/(α+1)^2 -α/(α+1)^2 -k_p α^2/(α+1)^2 ] ). Note that the upper-right 4×1 block of A^-1 gives the steady-state fractions (n̅_1, n̅_2, n̅_3, n̅_4), and the upper-left 4×6 block is proportional to the susceptibilities ∂n̅_m/∂k̅_r; a direct numerical inversion reproducing these blocks is sketched below.
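As a minimal numerical cross-check (assuming κ_1 = 1 and an illustrative κ_-1 = 0.5, neither value prescribed by the text), the following sketch builds A for the M = 4 proofreading model exactly as displayed above, inverts it, and reads off n̅_m = A^-1_m,R+1 together with one susceptibility:

```python
import numpy as np

# Kinetic proofreading with M = 4 states, R = 6 reactions (as in the text).
kp, kappa = 1.0, 0.5          # k_p and kappa_{-1}; k1 = kp (kappa_1 = 1)
k1, km = kp, kappa * kp

# A: upper-left block -k_{RxM}, upper-right block = kernel vectors of nu,
# last row (1,1,1,1,0,0,0); layout copied from the display above.
A = np.array([
    [-k1,   0.,   0.,  0., -1., -1., -1.],
    [ 0.,  -kp,   0.,  0., -1., -1.,  0.],
    [ 0.,   0.,  -kp,  0., -1.,  0.,  0.],
    [ 0.,  -km,   0.,  0.,  0.,  0., -1.],
    [ 0.,   0.,  -km,  0.,  0., -1.,  0.],
    [ 0.,   0.,   0., -km, -1.,  0.,  0.],
    [ 1.,   1.,   1.,  1.,  0.,  0.,  0.],
])
Ainv = np.linalg.inv(A)

nbar = Ainv[:4, -1]                     # nbar_m = (A^-1)_{m, R+1}
print("nbar_4 =", nbar[3], " closed form:", 1.0 / (1.0 + kappa)**3)

# Susceptibility of the active state to the binding rate (r = 1, alpha(1) = 1):
dn4_dk1 = Ainv[3, 0] * Ainv[0, -1]      # (A^-1)_{m,r} (A^-1)_{alpha(r), R+1}
print("dnbar_4/dk1 =", dn4_dk1)
```

The printed n̅_4 matches the last column of the fourth row of the displayed A^-1, 1/(α+1)^3, and the four steady-state fractions sum to one.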
Then, assuming that only the fourth state is active and generates signals, S^reac_4,4, given by (<ref>), becomes S^reac_4,4 = 2κ_-1(κ_-1^2 + 4κ_-1 + 6)/((κ_-1+1)^7 k_p), and, from (<ref>), the sensitivity associated with the receptor kinetics, i.e., the second term of (<ref>), becomes F_M=4/(k_p T) = (1/(k_p T)) · 2(1+κ_-1)(κ_-1^2 + 4κ_-1 + 6)/κ_-1. For general M, we obtain the following results. In the proofreading model, F_M = 2(1+κ_-1){(1+κ_-1)^M - Mκ_-1 - 1}/κ_-1^3, and, in the reversible model, F_M = 2(∑_i=0^M-1 κ_-1^i)(∑_i=0^2M-4 b_i κ_-1^i)/κ_-1^M-3. The coefficients b_i are given by b_i = (1/2)(i+1)(i+2) for i ≤ M-2 and b_i = b_2M-4-i for i > M-2. Note that, for M = 2, both (<ref>) and (<ref>) reproduce (<ref>). § GENERALIZATION TO RECEPTOR BINDING WITH MULTIPLE (FREELY DIFFUSING) LIGAND SPECIES We can apply the method in the main text to the case where a single receptor binds multiple ligand species. In this case, the deterministic dynamics is described by ∂c_i/∂t = D_i ∇^2_3d c_i - δ^(3d)(𝐱) (d/dt)∑_m l^(i)_m n_m, d n_m/d t = ∑_r ν_m,r k_r n_α(r). Here, c_i(𝐱,t) is the concentration of the i-th (i=1,…,I) ligand species, l^(i)_m is the number of i-th ligand molecules bound in the m-th receptor state, and D_i is the diffusion constant of the i-th ligand. The stochastic dynamics at the linear level is described by ∂δc_i/∂t = D_i ∇^2_3d δc_i - δ^(3d)(𝐱) (d/dt)∑_m l^(i)_m δn_m - ∇_3d·J_i, d δn_m/d t = ∑_r ν_m,r (k̅_r δn_α(r) + ∑_i (∂k̅_r/∂c̅_i) n̅_α(r) δc_i(𝐱=0,t)) + ∑_r ν_m,r ξ_r. Here, J_i is the diffusional noise of the i-th ligand, satisfying ⟨J_i,a(𝐱,t) J_j,b(𝐱',t')⟩ = 2 δ_i,j δ_a,b D_i c̅_i δ(t-t') δ(𝐱-𝐱'), with a,b = x,y,z. ξ_r is the noise associated with the r-th reaction (transition) of the receptor dynamics. Along the same lines as (<ref>)-(<ref>), the spectral density S_m m'(ω) becomes S_m,m'(ω) = ∑_i 2τ_i c̅_i^2 (∂n̅_m/∂c̅_i)(∂n̅_m'/∂c̅_i) + S^reac_m,m', with τ_i ≡ Λ/(2π^2 D_i c̅_i). By substituting (<ref>) into (<ref>), after splitting the sum over the ligand label i into the species whose concentration the cell needs to estimate and the others, we obtain the result (<ref>) in the main text. *99 Mochizuki A. Mochizuki and B. Fiedler, Journal of Theoretical Biology 367 (2015): 189-202. monomolecular B. Fiedler and A. Mochizuki, Math. Meth. Appl. Sci. 38 (2015): 3381-3600. OM T. Okada and A. Mochizuki, Law of localization in chemical reaction networks, Physical Review Letters 117.4 (2016): 048101.
http://arxiv.org/abs/1706.08346v1
{ "authors": [ "Takashi Okada" ], "categories": [ "physics.bio-ph", "q-bio.MN", "q-bio.SC" ], "primary_category": "physics.bio-ph", "published": "20170626124324", "title": "Ligand-concentration sensitivity of a multi-state receptor" }
[email protected] Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft, NetherlandsConsiglio Nazionale delle Ricerche CNR-SPIN, UOS L'Aquila, Sede Temporanea di Chieti, 66100 Chieti, ItalyKavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft, Netherlands Electron Microscopy for Materials Science (EMAT), University of Antwerp, 2020 Antwerp, BelgiumKavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft, NetherlandsLaboratoire de Physique des Solides, Bat 510, Université Paris-Sud, 91405 Orsay, FranceConsiglio Nazionale delle Ricerche CNR-SPIN, UOS L'Aquila, Sede Temporanea di Chieti, 66100 Chieti, ItalyKavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft, NetherlandsWe investigate the thickness-dependent electronic structure of ultrathin SrIrO3 and discover a transition from a semimetallic to a correlated insulating state below 4 unit cells. Low-temperature magnetoconductance measurements show that spin fluctuations in the semimetallic state are significantly enhanced while approaching the transition point. The electronic structure is further studied by scanning tunneling spectroscopy, showing that 4 unit cells SrIrO3 is on the verge of a gap opening. Our density functional theory calculations reproduce the critical thickness of the transition and show that the opening of a gap in ultrathin SrIrO3 is accompanied by antiferromagnetic order.Spin-orbit semimetal SrIrO3 in the two-dimensional limit A. D. Caviglia December 30, 2023 ======================================================== Recent advancements in oxide thin film technology have enabled the synthesis of complex materials at the atomic scale. Through interface and strain engineering it is possible to tailor the delicate balance between competing energy scales and control the ground state of quantum materials <cit.>.In the two-dimensional limit, the coordination of constituent ions at the interfaces is reduced, typically yielding a decrease of the electronic bandwidth W. At a critical thickness depending on the relative magnitude of W and the Coulomb repulsion U, a metal-insulator transition can occur <cit.>. This approach has been applied to study the dimensionality-driven metal-insulator transition (MIT) in 3d transition metal oxides such as SrVO3 and LaNiO3, where a transition from a bulk-like correlated metallic phase to a Mott or static ordered insulating phase occurs in the two-dimensional limit <cit.>.In this Letter, we consider the 5d oxide SrIrO3 which, in the three-dimensional limit, is a narrow-band semimetal bordering a Mott transition due to a combination of strong spin-orbit coupling (SOC) and electron correlations <cit.>. We find that an MIT occurs at a film thickness between 3 and 4 unit cells and study the evolution of the electronic structure across the transition by (magneto)transport and scanning tunneling spectroscopy (STS). The paramagnetic susceptibility is found to be strongly enhanced while approaching the transition point, which is indicative of the opening of a Mott gap and the concomitant enhancement of magnetic order <cit.>. Our results are supported by first-principles density functional theory (DFT) calculations, which reproduce the critical thickness of the transition and show that the insulating state in the two-dimensional limit is antiferromagnetically ordered. 
Our study highlights ultrathin SrIrO3 as a novel platform for engineering the interplay of magnetism and spin-orbit coupling at oxide interfaces. SrIrO3 (n = ∞) is the only (semi)metallic member of the Ruddlesden-Popper series of strontium iridates Sr_n+1Ir_nO_3n+1. On the other end of the series, two-dimensional Sr2IrO4 (n=1) is a Mott insulator with canted antiferromagnetic order. Despite the extended 5d orbitals, narrow, half-filled J_eff = 1/2 bands emerge due to the strong SOC (∼0.4 eV), and even a relatively small U ∼ 0.5 eV is sufficient to induce a so-called spin-orbit Mott ground state <cit.>. In SrIrO3, the effective electronic correlations are smaller due to the three-dimensional corner-sharing octahedral network <cit.>, but the strong SOC still causes a significant reduction of the density of states (DOS) at the Fermi level. Together with octahedral rotations that reduce the crystal symmetry, this places the material at the border of a Mott transition and gives rise to an exotic semimetallic state <cit.>. To study changes in electronic structure between the two end members of the Ruddlesden-Popper series, previous studies have focused on SrTiO3/SrIrO3 superlattices <cit.>. In this system, the crossover from three-dimensional semimetal to two-dimensional insulator was investigated by reducing the number of SrIrO3 layers. However, it was recently shown that additional hopping channels between the Ir atoms are activated by the SrTiO3 between SrIrO3 layers, increasing the bandwidth and reducing the effective strength of correlations <cit.>. In the present work, we isolate the effect of dimensionality by studying SrIrO3 layers of different thickness, providing access to the intrinsic properties of SrIrO3 in the two-dimensional limit. A series of SrIrO3 films with thicknesses varying from 30 to 2 unit cells were grown by pulsed laser deposition (PLD) on TiO2-terminated SrTiO3(001) substrates. As described in previous work, we use a SrTiO3 cap layer to prevent degradation of the film in ambient conditions and enable lithographic processing <cit.>. Atomic scale characterization of the lattice structure was performed by Cs-corrected high angle annular dark field scanning transmission electron microscopy (HAADF-STEM). Hall bars were patterned by e-beam lithography, and the buried SrIrO3 layer was contacted by Ar etching and in-situ deposition of Pd/Au contacts, resulting in low-resistance Ohmic contacts. Transport measurements were performed in a He flow cryostat with a 10 T superconducting magnet and a base temperature of 1.5 K. Uncapped SrIrO3 films were transferred in an N2 atmosphere from the PLD chamber to the low-temperature scanning tunneling microscopy (STM) setup without exposure to ambient conditions. More details regarding the growth and sample characterization can be found in the supplementary material <cit.> and in Ref. <cit.>. First-principles DFT calculations were performed within the Generalized Gradient Approximation using the plane wave VASP <cit.> package and PBEsol for the exchange-correlation functional <cit.>, with SOC. The Hubbard U effects on the Ir and Ti sites were included. To find a unique value of the Coulomb repulsion for the Ir 5d states, U was tuned in order to reproduce the experimental semimetallic behavior at 4 u.c., while we used J_H = 0.15U.
Using this approach we obtained U = 1.50 eV, which is in good agreement with the typical values used for weakly correlated Ir compounds <cit.>.Figure <ref>(a) shows an optical image of a Hall bar used for transport measurements. The image is taken prior to the removal of the resist mask used to protect the film during the Ar etching step. A HAADF-STEM image of a 10 u.c. SrIrO3 film is shown in panel (b), where atomically sharp interfaces with the substrate and the cap layer are visible. The sheet resistance R versus temperature T of SrIrO3 films with thicknesses t from 30 to 2 unit cells is shown in Fig. <ref>(c). As the film thickness is reduced, R continuously increases and two different regimes can be identified. For t ≥ 4 u.c., the resistance values are below 25 kΩ and the films show metallic behavior. Thinner films (t ≤ 3 u.c.) have a resistance above 25 kΩ and display insulating behavior. Hence, it is apparent that SrIrO3 films undergo a sharp metal-insulator transition between 4 and 3 u.c., occurring when the sheet resistance crosses h/e^2 = 25 kΩ. This is in good agreement with photoemission measurements, which show the disappearance of the Fermi cutoff below 4 u.c. and the opening of a charge gap <cit.>. In two dimensions, the resistance value h/e^2 corresponds to the limit k_Fl_e∼1, where k_F is the Fermi wavevector and l_e is the mean free path, marking the transition from weak to strong localization <cit.>.In the (semi)metallic regime, the films show bad metallic behavior in the high temperature range, consistent with previous reports <cit.>. The resistance first decreases linearly with temperature until T_min, below which an upturn is observed. In addition, the residual resistance ratio defined as R(300 K)/R(T_min) is rather low for all thicknesses (∼1.2). Such anomalous metallic behavior is often observed in materials that are bordering a Mott transition. Upon decreasing the film thickness, the temperature of the resistance minimum T_min increases from 10 K (30 u.c.) to 60 K (4 u.c.) [Figure <ref>(c), inset]. By rescaling the curves in panel (c) for the film thickness, we obtain the resistivity ρ as function of temperature as shown in Fig. <ref>(d). In the semimetallic regime, the curves collapse and display similar behavior apart from the increasingly strong upturn at low temperature. Interestingly, the resistance upturn is accompanied by an increase of the Hall coefficient R_H, as shown in the supplementary material <cit.>. This is most likely related to the band structure as underscored by angle-resolved photoemission spectroscopy (ARPES) measurements, where multiple heavy hole and light electron bands were identified <cit.>. Since the top energy of several hole bands was measured to lie just below the Fermi level, these bands will be progressively depopulated with decreasing temperature, increasing R_H and the resistance. Transport in ultrathin (2 and 3 u.c.) films occurs in a strongly localized regime with a sheet resistance well in excess of h/e^2. For the 3 u.c. film, the conductivity σ can be well described by a variable range hopping (VRH) type of conduction. In this case, electrons hop between localized states and the conductance is given by σ = Cexp[-(T_0/T)^α], where T_0 depends on the density of localized states and the spread of their wave functions <cit.>. VRH conductivity can be of either Mott or Efros-Shklovskii type, which for a 2D system translates into exponents α = 1/3 and 1/2, respectively <cit.>. 
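For reference, a minimal sketch of such a VRH fit is given below. The data arrays are mock placeholders standing in for the measured sheet conductance σ = 1/R of the 3 u.c. film; only the fitting function itself comes from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def vrh_conductivity(T, C, T0, alpha):
    """sigma = C * exp[-(T0/T)^alpha]; alpha = 1/3 (Mott, 2D) or 1/2 (Efros-Shklovskii)."""
    return C * np.exp(-(T0 / T)**alpha)

# Placeholder data: a hypothetical temperature grid and mock conductance values.
T_data = np.linspace(5, 100, 40)
sigma_data = vrh_conductivity(T_data, 1e-4, 300.0, 0.5) \
             * (1 + 0.02 * np.random.randn(T_data.size))

popt, pcov = curve_fit(vrh_conductivity, T_data, sigma_data,
                       p0=(1e-4, 100.0, 0.5))
C_fit, T0_fit, alpha_fit = popt
print(f"alpha = {alpha_fit:.2f}")   # ~0.5 favors Efros-Shklovskii VRH in 2D
```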
The fit to the data yields an exponent α = 0.57, which is in good agreement with the latter, suggesting the existence of a Coulomb gap. On the other hand, the R(T) of the 2 u.c. film can be well fitted by an Arrhenius-type behavior with R ∝ exp(E_g/2k_BT), which yields an energy gap of approximately E_g = 95 meV. To probe changes in the electronic structure and spin relaxation while approaching the transition point, we perform magnetotransport measurements. Figure <ref>(a) shows the out-of-plane magnetoconductance Δσ in units of e^2/πh measured at 1.5 K for film thicknesses ranging from 30 to 4 unit cells. As shown in the supplementary material, the magnetoconductance is nearly isotropic <cit.>. In the limit of large thickness, the magnetoconductance is negative and quadratic and displays a cusp around B = 0 T, as reported in other works <cit.>. However, a crossover from negative to positive values occurs as we approach the MIT. We attribute this behavior to weak (anti)localization, the interference of quantum coherent electronic waves undergoing diffusive motion (in the presence of spin-orbit interaction). In this picture, the magnetic field breaks time-reversal symmetry and destroys the phase coherence of closed paths, suppressing localization effects. To investigate this scenario, we fit the curves with the Maekawa-Fukuyama formula [red lines in Fig. <ref>(b)] in a diffusive regime, which describes the change of the conductivity with magnetic field for negligible Zeeman splitting <cit.> and is given by Δσ(B)/σ_0 = -ψ(1/2 + B_e/B) + (3/2)ψ(1/2 + (B_φ + B_so)/B) - (1/2)ψ(1/2 + B_φ/B) - ln((B_φ + B_so)/B_e) - (1/2)ln((B_φ + B_so)/B_φ), where ψ is the digamma function, σ_0 = e^2/πh is the quantum of conductance, and B_e, B_φ and B_so are the effective fields related to the elastic, inelastic and spin-orbit relaxation lengths, respectively. Since all the films have similar resistivity values, we fix B_e to 1.2 T, corresponding to an elastic length of approximately 11.7 nm and a carrier density of the order of 10^19 cm^-3. This value yields the best fits over the entire thickness range (see supplementary material <cit.>) and is consistent with a Drude contribution following our analysis of the semimetallic electronic structure <cit.>. For the 30, 15, and 6 u.c. films, a B^2 component was fitted at high fields and subtracted to account for the classical orbital magnetoconductance <cit.>. The scattering lengths l_i are related to the effective fields by B_i = ħ/4el_i^2, and their fitted values are shown in Fig. <ref>(c). The lengths are larger than the film thickness, indicating that a 2D model is appropriate. The extracted parameters show a crossover from l_φ > l_so for the thicker samples (30, 15 u.c.) to l_φ < l_so for the thinner ones (6, 5, 4 u.c.), capturing the crossover from negative (weak antilocalization) to positive (weak localization) magnetoconductance as the film thickness is reduced. A close look at the thickness dependence of l_φ reveals deviations from the expected behavior considering only electron-electron corrections to the weak localization expression (1/l_ε^2 ∼ R log k_F l_e, where l_ε is the length associated with electron-electron corrections).
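Before turning to the role of spin fluctuations, we note that the fitting function quoted above is straightforward to implement. A minimal transcription follows, assuming the parenthesization written above; the default B_e = 1.2 T mirrors the fixed value used in the fits.

```python
import numpy as np
from scipy.special import digamma

def maekawa_fukuyama(B, B_phi, B_so, B_e=1.2):
    """Weak (anti)localization magnetoconductance Delta sigma / (e^2/pi h),
    transcribed from the Maekawa-Fukuyama expression quoted above.
    B and the effective fields B_i = hbar/(4 e l_i^2) are in tesla (B > 0)."""
    B = np.asarray(B, dtype=float)
    return (-digamma(0.5 + B_e / B)
            + 1.5 * digamma(0.5 + (B_phi + B_so) / B)
            - 0.5 * digamma(0.5 + B_phi / B)
            - np.log((B_phi + B_so) / B_e)
            - 0.5 * np.log((B_phi + B_so) / B_phi))

def field_to_length(B_i):
    """Scattering length l_i [m] from the effective field B_i = hbar/(4 e l_i^2)."""
    hbar, e = 1.054571817e-34, 1.602176634e-19
    return np.sqrt(hbar / (4 * e * B_i))
```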
To correctly describe the physics at play, one needs to include diffusive spin fluctuations which, when sufficiently large, can set the inelastic scattering length, leading to an effective inelastic scattering time given by <cit.> 1/τ_φ = 1/τ_ε + (2/3)(1/τ_sf), where τ_φ is related to the energy relaxation time τ_ε and to the spin fluctuation time τ_sf (l_i^2 = Dτ_i, where D is the diffusion constant). Since 1/l_sf^2 is proportional to the paramagnetic susceptibility χ(t), we can qualitatively track the variation of χ by studying the thickness dependence of l_φ. Figure <ref>(d) shows the relative susceptibility χ(t)/χ(30 u.c.) as a function of thickness. The increase of χ at low thicknesses is characteristic of a magnetic transition. We note that the transition from negative to positive magnetoconductance is set by the relative magnitude of l_ε and l_sf. Near the transition point, l_sf < l_ε, i.e., spin fluctuations are large, leading to a positive magnetoconductance due to weak localization. In the limit of large thickness, l_sf > l_ε, l_so. Here, both electron-electron interactions and weak antilocalization contribute to the negative magnetoconductance. Structural studies have shown that octahedral coupling at the SrTiO3/SrIrO3 interface suppresses the bulk octahedral rotations in the SrIrO3 film for t ≤ 3 u.c., enhancing magnetic interactions <cit.>. Within this view, the increase of χ as the film thickness is reduced can be understood as an increased fractional contribution from the less distorted magnetic interfacial region. The film encapsulation could further enhance this effect, since it presents two interfaces with the cubic SrTiO3. Further insights into the anomalous behavior in the semimetallic state and the electronic structure near the MIT can be obtained by measuring the DOS across the Fermi energy E_F by STS measurements. A topographic STM image [inset Fig. <ref>(a)] acquired on a 10 u.c. SrIrO3 film shows terraces and steps with height equal to one unit cell, confirming the layer-by-layer growth mode and showing that the surface is single-terminated. Figure <ref>(a) shows the differential conductance (dI/dV) spectra acquired at 4 K on three different samples with film thicknesses of 4, 6 and 10 unit cells. The spectra taken in the large energy window [Fig. <ref>(a)] show V-shaped behavior with a linear dependence of the DOS for both occupied (negative energies) and unoccupied (positive energies) states. As shown in Fig. <ref>(b), the minimum of the spectra is at zero energy (i.e., at E_F), and while the spectra taken on the 6 and 10 u.c. films exhibit finite DOS, the 4 u.c. sample shows zero DOS at E_F. Therefore, the evolution of the DOS at E_F reflects the approach of the MIT, where the 4 u.c. film is on the verge of a gap opening. V-shaped DOS has previously been observed (1) in systems with two-dimensional Dirac surface states, such as germanene/Pt(111) and graphene/SiC <cit.>, and (2) in the pseudogap phase of lightly-doped Mott insulators such as cuprates <cit.>. A Dirac cone is not expected in this system due to the breaking of n-glide symmetry by epitaxial constraint, as was shown previously for SrIrO3 grown on GdScO3 <cit.>. However, Sr2IrO4 exhibits similar V-shaped behavior when doped with La^3+, showing zero DOS at E_F <cit.>, as observed for the 4 u.c. SrIrO3 film. The resemblance could stem from both SrIrO3 and doped Sr2IrO4 being in close proximity to a metal-insulator transition, although on opposite sides of the phase boundary.
However, further investigation is required to fully address the exact nature of the V-shaped DOS of SrIrO3 thin films. To study the electronic and magnetic structure of SrIrO3 in the two-dimensional limit and gain additional information about the insulating state, we perform first-principles calculations. We first consider how the properties of bulk SrIrO3 evolve as a function of the Coulomb repulsion U. At low U, the system shows a nonmagnetic metallic state topologically protected by time-reversal symmetry (TRS) <cit.>. Upon increasing the value of U, a canted G-type antiferromagnetic (AFM) metallic state with a net in-plane magnetic moment emerges <cit.>. A further increase of U opens a gap, leading to a G-type AFM insulating state <cit.>, as in SrIrO3/SrTiO3 superlattices <cit.>. Since both U and the breaking of TRS are required to open the gap, the SrIrO3 thin films can be regarded as insulators located in the intermediate region between a Slater-type and a Mott-type insulator. The same qualitative results were obtained in other Ir compounds <cit.>. When moving from bulk SrIrO3 to SrIrO3/SrTiO3 heterostructures, compressive strain, a reduction of the bandwidth, and an increase of the Coulomb repulsion have to be taken into account. Compressive strain (∼1%) favors the metallicity <cit.> because of the increased bandwidth <cit.>. The other two effects favor the insulating state <cit.>, and both are needed to observe the semimetallic or insulating phase in SrIrO3 ultrathin films. We focused on the thickness range in the vicinity of the MIT and computed the band structure for the 4 and 3 u.c. films for U = 1.50 eV, which are shown together with the corresponding DOS in Fig. <ref>(a) and (b), respectively. The reduction of the bandwidth when going from 4 to 3 u.c. results in a localization of the carriers and triggers a transition from a semimetallic to an AFM insulating state. Even for a single layer of SrIrO3 on SrTiO3, the nonmagnetic case is found to be metallic, and AFM ordering is required for the opening of a gap <cit.>. The electronic structure of the 4 u.c. film shows gap-closing behavior, consistent with STS. In the case of 3 u.c., the gap is 60 meV; its precise value, however, depends crucially on many effects, such as octahedral distortions, magnetic order, strain, connectivity and Coulomb repulsion. Near the Fermi level, the DOS is dominated by the 5d t_2g contribution, as in bulk SrIrO3. Hence, by reducing the thickness, we approach a state closer to J_eff = 1/2, as in Sr2IrO4. However, while the t_2g unoccupied bandwidth is comparable to Sr2IrO4, the occupied part shows a mixed J_eff = 1/2, 3/2 behavior rather than a pure J_eff = 1/2 picture. In conclusion, we have shown that the spin-orbit semimetal SrIrO3 can be driven into a correlated insulating state in the two-dimensional limit. At low temperature, quantum corrections to the conductivity indicate significant changes in the scattering mechanisms in the semimetallic regime near the transition point. The divergence of χ is indicative of the opening of a Mott gap and the concomitant enhancement of magnetic order, in agreement with previous reports of fluctuations in the spin, charge, and orbital degrees of freedom in systems that are approaching a Mott transition <cit.>. This is corroborated by the near-isotropy of the magnetoconductance, which points towards magnetic scattering in the semimetallic regime.
Such isotropy is also observed in thicker films, indicating that there is already a fair amount of magnetic fluctuations in the limit of large thickness, which is understandable in view of the fact that SrIrO3 is bordering a Mott transition. It is also consistent with previous reports of a diverging magnetic susceptibility at low temperatures and the possibility of exchange-enhanced paramagnetism <cit.>. The close proximity of SrIrO3 to a correlated insulating state is further corroborated by STS measurements, showing a V-shaped dI/dV behavior similar to that of the lightly-doped J_eff = 1/2 Mott insulator Sr2IrO4. In addition, the 4 u.c. film reflects the onset of the gap opening, as it shows zero DOS at E_F, being at the border of the MIT. Our DFT calculations reproduce the metal-insulator transition for U = 1.50 eV and show that antiferromagnetism develops concomitantly with the opening of a gap. This work was supported by The Netherlands Organisation for Scientific Research (NWO/OCW) as part of the Frontiers of Nanoscience program (NanoFront) and by the Dutch Foundation for Fundamental Research on Matter (FOM). The research leading to these results has received funding from the European Research Council under the European Union's H2020 programme/ERC Grant Agreement n. [677458]. Support from the French National Research Agency (ANR), project LACUNES No. ANR-13-BS04-0006-01, is gratefully acknowledged. The authors thank R. Claessen, P. Schütz, D. Di Sante, G. Sangiovanni and A. Santander-Syro for useful discussions.
http://arxiv.org/abs/1706.08901v1
{ "authors": [ "D. J. Groenendijk", "C. Autieri", "J. Girovsky", "M. Carmen Martinez-Velarte", "N. Manca", "G. Mattoni", "A. M. R. V. L. Monteiro", "N. Gauquelin", "J. Verbeeck", "A. F. Otte", "M. Gabay", "S. Picozzi", "A. D. Caviglia" ], "categories": [ "cond-mat.str-el", "cond-mat.mes-hall" ], "primary_category": "cond-mat.str-el", "published": "20170627151054", "title": "Spin-orbit semimetal SrIrO$_3$ in the two-dimensional limit" }
[email protected] Variable Energy Cyclotron Centre, HBNI, 1/AF Bidhan Nagar, Kolkata 700 064, India We study the system-size dependence of the Knudsen number, a measure of the degree of thermalization, for a hadron resonance gas that follows the Lattice-QCD equation of state at zero chemical potential. A comparison between the Knudsen numbers for AuAu collisions at RHIC and for a hadron gas of size similar to the size of high-multiplicity pp events at the LHC reassures the applicability of hydrodynamics in interpreting the features of particle production in high-multiplicity pp events. 13.85.Hd, 25.75-q Thermalization in small system of hadron gas and high-multiplicity pp events Premomoy Ghosh December 30, 2023 ============================================================================== § INTRODUCTION The Quark-Gluon Plasma (QGP), a thermalized partonic matter predicted in quantum chromodynamics (QCD), received its experimental endorsement <cit.> from ultra-relativistic AuAu collisions at RHIC, where pp collisions served as the baseline in the search for signals of the collective medium. The experiments at the LHC have raised the significance of the study of multiparticle production in pp collisions by extracting long-range two-particle angular correlations and obtaining the characteristic flow parameters in high-multiplicity events of pp collisions at √(s) = 7 and 13 TeV <cit.>. In spite of the strong experimental signatures, the collectivity in pp collisions is disputed on the issue of thermalization in a small system of short lifetime that may be formed in the pp events. At this stage, it is pertinent to assess the degree of thermalization of the system formed in high-multiplicity pp events and compare it with that of the medium formed in relativistic heavy-ion collisions, where local thermodynamic equilibrium is undisputedly established through satisfactory descriptions of data by relativistic hydrodynamics <cit.>. The study of the bulk thermodynamic properties of strongly interacting matter in the QCD framework has been made possible by the formulation of Lattice QCD (LQCD) <cit.>. Recent LQCD simulations at finite temperature reveal the existence of a de-confined partonic phase at high temperature and a confined hadronic phase at low temperature. The hadronic phase of the QCD medium is successfully addressed also by the Hadron Resonance Gas (HRG) model <cit.>, in a statistical thermodynamic framework. The ideal HRG, formulated with the discrete mass spectrum of identified hadrons and resonances, has been reinforced with ab-initio confirmation <cit.> from LQCD. Also, as-yet unmeasured higher-mass hadron states, provided by the exponentially growing continuum mass spectrum proposed <cit.> by R. Hagedorn, make an important contribution to the equation of state (EoS) <cit.> below the critical temperature (T_c) for the QCD change of phase. In this article, to address the cardinal question of the degree of thermalization in the small system of high-multiplicity pp events, we consider the HRG with the Hagedorn mass spectrum. The excluded volume effect <cit.> is implemented in the HRG by considering a hard-core radius for the constituents of the system. Finally, we study the system-size dependence of the Knudsen number for a finite-size hadron gas, complying with the LQCD EoS, to assess the degree of thermalization in the hadron gas that may be formed in the final stage of high-multiplicity pp events.
§ HADRON RESONANCE GAS MODEL The partition function of the i-th particle in a grand canonical ensemble of the ideal HRG can be written as <cit.>: ln Z_i^id = ± (V g_i/2π^2) ∫_0^∞ p^2 dp ln{1 ± exp[-(E_i-μ_i)/T]}, where E_i = √(p^2+m_i^2) and μ_i = B_iμ_B + S_iμ_S + Q_iμ_Q. The symbols carry their usual meaning. The (+) and (-) signs correspond to fermions and bosons, respectively. The pressure P(T), energy density ϵ(T) and number density n(T) for the ideal hadron resonance gas, at μ = 0, can be written as: P^id(T) = ± ∑_i (g_i T/2π^2) ∫_0^∞ p^2 dp ln{1 ± exp(-E_i/T)}, ϵ^id(T) = ∑_i (g_i/2π^2) ∫_0^∞ [p^2 dp/(exp(E_i/T) ± 1)] E_i, n^id(T) = ∑_i (g_i/2π^2) ∫_0^∞ p^2 dp/(exp(E_i/T) ± 1). The Hagedorn mass spectrum is given by <cit.>: ρ(m) = C θ(m-M_0)/(m^2+m_0^2)^a exp(m/T_H), where T_H is the Hagedorn temperature that determines the slope of the exponential part of the mass spectrum. The effects of repulsive interactions at short distances, particularly significant for density-related observables, are incorporated in a thermodynamically consistent manner through the Van der Waals excluded volume (EV) method <cit.>. The volume of the system is substituted with an effective volume obtained by excluding the sum of the volumes, v = 16πr^3/3, of the constituent hadrons of hard-core radius r. The inclusion of the Hagedorn states in the system of hadrons and resonances naturally leads one to adopt the Boltzmann approximation. The thermodynamic variables for such a system with the excluded volume effect can be written as <cit.>: P^H_EV(T) = κ P^H(T), ϵ^H_EV(T) = κ ϵ^H(T)/(1 + vκ n^H(T)), n^H_EV(T) = κ n^H(T)/(1 + vκ n^H(T)), where κ (≤ 1) is the excluded volume suppression factor, given by κ = exp(-v P^H_EV/T), and P^H(T) = (T/2π^2) ∫ dm ∫_0^∞ p^2 dp exp(-√(m^2+p^2)/T)[∑_i g_i δ(m-m_i) + ρ(m)], ϵ^H(T) = (1/2π^2) ∫ dm ∫_0^∞ p^2 dp √(m^2+p^2) exp(-√(m^2+p^2)/T)[∑_i g_i δ(m-m_i) + ρ(m)], n^H(T) = (1/2π^2) ∫ dm ∫_0^∞ p^2 dp exp(-√(m^2+p^2)/T)[∑_i g_i δ(m-m_i) + ρ(m)]. § RESULTS To implement the finite-size effect in the hadron resonance gas, we start with an infinite (in the thermodynamic sense) system size of hadron gas, incorporating the Hagedorn states and the excluded volume effect, for which the LQCD EoS is matched satisfactorily. We optimize the values of the corresponding parameters, from the ranges suggested in Ref. <cit.>, for the simultaneous consideration of the Hagedorn states and the excluded volume effect. The finite-size effect on the volume and the number of particles, considered infinite in the thermodynamic limit, can be implemented <cit.> by cutting off the low-momentum region in the integral over momentum space. We introduce the finite-size effect <cit.> using a lower limit of momentum, p_cutoff (MeV) = 197π/R (fm), where R is the characteristic system size. We consider the mass table of the PDG in Ref. <cit.> and compare our calculations with the LQCD results of Ref. <cit.>. The temperature dependence of the pressure and the energy density of the hadron gas for infinite as well as for a few representative finite sizes, R = 2.5, 3 and 5 fm, for the different options, 1) ideal HRG, 2) ideal HRG + Hagedorn states, and 3) ideal HRG + Hagedorn states + EV effect, are presented in Figures <ref> and <ref>, respectively.
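A minimal numerical sketch of these expressions follows. It works in the Boltzmann approximation with the pion triplet only (the full calculation sums over the PDG table and the Hagedorn continuum), assumes a hard-core radius r = 0.3 fm purely for illustration, and reads the suppression-factor equation as the fixed point κ = exp(-vκP^H/T), consistent with P^H_EV = κP^H.

```python
import numpy as np
from scipy.integrate import quad

hbarc = 197.327  # MeV fm

def pressure_boltzmann(T, R, states):
    """Boltzmann pressure [MeV^4] of an ideal gas of discrete states
    [(mass MeV, degeneracy)], with the low-momentum cutoff
    p_min = 197*pi/R (R in fm) mimicking a finite system of size R."""
    p_min = hbarc * np.pi / R if np.isfinite(R) else 0.0
    P = 0.0
    for m, g in states:
        integrand = lambda p: p**2 * np.exp(-np.sqrt(p**2 + m**2) / T)
        val, _ = quad(integrand, p_min, 50 * T + 10 * m)
        P += g * T * val / (2 * np.pi**2)
    return P

def ev_suppression(T, P_id, r_hc=0.3):
    """Fixed-point solve of kappa = exp(-v * kappa * P_id / T),
    with v = 16 pi r^3 / 3 converted from fm^3 to MeV^-3."""
    v = 16 * np.pi * (r_hc / hbarc)**3 / 3.0
    kappa = 1.0
    for _ in range(200):
        kappa = np.exp(-v * kappa * P_id / T)
    return kappa

pions = [(139.57, 3.0)]                      # pion triplet only, for illustration
for R in (np.inf, 5.0, 3.0, 2.5):            # system sizes in fm
    P = pressure_boltzmann(160.0, R, pions)
    print(R, P, ev_suppression(160.0, P))
```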
It is clear from the figures that a system of hadrons, resonances and Hagedorn states down to the finite system size of R = 2.5 fm (corresponding to p_cutoff ≈ 250 MeV) can be described by the LQCD EoS at zero chemical potential. To calculate the mean free path, λ = 1/nσ, of the constituents in the thermalized gas of hadrons, resonances and Hagedorn states, one needs to rely on approximations, as the cross-sections for all the involved interactions are not available. Considering the fact that pions are the most abundant constituents in an equilibrated hadron gas, for the present study we approximate the temperature-dependent number density (n^H_EV(T)), given by equation <ref> and depicted in Figure <ref>(a), as representing the pion density, and compare the temperature-dependent mean free path of pions in thermalized hadron gas of different system sizes. The temperature-dependent pion-pion cross-sections in the thermalized pion gas, used for the temperature-dependent mean free path calculations, have been obtained from Ref. <cit.> and are presented in Figure <ref>(b). Figure <ref> shows that the number density varies with the system size of the hadron gas throughout the considered temperature range. A further study in terms of the ratio of the number density for finite (R = 5, 3 and 2.5 fm) and infinite system sizes of the hadron resonance gas reveals (Figure <ref>) that the variation in the number density with the system size decreases with increasing temperature. This feature of the number density for hadron gas of different sizes approaching a common value at high temperature is consistent with the observation <cit.> on the temperature dependence of the thermodynamical variables for similar finite system sizes of hadron gas. As has been pointed out in Ref. <cit.>, the variation with the system size at high temperature disappears due to the dominant population of higher-mass resonances, including the Hagedorn states, which, unlike the low-mass resonances, are not affected by the implementation of the finite-size effect. As can be seen in Figure <ref>, though the mean free path of pions for the hadron gas of different sizes, constrained by the LQCD EoS, differs in the lower region of the temperature scale, it asymptotically approaches the same value of ∼1 fm at T ∼ 160 MeV. The mean free path for hadron gas of different sizes approaching the same value at high temperature can be attributed to the cumulative effect of the shrinking system-size dependence of the number density and the large, system-size-independent interaction cross-sections at high temperature. By comparing the mean free path and the system size, one can ideally assess the possibility of thermalization. However, for a quantitative comparison between the degree of thermalization of the system formed in high-multiplicity pp events and that of the medium formed in relativistic heavy-ion collisions, we calculate the degree of thermalization in terms of the dimensionless Knudsen number, Kn = λ/R, the ratio between the mean free path and the typical size of the system. A small value of Kn, tending to zero, implies a high degree of thermalization approaching the perfect-fluid limit, while a large Kn indicates a system far from thermodynamic equilibrium and not suitable for the application of fluid dynamics.
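Once n and σ are fixed, the Knudsen-number estimate reduces to simple arithmetic. The sketch below keeps the pion term only and uses a single representative cross-section value (an assumption; the actual analysis uses the temperature-dependent σ_ππ of Figure <ref>(b) and the full resonance spectrum, so the printed numbers are purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

hbarc = 197.327  # MeV fm

def pion_density(T, R, m=139.57, g=3.0):
    """Boltzmann pion number density [fm^-3] with the low-momentum cutoff
    p_min = 197*pi/R (pion term of n^H only; illustration)."""
    p_min = hbarc * np.pi / R
    val, _ = quad(lambda p: p**2 * np.exp(-np.sqrt(p**2 + m**2) / T),
                  p_min, 50 * T + 10 * m)
    return g * val / (2 * np.pi**2) / hbarc**3   # MeV^3 -> fm^-3

T = 160.0            # MeV
sigma_pipi = 3.0     # fm^2, representative pi-pi cross section (assumption)
for R in (5.0, 4.0, 3.0, 2.5):
    n = pion_density(T, R)
    lam = 1.0 / (n * sigma_pipi)    # mean free path lambda = 1/(n sigma)
    print(f"R = {R:.1f} fm:  n = {n:.3f} fm^-3,  lambda = {lam:.2f} fm,  Kn = {lam/R:.2f}")
```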
We calculate Kn as a function of the system-size of the considered hadron gas at T = 160 MeV, with λ calculated from different approximations for σ_ππ: (i) all hadrons treated as spheres of radius 1 fm, leading to the total cross-section σ_T = π fm^2 <cit.>; (ii) σ_meson-meson = 4/9 × σ_baryon-baryon and σ_meson-baryon = 2/3 × σ_baryon-baryon, with σ_baryon-baryon = π fm^2 <cit.> (obviously for the discrete masses only); and (iii) a pion gas <cit.>, as depicted in Figure <ref>(b). From Figure <ref>, it is clear that among the considered approximations, cross-section option (iii) gives the most conservative estimate of the system-size dependence of the Knudsen number for the hadron gas, and we therefore consider Kn for this option only in the further study and discussion. As can be seen from Figure <ref>, there is a rapid variation in the value of Kn between the system-sizes R ≈ 4 fm (Kn ≈ 0.28) and 2.5 fm (Kn ≈ 0.48).

The Knudsen number for AuAu collisions at √(s)_NN = 200 GeV has been estimated with hydrodynamic and transport models. In the hydrodynamical approach, the centrality-dependent elliptic flow suggests <cit.> Kn ≈ 0.3 and 0.5 for central and semi-central AuAu events, respectively. A good agreement between the Boltzmann equation for dilute systems and relativistic dissipative fluid dynamics, the main tool for studying the space-time evolution of the bulk matter formed in relativistic heavy-ion collisions at RHIC, has been established <cit.> for Kn ≲ 0.5.

To estimate the degree of thermalization in high-multiplicity pp events, we compare the sizes of the considered hadron gas and of pp events with average charged-particle multiplicity ⟨N_ch⟩ ≈ 136 and higher, which exhibit collective properties in terms of the near-side long-range two-particle angular correlations <cit.> and also yield a p_T-dependent elliptic flow harmonic <cit.>, at √(s) = 7 TeV. The effective emission radius, R′, obtained <cit.> from the Bose-Einstein correlations between pairs of identical bosons of the hadronizing system near kinetic freeze-out, has been parameterized for pp collisions, as a function of N_ch, as R′(⟨N_ch⟩) = a·⟨N_ch⟩^1/3, where a = 0.612 ± 0.007 (stat.) ± 0.068 (syst.) fm for pp collisions at √(s) = 7 TeV. Thus, the high-multiplicity pp event sample with ⟨N_ch⟩ = 136, corresponding to an average effective emission radius R′ ≈ 3.1 fm, may be represented by a system of hadron gas following the LQCD EoS and having a Knudsen number ∼ 0.36. These high-multiplicity pp events, satisfying the Knudsen number criterion, lie well within the limit of applicability of fluid dynamics.

§ SUMMARY

In summary, we have studied a grand canonical ensemble of hadrons, resonances and Hagedorn states at finite system-size, complying with the LQCD equation of state. We find that, for any finite system-size of the HRG that follows the LQCD EoS, the mean free path of pions asymptotically approaches the same value of ∼1 fm at T ≈ 160 MeV. The estimated degree of thermalization of a hadron gas of size comparable to that of high-multiplicity pp events appears consistent with that for AuAu collisions at RHIC. The study supports the applicability of hydrodynamics in interpreting the features <cit.> of multiparticle production in high-multiplicity pp events at the LHC.

§ ACKNOWLEDGEMENTS

PG acknowledges useful discussions with Jan-e Alam, Partha Pratim Bhaduri and Abhijit Bhattacharyya.

99
ref01 I. Arsene et al., BRAHMS Collaboration, Nucl. Phys. A757, 1 (2005).
ref02 B. B. Back et al., PHOBOS Collaboration, Nucl. Phys. A757, 28 (2005).
ref03 J. Adams et al., STAR Collaboration, Nucl. Phys. A757, 102 (2005). ref04 K. Adcox et al., PHENIX Collaboration, Nucl. Phys. A757, 184 (2005). ref05 V. Khachatryan et al., CMS Collaboration, J. High Energy Phys. 09, 091 (2010). ref06 G. Aad et al., ATLAS Collaboration, Phys. Rev. Lett. 116, 172301 (2016). ref07 V. Khachatryan et al., CMS Collaboration, Phys. Lett. B765, 193 (2017). ref08 S. Z. Belenkij and L. D. Landau, Nuovo Cim. Suppl. 3 S10, 15 (1956). ref09 K. G. Wilson, Phys. Rev. D10, 2445 (1974). ref10 F. Karsch, K. Redlich, and A. Tawfik, Phys. Lett. B571, 67 (2003). ref11 A. Andronic, P. Braun-Munzinger, J. Stachel, and M. Winn, Phys. Lett. B718, 80 (2012). ref12 M. Albright, J. Kapusta, and C. Young, Phys. Rev. C90, 024915 (2014). ref13 S. Borsanyi, Z. Fodor, S. D. Katz, S. Krieg, C. Ratti, and K. Szabo, J. High Energy Phys. 01, 138 (2012). ref14 A. Bazavov et al. (HotQCD Collaboration), Phys. Rev. D86, 034509 (2012). ref15 A. Bazavov, H.-T. Ding, P. Hegde, O. Kaczmarek, F. Karsch, E. Laermann, Y. Maezawa, S. Mukherjee et al., Phys. Rev. Lett. 113, 072001 (2014). ref16 S. Borsanyi, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg, and K. K. Szabo, Phys. Lett. B730, 99 (2014). ref17 R. Hagedorn, Nuovo Cimento Suppl. 3, 147 (1965). ref18 A. Majumder and B. Müller, Phys. Rev. Lett. 105, 252002 (2010). ref19 V. Vovchenko, D. V. Anchishkin, and M. I. Gorenstein, Phys. Rev. C91, 024905 (2015). ref20 P. M. Lo, M. Marczenko, K. Redlich and C. Sasaki, Phys. Rev. C92, 055206 (2015). ref21 D. H. Rischke, M. I. Gorenstein, H. Stoecker and W. Greiner, Z. Phys. C51, 485 (1991). ref22 J. Cleymans and H. Satz, Z. Phys. C57, 135 (1993). ref23 R. Hagedorn and J. Rafelski, Phys. Lett. B97, 136 (1980). ref24 K. Redlich and K. Zalewski, arXiv:1611.03746 [nucl-th] (2016). ref25 A. Bhattacharyya, R. Ray, S. Samanta and S. Sur, Phys. Rev. C91, 041901(R) (2015). ref26 K. A. Olive et al., Chin. Phys. C38, 090001 (2014). ref27 F. S. Navarra, M. C. Nemes, U. Ornik and S. Paiva, Phys. Rev. C45, R2552 (1992). ref28 G. S. Denicol, C. Gale, S. Jeon and J. Noronha, Phys. Rev. C88, 064901 (2013). ref29 R. S. Bhalerao, J.-P. Blaizot, N. Borghini and J.-Y. Ollitrault, Phys. Lett. B627, 49 (2005). ref30 H. Drescher, A. Dumitru, C. Gombeaud and J.-Y. Ollitrault, Phys. Rev. C76, 024905 (2007). ref31 P. Huovinen and D. Molnar, Phys. Rev. C79, 014906 (2009). ref32 I. Bouras, E. Molnar, H. Niemi, Z. Xu, A. El, O. Fochler, C. Greiner and D. H. Rischke, Phys. Rev. C82, 024910 (2010). ref33 C. Gombeaud, T. Lappi and J.-Y. Ollitrault, Phys. Rev. C79, 054914 (2009). ref34 V. Khachatryan et al., CMS Collaboration, J. High Energy Phys. 05, 029 (2011).
http://arxiv.org/abs/1706.08679v2
{ "authors": [ "Nachiketa Sarkar", "Premomoy Ghosh" ], "categories": [ "hep-ph", "hep-ex", "nucl-th" ], "primary_category": "hep-ph", "published": "20170627061938", "title": "Thermalization in small system of hadron gas and high-multiplicity pp events" }
Self-Sustaining Caching Stations: Towards Cost-Effective 5G-Enabled Vehicular Networks Shan Zhang, Member, IEEE, Ning Zhang, Member, IEEE, Xiaojie Fang, Student Member, IEEE, Peng Yang, Student Member, IEEE, and Xuemin (Sherman) Shen, Fellow, IEEE Shan Zhang and Xuemin (Sherman) Shen are with the Department of Electrical and Computer Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada, N2L 3G1 (Email:{s327zhan, sshen}@uwaterloo.ca). Ning Zhang is with the Department of Computing Science, Texas A&M University-Corpus Christi, 6300 Ocean Dr., Corpus Christi, Texas, USA, 78412 (Email: [email protected]). X. Fang is with the School of Electronics and Information Technology, Harbin Institute of Technology, Harbin, China (email: [email protected]). Peng Yang is with the School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China (email: [email protected]). This work is sponsored in part by the Nature Science Foundation of China No. 91638204 and the Natural Sciences and Engineering Research Council of Canada.December 30, 2023. ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ In this article, we investigate the cost-effective 5G-enabled vehicular networks to support emerging vehicular applications, such as autonomous driving, in-car infotainment and location-based road services. To this end, self-sustaining caching stations (SCSs) are introduced to liberate on-road base stations from the constraints of power lines and wired backhauls. Specifically, the cache-enabled SCSs are powered by renewable energy and connected to core networks through wireless backhauls, which can realize “drop-and-play” deployment, green operation, and low-latency services. With SCSs integrated, a 5G-enabled heterogeneous vehicular networking architecture is further proposed, where SCSs are deployed along roadside for traffic offloading while conventional macro base stations (MBSs) provide ubiquitous coverage to vehicles. In addition, a hierarchical network management framework is designed to deal with high dynamics in vehicular traffic and renewable energy, where content caching, energy management and traffic steering are jointly investigated to optimize the service capability of SCSs with balanced power demand and supply in different time scales. Case studies are provided to illustrate SCS deployment and operation designs, and some open research issues are also discussed. 
§ INTRODUCTION

Vehicular communication networks hold the promise of improving transportation efficiency and road safety by enabling vehicles to share information and coordinate with each other. Several potential vehicular networking solutions have been proposed, such as the IEEE 802.11p standard and cellular-based techniques <cit.>. Compared with other candidates, cellular-based vehicular networking can benefit from existing cellular network infrastructures to provide ubiquitous coverage and better quality of service (QoS) <cit.>. In fact, 80% of on-road wireless traffic is served by cellular networks <cit.>. Therefore, cellular-based vehicular networking has drawn extensive attention from both academia and industry. Specifically, the 3rd Generation Partnership Project (3GPP) is currently specifying LTE enhancements to support both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications by integrating cellular and device-to-device interfaces <cit.>. The corresponding specification work will be finalized as part of Release 14 in 2017, providing a full set of technological enablers from the air interface to protocols. In addition, extensive LTE-vehicular trial testing is now ongoing in different places, such as Germany and China. In particular, LTE-based vehicular networking has demonstrated its advantages by achieving significant message coverage gains compared with IEEE 802.11 technologies in both high-speed highway and congested urban scenarios <cit.>.

Despite these favorable advantages, cellular networks still face tremendous challenges in meeting the needs of future vehicular communications, the most pressing of which is network capacity enhancement. Currently, on-road wireless traffic accounts for 11% of cellular traffic <cit.>, and it is expected to increase dramatically due to the proliferation of connected vehicles and emerging applications such as autonomous driving, in-car infotainment, augmented reality, and location-based road services. Deploying on-road base stations is the most effective way to increase vehicular network capacity. However, conventional base stations require power lines and wired backhaul connections, making on-road deployment greatly challenging and costly. Furthermore, densification of on-road base stations may also lead to huge energy consumption and bring heavy burdens to backhauls, which can increase operational cost and degrade service performance.

In this article, we first introduce a new type of 5G-enabled on-road base station, namely the self-sustaining caching station (SCS), to enhance vehicular network capacity in a cost-effective way. Specifically, SCSs have three features: (1) powered by renewable energy instead of the power grid, (2) connected to the core network via millimeter wave (mmWave) backhauls, and (3) cache-enabled for efficient content delivery.
By leveraging these 5G technologies, SCSs can be deployed flexibly in a “drop-and-play” manner without wired connections, enable green network operation without additional on-grid energy consumption, and improve delay performance by relieving backhaul pressure. Then, we propose a cost-effective heterogeneous vehicular network architecture, where SCSs are deployed along the roadside to enhance network capacity while conventional macro base stations provide ubiquitous coverage and control signaling. To harness the potential benefits of the proposed network architecture, we further design a hierarchical management framework to deal with challenges such as intermittent renewable energy supply and highly dynamic traffic demand. Particularly, cached contents are updated to maintain the content hit rate considering vehicular mobility, while energy management and traffic steering are performed to balance power demand and supply on both large and small time scales. In addition, case studies are provided to illustrate the implementation of the proposed architecture in detail, including cache size optimization and sustainable traffic-energy management.

The remainder of this article is organized as follows. In Section <ref>, the basics of SCSs are introduced, based on which a heterogeneous vehicular network architecture is proposed. Then, a hierarchical network management framework is designed in Section <ref>, and case studies are provided in Section <ref>. Finally, Section <ref> discusses future research topics, followed by the conclusions in Section <ref>.

§ VEHICULAR NETWORK ARCHITECTURE WITH SCSS

§.§ Cellular-Based Vehicular Networks

With existing infrastructures and state-of-the-art technical solutions, cellular-based vehicular networks hold the promise of providing ubiquitous coverage and supporting comprehensive QoS requirements in different scenarios. For example, the hidden terminal problems of the 802.11p standard can be totally avoided <cit.>. Besides, low latency and high reliability can be guaranteed even in dense traffic scenarios, with effective congestion control and resource management schemes.

In spite of the aforementioned advantages, cellular-based vehicular networks still face significant challenges. With the rapid development of information and communication technologies, many advanced on-road technologies and applications are emerging, such as autonomous driving, augmented reality, infotainment services, and other location-based road services. As these data-hungry applications will bring a surge in wireless traffic, improving vehicular network capacity has become an urgent issue. To this end, on-road base stations need to be deployed. However, conventional base stations are connected in a wired manner due to the requirements of on-grid power supply and backhaul transmission, which causes the following problems. Firstly, conventional base stations rely on power lines and wired backhauls (e.g., optical fiber) to function, resulting in inflexible deployment, especially in areas with undeveloped power lines or fiber connections (such as highways and rural areas). Secondly, the huge energy consumption can cause high operational expenditure as well as environmental concerns. Furthermore, with the growing popularity of multimedia and localized services on the wheel, conventional base stations offering only connectivity might fail to provide satisfactory QoS, due to time-consuming file fetching from remote servers.
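To make the last point concrete, a toy expected-latency model is sketched below; the hit rates and the 5 ms / 60 ms edge and core delays are hypothetical figures, not measurements from this article.

```python
def expected_latency(hit_rate, edge_ms, core_ms):
    """Mean content-delivery latency when a fraction hit_rate of requests
    is served from a local cache and the rest is fetched remotely."""
    return hit_rate * edge_ms + (1.0 - hit_rate) * core_ms

# hypothetical figures: 5 ms from a roadside cache vs 60 ms via remote fetch
for h in (0.0, 0.4, 0.8):
    print(f"hit rate {h:.1f}: {expected_latency(h, 5.0, 60.0):.1f} ms")
```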
§.§ Self-Sustaining Caching Stations (SCS)

Considering the characteristics of vehicular networking and emerging on-road applications, we leverage promising 5G technologies and propose to deploy SCSs in addition to existing cellular networks, to enhance vehicular network capacity in a cost-effective way. Specifically, SCSs are equipped with energy harvesting devices and content caching units, and are connected to the core network through mmWave wireless backhauls.

Equipped with solar panels or wind turbines, SCSs can harvest renewable energy to operate in a self-sustaining manner without the support of the power grid[A typical solar panel with 15% conversion efficiency can harvest 100 W from an area of only 82 cm × 82 cm under rated sunlight radiation, which is sufficient to power a micro (/pico) base station with a power demand of 80 W (/8 W) <cit.>.]. Exploiting renewable energy as a supplementary or alternative power source is an inevitable trend in the 5G era and beyond, as wireless network energy efficiency is expected to improve by 1000 times <cit.>. In addition, renewable energy harvesting can liberate network deployment from power lines. Wireless backhaul can be supported by mmWave wireless communication technologies. With large unlicensed bandwidth, mmWave bands can realize broadband wireless communication based on massive multiple-input multiple-output (MIMO) and beamforming technologies <cit.>. Therefore, SCSs, which combine both energy harvesting and mmWave backhaul techniques, can be deployed in a “drop-and-play” manner with no wired constraints.

Content caching empowers SCSs to store popular contents at the edge of the network and thus reduce duplicate transmissions from remote servers. As a matter of fact, most on-road mobile traffic is now generated by video streaming and map services, which are responsible for 80% and 15% of total traffic, respectively. The popularity of video contents has been found to follow a power-law distribution. Accordingly, caching popular video contents in SCSs can effectively offload traffic from existing cellular systems. Moreover, the emerging on-road applications are expected to be location-based with concentrated requests, which further makes a strong case for content caching. In addition to capacity enhancement, caching can also reduce transmission latency and relieve backhaul burdens, with contents stored closer to end users. Furthermore, caching schemes can be devised with respect to specific objectives, such as mobility-aware caching. Specifically, contents can be pre-fetched and stored in the next cells before vehicles conduct handover, to realize smoother handover under high vehicle mobility. By combining these 5G technologies, SCSs can bring the three-fold benefits of flexible deployment, green operation and enhanced QoS, paving the way to cost-effective vehicular networking.

§.§ 5G-Enabled Heterogeneous Vehicular Network Architecture

With SCSs integrated, a heterogeneous vehicular network architecture is formed, as shown in Fig. <ref>. The conventional macro base stations (MBSs) and small cell base stations (SBSs) are connected with high-speed wired backhauls and powered by the conventional power grid, and mainly provide network coverage and control for reliability. Meanwhile, the SCSs are densely deployed for capacity enhancement, and mainly provide high-speed data access based on stored contents. Furthermore, V2V communications are also enabled through device-to-device (D2D) links.
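As a quick consistency check of the footnote's figures, the sketch below computes the harvested power of the quoted panel, assuming the standard 1000 W/m² rated irradiance (an assumption, since the footnote does not state it):

```python
def solar_harvest_w(side_m=0.82, efficiency=0.15, irradiance_w_m2=1000.0):
    """Harvested power of a square solar panel under rated irradiance."""
    return side_m**2 * efficiency * irradiance_w_m2

# roughly 100 W, in line with the footnote's micro-BS (~80 W) figure
print(f"{solar_harvest_w():.0f} W")
```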
The control and user planes are separated (i.e., C/U plane separation) for reliable and flexible access. Specifically, vehicles maintain dual connectivity: one link with MBSs for signaling and control information, and the other with SCSs for high-data-rate transmissions or with other vehicles for instant message exchange. As MBSs can provide ubiquitous signaling coverage with a large cell radius, such a separation architecture can better support vehicle mobility with less frequent handover.

The proposed architecture can support both safety-related and non-safety-related vehicular applications. For the safety-related use case, critical-event warning messages (such as collision or emergency stop) can be exchanged locally via V2V communications with extremely low latency. For the non-safety-related use cases, MBSs and SCSs can enable better driving experiences through services such as road condition broadcast, parking assistance and in-car infotainment. In fact, the non-safety-related applications can be data hungry, and account for more than 90% of on-vehicle traffic <cit.>. Accordingly, we mainly focus on V2I (vehicle-to-MBS and vehicle-to-SCS) communications, and investigate cost-effective solutions to accommodate the increasing vehicular traffic demand.

With SCSs offloading traffic from MBSs, the service process for vehicle users is as follows. A vehicle user can be directly served if its required content is stored at the associated SCS, which is called the content hit case, as shown in Fig. <ref>. Otherwise, the vehicle user is served by the MBSs, which is called the content miss case. To serve the content miss users, the associated MBS needs to fetch contents from remote data centers via wired backhauls, following conventional cellular communication technologies. With sufficient cache size and well-designed caching schemes, deploying SCSs can effectively reduce the traffic load of MBSs. Furthermore, the content hit vehicle users can enjoy better quality of experience (QoE) with lower end-to-end delay. As such, the proposed network architecture can provide high capacity for vehicular communications at lower cost.

§ HIERARCHICAL NETWORK MANAGEMENT FRAMEWORK

§.§ Management Challenges

§.§.§ Network Heterogeneity

MBSs and SCSs exhibit distinct features with respect to coverage, user capacity, content access, etc. MBSs guarantee ubiquitous coverage with a large cell radius (e.g., several kilometers), and hence the associated users enjoy less handover when moving at high speed. However, the large coverage radius may also bring massive connections to each MBS. As a result, MBSs can only provide limited radio resources to each vehicle user at low transmission rates. On the contrary, each SCS covers a relatively small area and serves fewer vehicle users at high transmission rates. Besides, SCS users can get files without backhaul transmissions, which further reduces end-to-end delay. Nevertheless, SCSs mainly target popular file transmission, and their small coverage radius may cause frequent handover. In addition to the heterogeneity of network infrastructures, vehicular services span a wide range with heterogeneous QoS requirements. For example, safety and control messages are delay-sensitive but occupy limited radio resources, whereas non-safety-related applications such as social networking on the wheel and map downloading require large bandwidth but can endure longer delay.
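A minimal Monte-Carlo sketch of this content hit/miss split, assuming a Zipf popularity profile and a cache holding the most popular files (the library size, cache size and exponent below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def offload_split(n_requests, n_files, cache_size, gamma):
    """Split requests into content-hit (SCS-served) and content-miss
    (MBS-served, fetched over the wired backhaul) under Zipf popularity."""
    w = np.arange(1, n_files + 1, dtype=float) ** (-gamma)
    reqs = rng.choice(n_files, size=n_requests, p=w / w.sum())
    hits = np.count_nonzero(reqs < cache_size)  # top-`cache_size` files cached
    return hits, n_requests - hits

hit, miss = offload_split(10_000, 1000, 100, 1.0)
print(f"SCS-served: {hit}, MBS-served: {miss}")
```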
The heterogeneity of network resources and traffic demand needs to be taken into consideration for user association and resource management.

§.§.§ Highly Dynamic Traffic Demand

On-road wireless traffic is highly dynamic in both the time and spatial domains due to variations of vehicle density. For example, the traffic volume during rush hours can be 90 times that in the late night, while the traffic volume in one direction can be 4 times that of the opposite direction on the same road segment <cit.>. Such traffic non-uniformity poses great challenges to network management. Traffic bursts during rush hours may lead to service outage due to limited network resources, whereas network resources cannot be fully utilized during off-peak periods. Besides, spatial traffic imbalance may lead to congestion in some cells while resources are underutilized in other cells, degrading both service quality and network efficiency. In addition to traffic volume variation, the popularity distribution of contents also varies with time, and thus SCSs need to update their content caches to maintain a high content hit rate.

§.§.§ Intermittent Energy Supply

Unlike the conventional power grid, renewable energy arrives randomly in an intermittent manner, which is likely to mismatch traffic demand. For example, solar-powered SCSs cannot provide service well after sunset, when the vehicular networks may still be heavily loaded. On the contrary, on-road traffic can be very light at noon, when solar energy is harvested at its peak rate. The unbalanced power demand and supply can cause energy outage as well as battery overflow, which degrades system reliability and also leads to energy waste. Accordingly, energy sustainability is critical to the proposed network paradigm, which requires intelligent network management to minimize the probability of energy outage and overflow.

In addition to the above-mentioned challenges, there are other issues that need to be addressed, such as vehicle mobility and time-varying mmWave backhaul capacity. To summarize, the network should fully utilize heterogeneous network resources to provide reliable on-demand service, so as to minimize operational cost while meeting the differentiated QoS requirements of on-road mobile applications.

§.§ Hierarchical Network Management

To address the above-mentioned challenges, we propose a hierarchical network management framework, as shown in Fig. <ref>. The proposed framework includes three main components: energy management, content caching, and traffic steering. Furthermore, network management is conducted on both large (e.g., minutes or hours) and small (e.g., seconds) time scales, with different strategies.

Energy Management mainly deals with the randomness of renewable energy supply. Specifically, we propose dynamic SCS sleeping and Radio Frequency (RF) power control to reshape the renewable energy supply by manipulating the process of charging and discharging. Notice that the power consumption of an SCS consists of two parts: (1) a constant part that is independent of traffic load, and (2) RF power, which scales with traffic demand through adjustment of the transmit power level or the number of utilized subcarriers. RF power control can reduce the RF power consumed by wireless transmission, while dynamic SCS sleeping can further reduce the constant part by completely deactivating the SCS. Although dynamic SCS sleeping is more effective for power saving, frequent switching may cause additional cost.
Thus, SCS sleeping can be performed on the large time scale, and each active SCS can then further adjust its RF power on the small time scale. Hierarchical energy management can reshape the renewable energy supply in the time domain to match the power demand at SCSs. For example, SCSs with insufficient energy can switch to sleep mode, while active SCSs with oversupplied energy can enlarge their transmit power to offload more vehicular traffic. In this way, SCSs can achieve energy-sustainable operation with balanced power demand and supply.

Content Caching schemes are critical for system performance, due to the limited storage capacity and constrained mmWave backhauls. Specifically, we consider two design objectives, i.e., content hit rate and mobility support. The content hit rate determines the maximal amount of traffic offloaded from MBSs to SCSs, which reflects the service capability of SCSs. Meanwhile, mobility-aware caching can be implemented to realize seamless handover, where contents are pro-actively fetched and stored at candidate cells based on handover prediction <cit.>. To realize these two objectives, the cache can be divided into two parts: one for popular contents to guarantee the content hit rate[Storing the most popular contents maximizes the content hit rate if SCSs do not cooperate with each other <cit.>.], and the other for mobility-aware caching. Notice that mobility-aware caching requires frequent content fetching on the same time scale as vehicle handover, whereas the content popularity distribution may vary at a relatively slow pace. As the capacity of the mmWave backhaul is constrained and varies dynamically with channel conditions, mobility-aware caching should be conducted promptly on the small time scale, whereas the popular contents can be updated on the large time scale, opportunistically based on channel status. Furthermore, each SCS should update its content based on its own location, since on-road mobile traffic requests can show location-based popularity.

Traffic Steering further reshapes the traffic distribution to match the renewable energy supply, i.e., energy-aware load balancing. To this end, traffic offloading and content delivery are performed on different time scales, corresponding to the energy management operations. On the large time scale, traffic offloading optimizes the amount of traffic served by each active SCS based on its renewable energy supply. For instance, SCSs with lower battery levels can serve fewer vehicle users, and vice versa. On the small time scale, energy-aware content delivery optimizes the transmission scheduling based on the SCS transmit power, to further improve QoS performance. For example, the delivery of best-effort contents can be delayed when the transmit power is reduced, while SCSs can pro-actively push popular contents to vehicle users before requests when renewable energy is oversupplied. In essence, traffic offloading tunes the traffic load of each SCS (i.e., spatial traffic reshaping), while content delivery further adjusts the traffic load at each time slot (i.e., temporal traffic reshaping). As such, traffic demand can be balanced with respect to the renewable energy supply status.

Notice that these three operations jointly affect the performance of SCSs. For each SCS, content delivery control should be conducted based on the available battery, stored contents, and offloaded traffic status, as shown in Fig. <ref>.
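The following sketch illustrates the two time scales of the energy management just described; the threshold rule and all power figures are hypothetical stand-ins for an actual controller design.

```python
def plan_mode(battery_wh, slot_h, p_const_w, harvest_w):
    """Large-time-scale decision: sleep if the battery plus the expected
    harvest cannot cover the load-independent power for the next slot."""
    return "sleep" if battery_wh + harvest_w * slot_h < p_const_w * slot_h else "active"

def rf_power(battery_wh, p_rf_max_w, threshold_wh):
    """Small-time-scale RF power control: scale transmit power with the
    battery level, capped at the hardware maximum."""
    return p_rf_max_w * min(1.0, battery_wh / threshold_wh)

print(plan_mode(battery_wh=2.0, slot_h=1.0, p_const_w=30.0, harvest_w=10.0))  # -> sleep
print(rf_power(battery_wh=40.0, p_rf_max_w=50.0, threshold_wh=80.0))          # -> 25.0 W
```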
Therefore, the joint optimization of caching, energy and traffic management can help to improve system performance, at the price of higher operational complexity.

§ CASE STUDIES

Under the proposed management framework, many implementation problems still need to be addressed, such as caching design and intelligent energy and traffic management. In this part, we introduce two specific design examples on cache size optimization and sustainable traffic-energy management, respectively. Numerical results are presented to offer insights into practical network deployment and operation. We consider a two-way highway scenario where SCSs are deployed regularly with a coverage radius of 500 m. The file library consists of 1000 files whose popularity follows a Zipf distribution with exponent γ_f. The headway between neighboring vehicles follows an exponential distribution with parameter λ_v. In fact, λ_v reflects the vehicle density, and a larger λ_v characterizes denser vehicle scenarios. We assume all vehicles are greedy sources with an average data rate requirement of 10 Mbps, and each SCS can simultaneously serve at most 10 vehicle users due to radio resource limitations.

§.§ Cache Size Design

Fig. <ref> illustrates the amount of traffic that can be offloaded to each SCS under different traffic densities λ_v and content popularity distributions γ_f[γ_f=0.56 comes from real data measurements of YouTube video streaming <cit.>, while γ_f=1 can describe location-based services (e.g., map downloading) whose requests may present higher similarity.], which first increases but then levels off as the cache size increases. The reason is that the amount of offloaded traffic is also constrained by the available radio resources[Notice that each SCS can simultaneously serve at most 10 users, each with a data rate of 10 Mbps.]. Accordingly, the system performance can be divided into two regions, i.e., the cache-constrained region and the radio-resource-constrained region, as shown in Fig. <ref>. In the cache-constrained region, the content hit rate is low and fewer vehicle users can be offloaded to SCSs, which corresponds to the non-saturated case with underutilized radio resources. As the cache size increases, more users can be offloaded to SCSs with a higher content hit rate. Accordingly, the traffic of SCSs becomes saturated, and the SCS throughput no longer increases due to the limitation of available radio resources. The obtained results reveal the Pareto optimality of cache size and SCS density, and offer insights into practical network deployment. For example, the optimal cache size should be larger than 31 files when the SCS coverage is 500 m, the vehicle density is 0.01 /m, and the popularity parameter is γ_f = 1. Furthermore, the cost-optimal combination of cache size and SCS density for a given network capacity can also be found, given the cost functions of cache size and SCSs.

§.§ Sustainable Traffic-Energy Management

To reveal the importance of sustainable traffic-energy management, we study the service capability of the SCS under different traffic-energy management schemes. The greedy scheme is adopted as a baseline, where the SCSs always stay active and work at the maximal transmit power. With the sustainable traffic-energy management, an SCS goes to sleep if the available energy is insufficient to support its constant power consumption; otherwise it stays active and adjusts the transmit power and offloaded traffic amount based on the instant energy arrival rate.
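Returning to the cache size design above, the two regions can be sketched as the minimum of a cache-constrained term and a radio-resource cap; the parameters follow the setup above, but this simple model is an assumption-laden sketch and is not expected to reproduce the paper's figures exactly.

```python
import numpy as np

def offloaded_traffic(cache_size, n_files=1000, gamma=1.0,
                      max_users=10, rate_mbps=10.0, lam_v=0.01, span_m=1000.0):
    """Offloaded traffic per SCS (Mbps): Zipf hit rate times the mean number
    of vehicles in coverage (lam_v per metre over an assumed 1000 m span),
    capped by the radio-resource limit of max_users simultaneous users."""
    w = np.arange(1, n_files + 1, dtype=float) ** (-gamma)
    hit_rate = w[:cache_size].sum() / w.sum()
    mean_vehicles = lam_v * span_m
    return min(hit_rate * mean_vehicles, max_users) * rate_mbps

for c in (10, 31, 100, 300):
    print(c, round(offloaded_traffic(c), 1))
```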
The redundant energy is saved in the battery for future use, and the battery capacity is considered large enough to avoid overflow. Fig. <ref>(a) illustrates the normalized traffic and energy profiles. Specifically, the two peaks of the traffic profile correspond to the on-road rush hours in the morning and afternoon, respectively. Meanwhile, solar energy harvesting is considered, and the daily energy arrival rate is modeled as a sine function with a peak at noon. Under the considered traffic and energy profiles, the normalized offloaded traffic (i.e., the percentage of vehicle users offloaded to the SCSs) is shown in Fig. <ref>(b), where the peak energy arrival rate equals the maximal power consumption of each SCS and the highest traffic density corresponds to the SCS capacity. As shown in Fig. <ref>(b), the sustainable traffic-energy management method outperforms the greedy scheme. Specifically, sustainable traffic-energy management can increase the SCS capacity to nearly 1.7 times that of the greedy scheme, realizing cost-effective management. In fact, the greedy scheme minimizes the probability of battery overflow, and performs well when the energy supply is sufficient. The sustainable traffic-energy management scheme further reduces the probability of battery outage through dynamic SCS sleeping, which utilizes energy with higher efficiency.

§ OPEN RESEARCH ISSUES

As the study of cost-effective vehicular networks is still in its infancy, many research issues remain unsolved.

§.§.§ Caching Scheme Design

Under the proposed management framework, efficient caching schemes should be designed to maximize the content hit rate while minimizing handover cost, by determining cache size splitting, popular content update, and mobility-aware caching. For a given cache splitting, the problems of popular content update and mobility-aware caching can both be modeled as Markov Decision Processes (MDPs), for which dynamic programming and machine learning provide powerful solutions. Then, the optimal cache splitting can be further explored based on the designed popular content update and mobility-aware caching schemes. Notice that there exists a tradeoff between content hit rate and handover delay under different cache splitting ratios. Accordingly, Pareto optimality can serve as the design criterion.

§.§.§ Sustainable Traffic and Energy Management

As demonstrated in the case study, conventional greedy traffic offloading and energy management schemes are insufficient, due to the randomness of renewable energy and highly dynamic vehicular traffic. Sustainable traffic and energy management is desired to balance power demand and supply at each SCS, through cooperation among neighboring SCSs and cellular networks. An optimization problem can be formulated to maximize the service capability of SCSs, subject to energy causality and the QoS requirements of all users. The decision variables include the work mode, offloaded traffic amount, transmit power, and content delivery scheduling of each SCS. However, this problem can be extremely complex due to the multi-dimensional coupled optimization variables. In this case, the hierarchical management framework can be exploited for problem decoupling. Specifically, we can deal with the work mode and offloaded traffic amount on the large time scale, while adjusting the transmit power and scheduling content delivery on the small time scale. Then, low-complexity management schemes can be proposed for practical implementation.
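A toy 24-hour simulation contrasting the greedy baseline with a sleep-based sustainable policy is sketched below; the sinusoidal harvest, two-peak traffic profile, and policy details are simplified stand-ins for the case study's setup, so the numbers are only qualitative.

```python
import numpy as np

hours = np.arange(24)
harvest = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)   # solar, peak at noon
traffic = 0.6 * np.exp(-(hours - 8)**2 / 8) + 1.0 * np.exp(-(hours - 18)**2 / 8)

def served(policy, p_const=0.3):
    """Total offloaded traffic over a day under a given policy (toy units)."""
    battery, total = 0.0, 0.0
    for h in range(24):
        battery += harvest[h]
        if policy == "greedy":                 # always on, full power draw
            spend = min(battery, 1.0)
        else:                                  # sleep when battery < p_const
            spend = 0.0 if battery < p_const else min(battery, p_const + traffic[h])
        if spend >= p_const:
            total += min(traffic[h], spend - p_const)
        battery -= spend
    return total

print(served("greedy"), served("sustainable"))
```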
§.§.§ Cost-Effective SCS Deployment The introduction of SCSs also poses new design issues for network deployment, as discussed in the case study of cache size optimization. In fact, the service capability of SCSs can increase with denser SCS, larger cache size, or higher battery capacity. Accordingly, cost-effective SCS deployment should jointly optimize these system parameters to minimize the long-term network cost while meeting vehicular traffic demand. Specifically, the tradeoff among those system parameters should be carefully studied to obtain the cost-optimal combination. Stochastic geometry can be adopted for such large-scale system performance analysis, which can provide favorable closed-form results with reasonable approximations <cit.>. § CONCLUSIONS We have introduced a new type of on-road base station, namely SCS, to exploit renewable energy harvesting, mmWave backhaul, and content caching techniques to achieve flexible, sustainable, and cost-effective vehicular networking. With promising 5G technologies, SCSs can enable “drop-and-play” deployment, green operation, and low-latency content delivery, paving the way to cost-effective vehicular networking. Furthermore, a heterogeneous vehicular network architecture has been proposed to provide high capacity and better QoS to vehicle users, by efficiently exploring the specific advantages of SCSs and MBSs. In addition, a hierarchical management framework has been designed, where energy management, content caching, and traffic steering are performed in both large and small time scales to deal with the dynamics in energy supply and traffic demand. Case studies on cache size optimization and sustainable traffic-energy management have been conducted to provide insights into practical design of 5G-enabled vehicular networks. Moreover, important research topics on SCSs have also been discussed. IEEEtran100 url@samestyleAbboud16_VANET_interworking K. Abboud, H. A. Omar, and W. Zhuang, “Interworking of dsrc and cellular network technologies for V2X communications: A survey,” IEEE Trans. Veh. Technol., vol. 65, no. 12, pp. 9457–9470, Dec. 2016. Sun16_LTEV_mag S. Sun, J. Hu, Y. Peng, X. Pan, L. Zhao, and J. Fang, “Support for vehicle-to-everything services based on LTE,” IEEE Wireless Commun., vol. 23, no. 3, pp. 4–8, Jun. 2016. Malandrino16_OnRoadFog_data F. Malandrino, C. Chiasserini, and S. Kirkpatrick, “The price of fog: A data-driven study on caching architectures in vehicular networks,” in Proceedings of the First International Workshop on Internet of Vehicles and Vehicles of Internet.1em plus 0.5em minus 0.4emPaderborn, Germany: ACM, Jul. 2016, pp. 37–42. Seo16_LTEV_standard_mag H. Seo, K.-D. Lee, S. Yasukawa, Y. Peng, and P. Sartori, “LTE evolution for vehicle-to-everything services,” IEEE Commun. Mag., vol. 54, no. 6, pp. 22–28, Jun. 2016. Ericsson_LTEV S. Sorrentino, “LTE for intelligent transport systems,” Ericsson, Tech. Rep., Jun. 2016, accessed Sept. 13, 2016. [Online]. Available: <https://www.ericsson.com/research-blog/lte/lte-intelligent-transport-systems/> Omar13_vemac H. A. Omar, W. Zhuang, and L. Li, “VeMAC: A TDMA-based MAC protocol for reliable broadcast in VANETs,” IEEE Trans. Mobile Comput., vol. 12, no. 9, pp. 1724–1736, Jun. 2013. EH_net_cost_2013 G. Piro, M. Miozzo, G. Forte, N. Baldo, L. A. Grieco, G. Boggia, and P. Dini, “Hetnets powered by renewable energy sources: Sustainable next-generation cellular networks,” IEEE Internet Comput., vol. 17, no. 1, pp. 32–39, Jan. 2013. 5G_Overview_JSAC14_JAndrews J. G. 
Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. Soong, and J. C. Zhang, “What will 5G be?” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1065–1082, May 2014. Qiao_proactive_mmWave J. Qiao, Y. He, and X. Shen, “Proactive caching for mobile video streaming in millimeter wave 5G networks,” IEEE Trans. Wireless Commun., vol. 15, no. 10, pp. 7187–7198, Oct. 2016. Bai09_vehicle_traffic_spatio_temporal F. Bai and B. Krishnamachari, “Spatio-temporal variations of vehicle traffic in vanets: facts and implications,” in Proceedings of the sixth ACM international workshop on VehiculAr InterNETworking, Beijing, China, Sep. 2009, pp. 43–52. Wang16_cache_mobility_mag R. Wang, X. Peng, J. Zhang, and K. B. Letaief, “Mobility-aware caching for content-centric wireless networks: modeling and methodology,” IEEE Commun. Mag., vol. 54, no. 8, pp. 77–83, Aug. 2016. Gong16_push_cache_EH_ICC J. Gong, S. Zhou, Z. Zhou, and Z. Niu, “Joint optimization of content caching and push in renewable energy powered small cells,” in IEEE ICC'16, Kuala Lumpur, Malaysia, May 2016, pp. 1–6. Gill07_youtube P. Gill, M. Arlitt, Z. Li, and A. Mahanti, “Youtube traffic characterization: a view from the edge,” in Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, San Diego, USA, Oct. 2007, pp. 15–28. mine_TWC_SCoff S. Zhang, J. Gong, S. Zhou, and Z. Niu, “How many small cells can be turned off via vertical offloading under a separation architecture?” IEEE Trans. Wireless Commun., vol. 14, no. 10, pp. 5440–5453, Oct. 2015.
http://arxiv.org/abs/1706.08628v1
{ "authors": [ "Shan Zhang", "Ning Zhang", "Xiaojie Fang", "Peng Yang", "Xuemin", "Shen" ], "categories": [ "cs.NI", "cs.IT", "math.IT" ], "primary_category": "cs.NI", "published": "20170626235252", "title": "Self-Sustaining Caching Stations: Towards Cost-Effective 5G-Enabled Vehicular Networks" }
Hamiltonian cycles in 3-tough 2K_2-free graphs

Songling Shan
Department of Mathematics, Vanderbilt University, Nashville, TN 37240, U.S.A. ([email protected]).
December 30, 2023
================================================================================

Abstract. A graph is called 2K_2-free if it does not contain two independent edges as an induced subgraph. Broersma, Patel, and Pyatkin showed that every 25-tough 2K_2-free graph with at least three vertices is hamiltonian. In this paper, we improve the required toughness in this result from 25 to 3.

Keywords. Toughness; Hamiltonian cycle; 2K_2-free graph

§ INTRODUCTION

Graphs considered in this paper are simple, undirected, and finite. Let G be a graph. Denote by V(G) and E(G) the vertex set and edge set of G, respectively. For v∈ V(G), N_G(v) denotes the set of neighbors of v in G. For S⊆ V(G), N_G(S)=⋃_x∈ S N_G(x)-S. For H⊆ G and x∈ V(G), define V_H(x)=N_G(x)∩ V(H) and V_H(S)=N_G(S)∩ V(H). Let S⊆ V(G). Then the subgraph induced by V(G)-S is denoted by G-S. For notational simplicity we write G-x for G-{x}. If uv∈ E(G) is an edge, we write u∼ v. Let V_1, V_2⊆ V(G) be two disjoint vertex sets. Then E_G(V_1,V_2) is the set of edges of G with one end in V_1 and the other end in V_2. The number of components of G is denoted by c(G). Let t≥ 0 be a real number. A graph G is said to be t-tough if |S|≥ t· c(G-S) for each S⊆ V(G) with c(G-S)≥ 2. The toughness τ(G) is the largest real number t for which G is t-tough, or is ∞ if G is complete. This concept, a measure of graph connectivity and “resilience” under removal of vertices, was introduced by Chvátal <cit.> in 1973.

It is easy to see that if G has a hamiltonian cycle then G is 1-tough. Conversely, Chvátal <cit.> conjectured that there exists a constant t_0 such that every t_0-tough graph is hamiltonian. Bauer, Broersma and Veldman <cit.> have constructed t-tough graphs that are not hamiltonian for all t < 9/4, so t_0 must be at least 9/4. There are a number of papers on Chvátal's toughness conjecture, and it has been verified when restricted to a number of graph classes <cit.>, including planar graphs, claw-free graphs, co-comparability graphs, and chordal graphs.

A graph G is called 2K_2-free if it does not contain two independent edges as an induced subgraph. Recently, Broersma, Patel and Pyatkin <cit.> proved that every 25-tough 2K_2-free graph on at least three vertices is hamiltonian. The class of 2K_2-free graphs is well studied; see, for instance, <cit.>. It is a superclass of split graphs, where the vertices can be partitioned into a clique and an independent set. One can also easily check that every cochordal graph (i.e., a graph that is the complement of a chordal graph) is 2K_2-free, and so the class of 2K_2-free graphs is at least as rich as the class of chordal graphs. In <cit.>, Gao and Pasechnik proposed the following conjecture. Every 2-tough 2K_2-free graph with at least three vertices is hamiltonian. In this paper, we support Conjecture <ref> as well as improve the main result in <cit.> by showing the following result. Let G be a 3-tough 2K_2-free graph with at least three vertices. Then G is hamiltonian. In <cit.> it was shown that every 3/2-tough split graph on at least three vertices is hamiltonian, and the authors constructed a sequence {G_n}_n=1^∞ of split graphs with no 2-factor and τ(G_n)→ 3/2. So 3/2 is the best possible toughness bound for split graphs to be hamiltonian.
Since split graphs are 2K_2-free, we cannot decrease the bound in Theorem <ref> below 3/2. Although we are not sure about the best possible toughness for guaranteeing 2K_2-free graphs to be hamiltonian, we believe that Conjecture <ref> might be true. In fact, in the proof of Theorem <ref>, except for one case where 3-toughness is needed, all other cases only need a toughness of 2.

§ PROOF OF THEOREM <REF>

We need the following lemma for the existence of a 2-factor in a graph. [Enomoto et al. <cit.>] Every k-tough graph G has a k-factor if k|V(G)| is even and |V(G)|≥ k+1.

We will also need some notation. Let C be an oriented cycle. For x∈ V(C), denote the successor of x by x^+ and the predecessor of x by x^-. Let S⊆ V(C) be an independent set in C. Then S^+={x^+ | x∈ S}, and S^- is defined similarly. Let D be another oriented cycle disjoint from C and T⊆ V(D) be an independent set in D. Then (S∪ T)^+=S^+∪ T^+ and (S∪ T)^-=S^-∪ T^-. For u,v∈ V(C), uC⃗v denotes the portion of C starting at u, following C in the orientation, and ending at v. Likewise, uC⃖v is the opposite portion of C with endpoints u and v. Given two vertex-disjoint cycles C and D, suppose P_c is a portion of C with endpoints u,v and P_d is a portion of D with endpoints x,y. If v and x are adjacent, we write uP_cvxP_dy for the concatenation of P_c and P_d through the edge vx. We assume all cycles under consideration are oriented.

[Proof of Theorem <ref>] The graph G is 3-tough, so it has a 2-factor by Lemma <ref>. We take a 2-factor of G such that it contains as few cycles as possible. Let 𝒞 be the set of cycles in such a 2-factor. We may assume that 𝒞 contains at least two cycles, for otherwise the only cycle in 𝒞 is a hamiltonian cycle of G.

Let x∈ V(G) be a vertex. As the cycles in 𝒞 form a 2-factor of G, there exists a unique cycle, say C∈𝒞, such that x∈ V(C). If there exists a cycle D∈𝒞-{C} such that x is adjacent to two consecutive vertices on D in G, we say x is of A-type (w.r.t. D). If x is not of A-type w.r.t. any cycle in 𝒞-{C}, we say x is of B-type. Denote

A={x∈ V(G) | x is of A-type} and B=V(G)-A.

Let xy∈ E(C) be an edge. We say xy is of A-type if x,y∈ A∩ V(C); we say xy is of B-type if x,y∈ B∩ V(C); otherwise, xy is of AB-type. We say C is AB-alternating if all edges of C are of AB-type. It is clear that if C is AB-alternating, then C is an even cycle. For a cycle D∈𝒞-{C} and the edge xy∈ E(C), we denote

V_D(xy)=V_D(x)∪ V_D(y) and V̄_D(xy)=V(D)-V_D(xy),

where recall that V_D(x)=N_G(x)∩ V(D).

Let C,D∈𝒞 be two distinct cycles. If x∈ V(C) has a neighbor u∈ V(D), then x^+≁u^+, u^- and x^-≁u^+, u^-.

Assume on the contrary that x^+∼ u^+. Then xu u^+x^+ x combines C and D into a single cycle. This gives a contradiction to the minimality of |𝒞|. A similar construction shows that x^+≁u^- and x^-≁u^+, u^-.

No cycle in 𝒞 contains an A-type edge.

Assume on the contrary that there exist C∈𝒞 and xy∈ E(C) such that xy is of A-type. Let D,Q∈𝒞-{C} such that x∼ u, u^+ with uu^+∈ E(D) and y∼ v, v^+ with vv^+∈ E(Q). As x∼ u, u^+, we have y≁u, u^+ by Claim <ref>. Let z be the other neighbor of y on C. Then z∼ u or z∼ u^+, by considering the two independent edges yz and uu^+. By reversing the orientation of C, D if necessary, we assume that y=x^+ and z∼ u. Then

{[ xu^+ v yv^+ uzx, if D=Q;; xu^+D uzxvyv^+ v, if D≠Q. ].

Let C∈𝒞, xy∈ E(C). Denote

I_xy=⋃_D∈𝒞-{C} V̄_D(xy).

Then each of the following holds.
(1) I_xy is an independent set in G.

(2) If xy is of B-type, then for any D∈𝒞-{C}, the vertices on D alternate between I_xy and V(G)-V(C)-I_xy.

(3) If xy is of B-type, then |I_xy|=1/2|V(G)-V(C)|.

To show (1), assume on the contrary that there exist u,v∈ I_xy such that u∼ v. Then E_G({x,y},{u,v})≠∅ by the 2K_2-freeness assumption on G. Consequently, at least one of u and v is not an element of I_xy. This gives a contradiction.

Assume now that xy is a B-type edge. Let D∈𝒞-{C}. We show that for any edge uv∈ E(D), exactly one vertex of {u,v} lies in V_D(xy). That one of u,v must be in V_D(xy) is guaranteed by the 2K_2-freeness of G. Suppose, w.l.o.g., that u∈ V_D(xy) with u∼ x. Then by Claim <ref>, v≁y. As x is of B-type, we further know that v≁x. Thus, v∈V̄_D(xy). This gives (2). The statement (3) is an immediate consequence of (2).

Let A^+ be the set of successors of vertices in A. Then A^+ is an independent set in G.

Suppose on the contrary that there exist x^+, y^+∈ A^+ with x^+y^+∈ E(G). Assume x^+∈ V(C) with predecessor x, and y^+∈ V(D) with predecessor y, for cycles C, D∈𝒞. Then both x and y are A-type vertices. Let Q, R∈𝒞 with uu^+∈ E(Q) and vv^+∈ E(R) such that x∼ u, u^+ and y∼ v, v^+. As x∼ u, u^+ and y∼ v, v^+, we know that x^+≁u^-, u, u^+, u^++ and y^+≁v^-, v, v^+, v^++ by Claim <ref>. Since x^+y^+∈ E(G), by the 2K_2-freeness of G, y^+ is adjacent to one of u,u^+ and x^+ is adjacent to one of v, v^+. Thus, {u,u^+}∩{v, v^+}=∅. We consider two cases to complete the proof.

Case <ref>.1: C=D.

Case <ref>.1.1: C=D and Q=R. We combine C and Q into a single cycle as follows.

{[ xu v^+yx^+ vu^+y^+ x, if x^+∼ v, y^+∼ u^+;; xu^+ vx^+y v^+uy^+ x, if x^+∼ v, y^+∼ u;; xu v^+x^+y vu^+y^+ x, if x^+∼ v^+, y^+∼ u^+;; xu^+ vyx^+ v^+uy^+ x, if x^+∼ v^+, y^+∼ u. ].

Case <ref>.1.2: C=D and Q≠R. Recall that {u, u^+}∩{v, v^+}=∅. Thus, E_G({u, u^+}, {v, v^+})≠∅ by the 2K_2-freeness of G. By reversing the orientations of Q and R if necessary, we assume that u∼ v. Then xu^+ uvv^+yx^+y^+ x combines C, Q and R into a single cycle.

Case <ref>.2: C≠D.

Case <ref>.2.1: C≠D and Q=R. As Q≠C and R≠D by the definition of A-type vertices, we have Q∉{C,D}. Recall that {u,u^+}∩{v, v^+}=∅. Thus, E_G({u, u^+},{v,v^+})≠∅ by the 2K_2-freeness of G. By reversing the orientation of Q if necessary, we assume y^+∼ u. Then uv^+∉E(Q), as otherwise u=v^++ and so y^+∼ v^++. However, y^+≁v^++ by the argument prior to Case <ref>.1. We cover the vertices in V(C)∪ V(D)∪ V(Q) by one or two cycles as below.

{[ xu^+ v y y^+ x^+ x, if u^+v∈ E(Q);; xu v^+ y y^+ x^+ x, u^+ vu^+, if u^+∼ v but u^+v∉E(Q);; xuv^+ u^+ vyy^+ x^+x, if u^+∼ v^+;; xu^+v u v^+yy^+ x^+x, if u∼ v;; xu^+v yy^+ x^+x, u v^+ u, if u∼ v^+. ].

Case <ref>.2.2: C≠D and Q≠R. As x^+≁u, u^+ and x^+∼ y^+, we get y^+∉{u, u^+}. Consequently, y∉{u^-, u}. Similarly, x^+∉{v, v^+} and x∉{v^-, v}.

Case <ref>.2.2.1: C≠D, and Q=D, R=C. Again E_G({u, u^+},{v,v^+})≠∅ and E_G({u^+, u^++},{v,v^+})≠∅ by the 2K_2-freeness of G. We combine C and D into a single cycle as follows.

{[ xu y^+ x^+ vu^+ y v^+ x, if u^+∼ v;; xu y^+ x^+ vy u^+ v^+ x, if u^+∼ v^+;; xu^+ yv x^+y^+ u v^+ x, if u∼ v^+;; xu^+y^+ x^+v u^++ y v^+x, if u∼ v, u^+≁v, v^+, and u^++∼ v;; xu^+y^+ x^+v y u^++ v^+x, if u∼ v, u^+≁v, v^+, and u^++∼ v^+. ].

Case <ref>.2.2.2: C≠D, Q≠R, and Q=D, R∉{C,D}. We cover the vertices in V(C)∪ V(D)∪ V(R) by one or two cycles as below.

{[ xu^+ yv^+ vu y^+x^+ x, if u∼ v;; xu^+ yv v^+u y^+x^+ x, if u∼ v^+;; xuy^+ x^+x, u^+y v^+vu^+, if u^+∼ v;; xuy^+ x^+x, u^+y vv^+u^+, if u^+∼ v^+. ].
Case <ref>.2.2.3: C≠D, Q≠R, and R=C, Q∉{C,D}. This case is symmetric to Case <ref>.2.2.2, so we skip its proof.

Case <ref>.2.2.4: C≠D, Q≠R, and Q≠D, R≠C. By reversing the orientations of Q and R if necessary, we assume that u∼ v. Then xu^+ uv v^+y y^+x^+ x combines C, D, Q, R into a single cycle.

We may assume that 𝒞 contains exactly one cycle C such that C has a B-type edge, and that all other cycles in 𝒞 are AB-alternating.

By Claim <ref>, A^+ is an independent set in G, so not all cycles in 𝒞 are AB-alternating: otherwise, let S=A. Then c(G-S)=|A^+|=|A|=|S|, and we get τ(G)≤ |S|/c(G-S)=1<3, a contradiction.

We then claim that 𝒞 contains no two cycles, say C and D, both containing a B-type edge. Assume on the contrary that both C and D contain a B-type edge. Suppose, w.l.o.g., that |V(C)|≤ |V(D)|. Let xy∈ E(C) be of B-type. By Claim <ref>, I_xy, the set of non-neighbors of x and y in V(G)-V(C), is an independent set in G. Thus, I_xy∪{x} is also an independent set in G. Let S=V(G)-(I_xy∪{x}). Then G-S has |I_xy∪{x}| components, each being an isolated vertex. Note that |I_xy|=|V(G)-V(C)|/2=|V(G)-V(C)-V(D)|/2+|V(D)|/2 by Claim <ref>, and |V(C)|≤ |V(D)|. Thus,

τ(G) ≤ |S|/c(G-S) = (|V(C)|-1+|V(G)-V(C)|/2)/(|I_xy|+1)
= (|V(C)|-1+|V(G)-V(C)-V(D)|/2+|V(D)|/2)/(|V(G)-V(C)-V(D)|/2+|V(D)|/2+1)
≤ (|V(D)|-1+|V(G)-V(C)-V(D)|/2+|V(D)|/2)/(|V(G)-V(C)-V(D)|/2+|V(D)|/2+1)
= (|V(G)-V(C)-V(D)|/2+3|V(D)|/2-1)/(|V(G)-V(C)-V(D)|/2+|V(D)|/2+1) < 3,

contradicting the assumption that τ(G)≥ 3. (In fact, this is the only case where 3-toughness is used.)

We now fix C∈𝒞 to denote the cycle which contains a B-type edge, and assume that all other cycles in 𝒞-{C} are AB-alternating.

Let D∈𝒞-{C} and xy∈ E(C) be of B-type. Assume that V_D(xy)∩ B≠∅. Then V_D(xy)=B∩ V(D), and either V_D(x)=∅ and V_D(y)=B∩ V(D), or V_D(x)=B∩ V(D) and V_D(y)=∅.

Recall that V̄_D(xy) is an independent set in G, and the vertices on D alternate between V̄_D(xy) and V_D(xy) by (2) of Claim <ref>. Because D is AB-alternating, we then get V_D(xy)=B∩ V(D) if V_D(xy)∩ B≠∅, and so if V_D(x)=∅, then V_D(y)=B∩ V(D); and if V_D(y)=∅, then V_D(x)=B∩ V(D). Thus, we only need to show that V_D(x) or V_D(y) is empty. Assume to the contrary that V_D(x)≠∅ and V_D(y)≠∅. As the vertices on D alternate between V̄_D(xy) and V_D(xy), we can choose u∈ V_D(x) so that u^++∈ V_D(y). Then u^+∈ A∩ V(D). Assume that u^+ is of A-type w.r.t. Q∈𝒞-{D}, i.e., u^+∼ v, v^+ with vv^+∈ E(Q). Assume, w.l.o.g., that y=x^+. As x∼ u and y∼ u^++, we have u^+≁x,y by Claim <ref>. Thus, {v, v^+}∩{x,y}=∅.

Case <ref>.1: Q=C. We combine C, D into a single cycle as xuu^++yv u^+ v^+x.

Case <ref>.2: Q≠C. We cover V(C)∪ V(D)∪ V(Q) by two cycles, namely xuu^++yx and vv^+ u^+ v.

Let D∈𝒞-{C} and x∈ V(C). Assume that V_D(x)=B∩ V(D). Then V_D(x^+)=V_D(x^-)=∅.

Note first that N_G(x^+)∩ (B∩ V(D))=∅ and N_G(x^-)∩ (B∩ V(D))=∅, as otherwise some vertex in B∩ V(D) would be adjacent to both vertices of {x, x^+} or of {x,x^-}, which would make that vertex of A-type w.r.t. C. Then we observe that neither x^+ nor x^- is adjacent to any vertex in A∩ V(D), by Claim <ref>.

Let x∈ V(C). Assume there exists D∈𝒞 so that V_D(x)=B∩ V(D). Then {x^+}∪ A^+ is an independent set in G.

As A^+ is already an independent set in G by Claim <ref>, we assume on the contrary that there exists w∈ A^+ with x^+∼ w. Note that x≠w, since V_D(x)=B∩ V(D) and V_D(w)∩ B=∅. Assume w∈ V(Q) for some cycle Q∈𝒞. Then the predecessor w^- of w on Q is of A-type. Note that V_D(x^+)=∅ by Claim <ref>, and x^+∼ w implies that Q≠D. Let R∈𝒞-{Q} with vv^+∈ E(R) so that w^-∼ v, v^+. Let z∈ A∩ V(D).
As V_D(x^+)=∅ and x^+∼ w, w is adjacent to one of z and z^+ by the 2K_2-freeness of G. Since D is AB-alternating by Assumption <ref> and z∈ A∩ V(D), we have z^+∈ B∩ V(D). We see that w∼ z, because both w, z^+∈ A^+ and A^+ is an independent set in G by Claim <ref>. As w^-∼ v, v^+, we have w≁v, v^+ by Claim <ref>. Thus, E_G({w^+}, {v, v^+})≠∅ by the 2K_2-freeness of G. We consider two cases to finish the proof.

Case <ref>.1: Q≠C. As x^+∼ w, we have w^-≁x^++, x by Claim <ref>. Since w^-∼ v, v^+, we then have v, v^+∉{x, x^+, x^++}.

Case <ref>.1.1: Q≠C and R=C. We combine C, D, Q into one single cycle as below.

{[ x^+wz z^-xv^+ w^-w^+ vx^+, if w^+∼ v;; x^+wz z^-xv^+ w^+ w^- vx^+, if w^+∼ v^+. ].

Case <ref>.1.2: Q≠C and R=D. By the assumption, V_D(x^+)=∅; in particular, x^+≁v, v^+. Since w^-∼ v, v^+, we have w≁v, v^+ by Claim <ref>. But then x^+w and vv^+ are two disjoint induced edges. This gives a contradiction to the 2K_2-freeness.

Case <ref>.1.3: Q≠C and R∉{C, D}. Since w^-∼ v, v^+, we have w≁v, v^+ by Claim <ref>. Thus, x^+∼ v or x^+∼ v^+. By reversing the orientation of R if necessary, we assume x^+∼ v. Since V_D(x)=B∩ V(D) and z^+∈ B∩ V(D), we have x∼ z^+. Then xz^+ z w w^-v^+ vx^+x is a cycle which contains all the vertices in V(C)∪ V(D)∪ V(Q)∪ V(R).

Case <ref>.2: Q=C. As z∼ w, we have w^-≁z^+, z^- by Claim <ref>. Since w^-∼ v, v^+, we then get v, v^+∉{z^-, z, z^+}.

Case <ref>.2.1: Q=C and R=D. By the assumption, V_D(x^+)=∅; in particular, x^+≁v, v^+. Since w^-∼ v, v^+, we have w≁v, v^+ by Claim <ref>. But then x^+w and vv^+ are two disjoint induced edges. This gives a contradiction to the 2K_2-freeness.

Case <ref>.2.2: Q=C and R≠D. Since w^-∼ v, v^+, we have w≁v, v^+ by Claim <ref>. Thus, x^+∼ v or x^+∼ v^+. By reversing the orientation of R if necessary, we assume x^+∼ v. Then xz^+ z w x and x^+vv^+w^-x^+ are two cycles which together contain all the vertices in V(C)∪ V(D)∪ V(R).

Let x∈ V(C). If there exists a cycle D∈𝒞-{C} such that V_D(x)=B∩ V(D), then we say that x is bad w.r.t. D. Define

V_bad={x∈ V(C) | x is bad w.r.t. some cycle in 𝒞-{C}}.

The vertex set V_bad contains no two consecutive vertices on C. Moreover, no other vertex in V(C)-V_bad is adjacent to any B-type vertex on any cycle other than C.

Each vertex in V_bad is adjacent to some B-type vertex on a cycle other than C, by the definition. Let v∈ V_bad. Then, by Claim <ref>, v^+ is not adjacent to any B-type vertex on any cycle other than C. Hence, for any vertex w∈ V(C), w or w^+ does not belong to V_bad. Thus, V_bad contains no two consecutive vertices on C.

To prove the second part of the statement, assume that v∈ V(C) is a vertex adjacent to some B-type vertex on a cycle D∈𝒞-{C}. Since vertices in (A∩ V(C))^+ are not adjacent to any B-type vertices on cycles other than C, v^- is a B-type vertex. If v is also of B-type, then by Claim <ref>, V_D(v)=B∩ V(D), so v∈ V_bad by the definition of V_bad. If v is of A-type, then v∈ V_bad again by the definition of V_bad.

Let xy∈ E(C) be a B-type edge. For any cycle D∈𝒞-{C}, if V_D(xy)=B∩ V(D), then for any z∈ A∩ V(D), z is of A-type w.r.t. only the cycle C.

As xy∈ E(C) is of B-type, for each cycle Q∈𝒞-{C}, the vertices on Q alternate between I_xy and V(G)-V(C)-I_xy, by (2) of Claim <ref>. As I_xy is an independent set in G by (1) of Claim <ref>, and A∩ V(D)⊆ I_xy, for any z∈ A∩ V(D) it is not possible for z to be adjacent to two consecutive vertices on any cycle Q∈𝒞-{C,D}. Thus, z is of A-type w.r.t.
only the cycle C. For each vertex x∈ V_bad, we define U_x^0={x^+} and U_x^1={u | u^+∈ V_C(U_x^0)-V_bad}-U_x^0. For each vertex x_1∈ U_x^1, define the path P_[x_1,x]=x_1 x^+ x_1^+ x to be the directed path starting at x_1 and ending at x. From now on, if v is a vertex on a directed path and v is not the end of the path, we denote by v^† the successor of v on this path. This notation v^† will be used only in this context. It is easy to see that for any x_2 ∈ V(P_[x_1,x]) such that x_2^†∈ V_P_[x_1, x](x_1), P_[x_2, x]=x_2_[x,x_1]x_1x_2^†_[x,x_1]x is a directed path starting at x_2 and ending at x. Furthermore, P_[x_2, x] contains all the vertices of C. In general, for i≥ 2 we define U_x^i = {u | u∈ V(P_[v,x]) and u^†∼ v for some v∈ U_x^i-1}-⋃_j=0^i-1U_x^j and U_x^∞ = ⋃_i=0^∞U_x^i. Let x∈ V_bad and let U_x^i be defined as above. Let D∈𝒞-{C} be such that x is bad or of A-type w.r.t. D, and let u∈ B∩ V(D) such that x∼ u. Then each of the following holds. (1) U_x^∞⊆ V(C)-V_C(uu^+), i.e., for any v∈ U_x^∞, v≁u, u^+. (2) For any y∈ V(C)-V_bad such that y is adjacent to some vertex in U_x^∞, y∼ u^+. (3) If x is bad and v∈ U_x^∞, then V_D(v)=∅. (4) If x is bad and y∈ V(C)-V_bad such that y is adjacent to some vertex in U_x^∞, then V_D(y)=A∩ V(D). We first prove (1) and (2) simultaneously by applying induction on i. For i=0, U_x^0={x^+}. As x∼ u, we have that x^+≁u^+ by Claim <ref>. Furthermore, as u is a B-type vertex, u≁x^+. Hence, x^+∈ V(C)-V_C(uu^+). For any y∈ V(C)-V_bad such that y∼ x^+, since x^+∈ V(C)-V_C(uu^+), y has to be adjacent to at least one of u, u^+ by the 2K_2-freeness. As y∈ V(C)-V_bad, y∼ u^+ by the second part of Claim <ref>. Assume now that both (1) and (2) are true for i-1 with i≥ 1. Let v∈ U_x^i. By the definition of U_x^i, there exists w∈ U_x^i-1 such that v∈ V(P_[w,x]) and w∼ v^†, where v^†∈ V(C)-V_bad is the successor of v on the directed path P_[w,x]. By the induction hypothesis, v^†∼ u^+. Also, by the induction hypothesis, U_x^j⊆ V(C)-V_C(uu^+) for any j≤ i-1. Thus, v^†∉⋃_j=0^i-1U_x^j. Furthermore, v∉⋃_j=0^i-1U_x^j as U_x^i is disjoint from ⋃_j=0^i-1U_x^j by its definition. Since any edge on P_[w,x] which is not an edge of C has one endvertex in ⋃_j=0^i-1U_x^j, vv^† is an edge on C. Thus, as v^†∼ u^+, v≁u by Claim <ref>. Furthermore, v≁u^+. Otherwise, if v∼ u^+, then as x∼ u, and P_[v,x] is a spanning path of C, we get a cycle v_[v,x]xu u^+ v, which combines C and D into a single cycle. Thus, v∈ V(C)-V_C(uu^+). For any y∈ V(C)-V_bad such that y∼ v, since v∈ V(C)-V_C(uu^+), y has to be adjacent to at least one of u, u^+ by the 2K_2-freeness. As y∈ V(C)-V_bad, y∼ u^+ by the second part of Claim <ref>. Statements (3) and (4) follow immediately by noticing that the cycle D is AB-alternating and that x is adjacent to all the B-type vertices on D if x is bad. Define U^∞=⋃_x∈ V_bad U_x^∞. Let v∈ U_x^∞ and D∈𝒞-{C} be such that x is bad or of A-type w.r.t. D. Then v is called co-absorbable w.r.t. C and D if there exists a cycle R containing all the vertices in V(C)∪ V(D)-{v}. Each vertex v∈ U_x^∞ is co-absorbable w.r.t. C and a cycle D∈𝒞-{C} such that x is bad or of A-type w.r.t. D. If v∈ U_x^0, then v=x^+. Let u∈ B∩ V(D) such that x∼ u, and such that x∼ u, u^+ if x is of A-type w.r.t. D. Then x^+≁u^+ by Claim <ref>. Furthermore, x^+≁u as u∈ B∩ V(D). Thus, x^++∼ u or x^++∼ u^+. Since D is AB-alternating, u^+ is of A-type. By Claim <ref>, u^+ is of A-type w.r.t. only C. Let ww^+∈ E(C) such that u^+∼ w,w^+. If x is bad w.r.t. D, then x∼ u, u^++. And if x∼ u, u^++, then u^+≁x, x^+ by Claim <ref>. Thus, {x, x^+}∩{w,w^+}=∅ if x is bad w.r.t. D.
Then the following is a cycle containing all the vertices in V(C)∪ V(D)-{x^+}:
x u u^+ x^++ x, if x^++∼ u^+;
x u^+ u x^++ x, if x^++∼ u and x is of A-type;
x u^++ u x^++ w u^+ w^+ x, if x^++∼ u and x is bad.
We additionally show that x^- is co-absorbable w.r.t. C and D. (We will need this in the argument when i≥ 1.) Repeating the same argument for x^--, we then have that the following is a cycle containing all the vertices in V(C)∪ V(D)-{x^-}:
x u u^+ x^-- x, if x^--∼ u^+;
x u^+ u x^-- x, if x^--∼ u and x is of A-type;
x u^++ u x^-- w^+ u^+ w x, if x^--∼ u and x is bad.
Assume now that v∈ U_x^i for i≥ 1. By the definition of U_x^i, we know there exists a spanning path P_[v,x] of C with endvertices v and x. By Claim <ref>, v≁u, u^+. Let y be the neighbor of v on P_[v,x]. As vy is an edge, and v≁u, u^+, y∼ u or y∼ u^+. Since U_x^j⊆ V(C)-V_C(uu^+) for any j≤ i-1, we have that y∉⋃_j=0^i-1U_x^j. Furthermore, v∉⋃_j=0^i-1U_x^j as U_x^i is disjoint from ⋃_j=0^i-1U_x^j by its definition. Thus, vy is an edge on C, since any edge on P_[v,x] which is not an edge of C has one endvertex in ⋃_j=0^i-1U_x^j. We may assume that y∉V_bad, as both the predecessor and the successor of a bad vertex on C are co-absorbable by the argument for the i=0 case. Thus y∼ u^+ by (2) of Claim <ref>. Then y_[v,x] xuu^+ y is the desired cycle. We may assume that each vertex in U^∞ has fewer than (|V(G)|-1)/3 neighbors in G. Suppose on the contrary that there exists v∈ U^∞ so that |N_G(v)|≥ (|V(G)|-1)/3. By Claim <ref>, we see that v is co-absorbable w.r.t. C and some cycle D∈𝒞-{C}. By standard arguments for longest cycles, we know that v has no two neighbors which are consecutive on any cycle Q∈𝒞-{C,D} or on the cycle which is the combination of C-v and D; and also that (N_G(v))^+, the set of the successors of neighbors of v on the cycle which is the combination of C-v and D and on cycles in 𝒞-{C,D}, is an independent set in G. Let S=V(G)-(N_G(v))^+-{v}. Then c(G-S)=|(N_G(v))^+∪{v}|≥ (|V(G)|-1)/3+1>|V(G)|/3. So τ(G)≤|S|/c(G-S)< 2. This yields a contradiction. Each of the following holds. (1) The set U^∞ is an independent set in G. (2) V_C(U^∞)∩ U^∞=∅. (3) U^∞∪ A^+ is an independent set in G. To prove (1), assume that there exist u, v∈ U^∞ such that uv∈ E(G). By Claim <ref>, u and v in total have at most 2(|V(G)|-1)/3 neighbors in G. As uv is an edge, and G is 2K_2-free, the set of non-neighbors of u and v in G forms an independent set in G. Let S=N_G(u)∪ N_G(v)-{u}. Then c(G-S)=|V(G)-S-{u}|>|V(G)|/3. So τ(G)<2. Again, we achieve a contradiction to the assumption that τ(G)≥ 3. As U^∞ is an independent set in G, we have V_C(U^∞)∩ U^∞=∅. Since each bad vertex x is adjacent to its successor x^+, and x^+∈ U_x^0⊆ U^∞, we have that V_bad⊆ V_C(U^∞). Thus, no vertex in U^∞ is adjacent to any B-type vertex on cycles other than C. Since (A∩ V(C))^+⊆ U^∞, we know that U^∞∪ A^+ is an independent set in G. For any vertex y∈ V_C(U^∞), there exists v∈ U^∞ such that vy∈ E(C). Assume that y∈ V_C(U_x^∞) for some x∈ V_bad. The claim trivially holds if y∈ V_C(U_x^0). So assume that i≥ 1 and let y∈ V_C(U_x^i)-V_C(⋃_j=0^i-1 U_x^j). By the definition of U_x^i, we know that there exists w∈ U_x^i and a spanning path P_[w,x] of C with endvertices w and x such that y is a neighbor of w on P_[w,x]. Since V_C(U^∞)∩ U^∞=∅ by (2) of Claim <ref>, y∉U_x^∞. By the assumption that y∈ V_C(U_x^i)-V_C(⋃_j=0^i-1 U_x^j), we know that the predecessor v of y on P_[w,x] satisfies that v∉⋃_j=0^i-1 U_x^j. As any edge of P_[w,x] which is not an edge of C has one end contained in ⋃_j=0^i-1 U_x^j, we then know that vy∈ E(C). |V_C(U^∞)|≤ 2|U^∞|.
Since U^∞ is an independent set in G by Claim <ref>, |N_C(U^∞)|≤ 2|U^∞|. Let y∈ V_C(U^∞) be any vertex. By Claim <ref>, there exists v∈ U^∞ such that vy∈ E(C). Thus, V_C(U^∞)⊆ N_C(U^∞). So |V_C(U^∞)|≤ |N_C(U^∞)|≤ 2|U^∞|. Let S=A∪ V_C(U^∞). We claim that each vertex in A^+∪ U^∞ is an isolated vertex in G-S. This is because A^+∪ U^∞ is an independent set in G, and all the possible neighbors of vertices in A^+∪ U^∞ in G are contained in S. Note also that |V_C(U^∞)|≤ 2|U^∞| by Claim <ref>, and |S∩ (V(G)-V(C))|=|V(G)-V(C)-S|=|V(G)-V(C)|/2 as we assume that all cycles in 𝒞-{C} are AB-alternating. Since A∩ V(C)⊆ V_bad by the definition of V_bad, and V_bad⊆ V_C(U^∞), we have that A∩ V(C)⊆ V_C(U^∞). Thus, S=A∪ V_C(U^∞)=V_C(U^∞)∪ (A∩ (V(G)-V(C))) and thus |S|=|V_C(U^∞)|+|V(G)-V(C)|/2. Hence
τ(G) ≤ |S|/c(G-S) ≤ (|V_C(U^∞)|+|V(G)-V(C)|/2) / (|U^∞|+|V(G)-V(C)|/2) ≤ (2|U^∞|+|V(G)-V(C)|/2) / (|U^∞|+|V(G)-V(C)|/2) < 2,
showing a contradiction. The proof of Theorem <ref> is now complete. Acknowledgements. The author is extremely grateful to Professor Mark Ellingham for his careful comments and suggestions in improving the proofs and the writing of this paper.
http://arxiv.org/abs/1706.09029v1
{ "authors": [ "Songling Shan" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20170627195910", "title": "Hamiltonian cycles in 3-tough $2K_2$-free graphs" }
Towards Modern Inclusive Factories: A Methodology for the Development of Smart Adaptive Human-Machine Interfaces
Valeria Villani1, Lorenzo Sabattini1, Julia N. Czerniak2, Alexander Mertens2, Birgit Vogel-Heuser3 and Cesare Fantuzzi1
1Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Reggio Emilia, Italy. Email: {valeria.villani, lorenzo.sabattini, cesare.fantuzzi}@unimore.it
2Institute of Industrial Engineering and Ergonomics, RWTH Aachen University, Aachen, Germany. Email: {j.czerniak, a.mertens}@iaw.rwth-aachen.de
3Institute of Automation and Information Systems, Technical University of Munich, Munich, Germany. Email: [email protected]
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Modern manufacturing systems typically require high degrees of flexibility, in terms of ability to customize the production lines to the constantly changing market requests. For this purpose, manufacturing systems are required to be able to cope with changes in the types of products, and in the size of the production batches. As a consequence, the human-machine interfaces (HMIs) are typically very complex, and include a wide range of possible operational modes and commands. This generally implies an unsustainable cognitive workload for the human operators, in addition to a non-negligible training effort. To overcome this issue, in this paper we present a methodology for the design of adaptive human-centred HMIs for industrial machines and robots. The proposed approach relies on three pillars: measurement of user's capabilities, adaptation of the information presented in the HMI, and training of the user. The results expected from the application of the proposed methodology are investigated in terms of increased customization and productivity of manufacturing processes, and wider acceptance of automation technologies. The proposed approach has been devised in the framework of the European project INCLUSIVE.§ INTRODUCTIONModern automatic machines and robotic cells in production plants are becoming more and more complex because of higher demands for fast production rates with high quality. On top of these basic functions, today's factories need to allow for higher levels of product customisation and variable requirements. To this end, advanced functions are implemented, such as fault diagnosis and fast recovery, fine-tuning of process parameters to optimize environmental resources, and fast reconfiguration of the machine and robot parameters to adapt to production changes. Despite high levels of automation of machines and robots, humans remain central to manufacturing operations, since they take charge of control and supervision of manufacturing activities. Human operators interact with machines and robots by means of user interfaces that are the modern cockpit of any production plant.
For example, they set up machine production parameters, identify and solve faults, and coordinate machine and robot re-configuration to enable adaptation to product changes. These activities are all performed by means of computerized human-machine interfaces (HMIs) that are inevitably becoming more and more complex, as new functions are implemented by the production system <cit.>. In this new scenario, human operators experience many difficulties in interacting efficiently with the machine; this is particularly true for middle-aged workers who feel uncomfortable in the interaction with a complex computerized system, even if they have great experience with the traditional manufacturing processes. On the other hand, complex HMIs linked to complex machine and robot functions create a barrier to young inexperienced or disabled people, who are then unable to effectively manage the production lines. Such an increasing gap between machine complexity and user capabilities calls for smart and innovative human-centred automation approaches that lead to the determination of adequate levels of automation for optimal flexibility, agility and competitiveness of highly customised production on the one side and, on the other side, a sustainable effort for all workers. Accordingly, novel automation systems should embed HMIs that accommodate the workers' skills and flexibility needs, by compensating their limitations (e.g. due to age or inexperience) and by taking full advantage of their experience. Moving along these lines, in this paper we present a methodology for the design of adaptive human-centred HMIs for industrial machines and robots. It consists in enabling user interfaces to measure the user's capabilities, experience and cognitive burden, and to adapt the complexity and information load accordingly. In particular, according to the proposed methodology, an adaptive user interface for industrial machines can be developed that fully adapts to the user's (1) physical status and impairments, (2) cognitive status and mental workload, and (3) experience in the working scenario and in the use of computers. Adaptation concerns the visual presentation of information, the selection of displayed content, the selection of machine functionalities enabled to the user, and guidance in the interaction with the process through default recipes and working strategies. Additionally, the interface provides off-line and, more importantly, on-line training to the user in order to increase her/his performance and prevent errors. These solutions aim at improving the worker's situation awareness for a more effective, reliable and prompt interaction with the system, thus allowing workers to have a full comprehension of the system behavior and facilitating intervention in dynamic and unforeseen situations. The final goal is to create an inclusive <cit.> and flexible working environment for any kind of operator, taking into account multiple cultural backgrounds, skills, ages and different abilities. To achieve this, it is necessary to reverse the paradigm from the current belief that "the human learns how the machine works" to the future scenario in which "the machine adapts to the human capability", accommodating her/his own pace and characteristics. This is realized by adaptively simplifying the HMI based on the user's features and complementing her/his cognitive capabilities by advanced sensing and the higher precision of machines.
However, this simplification might lead to an increase in the time needed to perform a process function and a reduction of productivity, due to the limited functionalities enabled to low-skilled users. To overcome this issue, a training facility needs to be integrated into the adaptive HMI that embeds a virtual (or augmented) environment to guide and teach the user to evolve her/his capability, aiming at a more efficient process, both in terms of time and quality. The approach presented in this paper has been devised within the framework of the European project INCLUSIVE, which seeks to develop smart and adaptive interfaces for inclusive work environments. The paper is organized as follows. In Section <ref> we present a review of the state of the art on adaptive automation systems. In Section <ref> the proposed methodology is described, with a special focus on the proposed rules for adaptation in Subsection <ref>. The expected impact of the application of the proposed methodologies is investigated in Section <ref>. Finally, Section <ref> follows with some concluding remarks.§ STATE OF THE ART In human-computer interaction, the interface is what users see and work with to use a device <cit.>. In industrial scenarios, the HMI takes care of all visualizations and the user's interactions with the data coming from technological processes, and thus allows the user to operate the machine, to observe the system status and, if necessary, to intervene in the process. Customarily, HMIs used in industrial process control applications provide no means to control the amount and form of information displayed during operation. While the user is flexible and adaptable, the system is not. Control systems commonly respond in the same way without regard to whether the flow of information is low or extremely high, or whether the level of expertise of the user is high or low <cit.>. As a consequence, the responsibility for the interaction is placed on the user, who has to adapt to processes determined by the technical system. Moreover, the flexibility required to deal with difficult situations must be provided by the operators alone, acting under the pressure of unexpected and rapidly changing hazardous situations. This issue is even more severe if we consider that the amount of monitored data that comes from modern plant processes keeps increasing and control systems are becoming more and more complex <cit.>. Therefore, automation results in working methods with increasing demands in terms of stamina, time pressure and pace of work <cit.>. This leads to detrimental effects on workers' health and safety, giving rise to occupational diseases, such as stress or musculoskeletal disorders, as well as to occupational accidents <cit.>. To tackle this issue, context-dependent automation, also known as adaptive automation, has been considered <cit.>. Context awareness is the ability of programs, applications or computer devices to sense, interpret, respond and act based on the context. Context refers to any information that can be used to characterize the state of an entity, which can be a person, place, or object considered relevant to the interaction between a user and an application, including the user and applications themselves <cit.>. According to this design paradigm, levels of automation need not be fixed at the system design stage, but should be designed to vary depending on situational demands during operational use.
In this regard, the distinctive feature of adaptive user interfaces is the possibility to change how information is presented, so that only relevant information is provided to users; this is achieved by including the environment and the user as part of the monitored system through adaptive HMIs. Adaptive user interfaces have been developed and implemented in different domains, such as automotive <cit.>, aeronautics <cit.> and smartphone and hand-held devices <cit.>. However, only very few partial attempts and preliminary results on the development of adaptive HMIs for complex industrial systems have been reported <cit.>. In <cit.> a preliminary conceptual architecture is introduced that allows defining an HMI that adapts the presentation of information based on the operator's responsiveness. In <cit.> different user profiles, such as manager, supervisor and maintenance personnel, are identified, and adaptation is limited to presenting information selectively according to the logged account. Going beyond these preliminary efforts, the methodology we propose in this paper allows for the development of a complete ecosystem of technological innovations that includes the measurement of human capabilities, the adaptation of the user interface and the training of unskilled users. § PROPOSED METHODOLOGYThe methodology presented in this paper aims at developing a smart user-machine interface that adapts the information load of the HMI and the automation capability of the machine to the physical, sensorial and cognitive capabilities of workers. The smart interface is based on three main modules, as shown in Figure <ref>: * human capabilities measurement (Measure): the smart interface measures the human capability of understanding the logical organization of information and the cognitive burden she/he can sustain (automatic human profiling). The interface also identifies the real skill level of the user by analysing how she/he operates in the common working processes (e.g. measuring the time needed to move among different screens of the HMI, measuring the eye activity in seeking information, etc.); * adapt interfaces to human capabilities (Adapt): the smart interface adapts the organization of information (e.g. the complexity of the information presented), the means of interaction (e.g. textual information, only graphics, speech, etc.), and the automation tasks (normal operation, adaptation to new processes, predictive maintenance, etc.) that are accessible to the user, depending on her/his measured capabilities; * teaching and training for unskilled users (Teach): the smart interface is used to teach the unskilled users how to interact with the machine. Depending on the skill level of the user and the operation performed by the machine, the interface can train the user by using a step-by-step procedure, also supported by simulation in a virtual environment. This teaching mode can be on-line or off-line, depending on the level of automation and the criticality of the job operated by the machine or robot. Moreover, in this module, an industrial social network app (Android and iOS) is developed to facilitate the sharing of knowledge among the users about the industrial processes and the machine operational modes (a minimal sketch of how these three modules could interact is given right after this list).
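To make the interplay among the three modules concrete, the following is a minimal, purely illustrative Python sketch of a Measure-Adapt-Teach loop; all class names, fields and thresholds (UserProfile, UIConfig, the 0.7 strain cut-off, etc.) are hypothetical and are not part of the INCLUSIVE specification.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Illustrative output of the Measure module (all fields hypothetical)."""
    visual_impairment: bool
    task_experience: int   # 0 = novice .. 2 = expert
    hmi_experience: int    # 0 = novice .. 2 = habitual user
    strain_level: float    # normalized physiological strain in [0, 1]

@dataclass
class UIConfig:
    """Adaptation decisions consumed by the HMI front end."""
    font_scale: float
    show_tooltips: bool
    alarm_filter: str        # 'all' or 'severe_only'
    enabled_functions: str   # 'basic' or 'advanced'
    suggest_training: bool   # hand-off to the Teach module

def adapt(profile: UserProfile) -> UIConfig:
    """Adapt module: map measured capabilities to interface settings."""
    experienced = profile.task_experience >= 2 and profile.hmi_experience >= 2
    overloaded = profile.strain_level > 0.7  # threshold is purely illustrative
    return UIConfig(
        font_scale=1.5 if profile.visual_impairment else 1.0,
        show_tooltips=not experienced,
        alarm_filter='all' if experienced and not overloaded else 'severe_only',
        enabled_functions='advanced' if experienced else 'basic',
        suggest_training=profile.task_experience == 0,
    )

if __name__ == '__main__':
    novice = UserProfile(visual_impairment=True, task_experience=0,
                         hmi_experience=0, strain_level=0.8)
    print(adapt(novice))
```

In a real deployment the adaptation rules would of course be derived from the measurement module and the ergonomic analysis described below, rather than hard-coded.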
Since the behaviour of the interface depends on the actual process organization and operational modes, which are specifically related to the particular industrial process under consideration, it is important to establish a general methodological approach that can then be specialized by building customized HMIs according to the applications. Thus, the design of such universal adaptation patterns leads to a core meta-HMI, which is general and independent of the specific dialogues and hardware. This meta-HMI then needs to be customized to the specific application scenarios, functionalities and hardware targets of the use cases. In the following, the three modules will be presented separately, with a special focus on the adaptation module, which is the core of the system. §.§ Measurement of human capabilities The first step towards adaptation is the measurement of the individual capabilities and strain level while fulfilling operative tasks. Firstly, the effects of age (changed perception, cognition and motor skills), dyslexia, second-language speaking, disabilities (one-handed operation, colour blindness, etc.), missing experience in the context and impaired abilities in acquiring knowledge are measured a priori, at the first involvement of the user with the automatic system. Then, the strain of the operator is continuously assessed in real time. To this end, contactless and body-worn sensors are used to measure several physiological indicators, such as heart rate variability and electrocardiographic activity, galvanic skin response, eye tracking, blink reflex, skin temperature, cerebral electrical activity, and adrenaline/noradrenaline levels <cit.>.§.§ Adapt interfaces to human capabilities Results of the measurement module are directly mapped into a suitable degree of adaptation of the interface. Adaptation occurs at the sensorial, cognitive and interaction levels. The proposed approach is summarized in Figure <ref>. Sensorial adaptation is meant to tackle physical, visual, auditory and dexterity impairments of users. In this regard, the first step towards adaptation consists in accommodating users' physical impairments, mainly by varying the presentation of information, e.g., adapting font size, accompanying icons with short text descriptions, and enabling audio input and output. Although such features can be manually enabled/disabled by the user, the optimal configuration is automatically selected by the interface on the basis of the user's claims and measurements. Environmental conditions, such as lighting, noise and the use of protective gloves, are also considered. A deep analysis based on ergonomics factors <cit.> drives the selection of such an optimal configuration. Cognitive adaptation provides the adequate level of instructions and details, in order not to exceed the cognitive capabilities of less experienced workers and to increase the performance of more experienced workers. It is implemented in terms of: 1) the amount of information presented, 2) guided interaction with the productive system, and 3) the amount of functionalities enabled to users. Displayed information is adapted according to two factors: the user's experience in the task to accomplish and experience in the use of the HMI (e.g. novice, occasional or habitual user). In the presence of inexperienced workers, it is also considered useful to provide an extended tutorial concerning the description of machine functionalities and/or the use of the HMI (e.g. interaction mode, details on icons, menus, setting of preferences, etc.).
This tutorial can be easily accessed while the machine is running and represents a brief and easy-to-access summary of the teaching module. Novice or occasional workers are provided with a brief description of each button in the interface when the user moves over it with a mouse or finger, depending on the input device. This feature is also useful for elderly users having long-time experience in the task but limited computer literacy, thus being unfamiliar with computer jargon. Conversely, the feature is not enabled in the case of experienced workers, since continuous and ubiquitous explanations are superfluous and would slow down the interaction with the system. Furthermore, depending on the worker's familiarity with the interface and the machine, based on the findings of the measurement module, information on the machine and the whole process chain is selectively presented to users. Expert workers have access to a global view of the process, enriched with information on production rates and trends, levels of input raw material, and due maintenance activities. Additionally, an overview of the plant (or a subpart) is provided by means of an interactive map reporting information about alarms, failures, ongoing tasks and production rates. As regards alarms, experienced users are provided with the detailed list of all active alarms, including those warnings that do not stop production. Conversely, adaptation for less experienced users prescribes a more restricted view of the process, focused on the activity the user is currently performing. Alarms are filtered so that only the most severe ones are shown to the worker, together with a detailed description of causes and actions to take, supported by pictures, videos and technical drawings of the machine. If necessary, secondary alarms are shown at the end of the working day so that the worker can ask for assistance to solve open issues and restore the correct machine status. Additionally, for users with limited experience in the process or task to be accomplished, the HMI needs to guide the worker in the task. To this end, predefined working recipes (e.g. default values of parameters, working strategies or combinations of working parameters) are presented by the interface, covering the range of strategies that can be implemented on the machine. In this way, the cognitive gap between the worker and the system is covered by the interface, and the comprehensibility of the interface is improved. To support the concept of adaptive automation systems, further adaptation in the HMI is implemented by conditioning the accessible functionalities of the machine on the user's experience. Indeed, production is adapted to workers' capabilities by disabling the most advanced functionalities for workers with limitations due to inexperience or disability, with the goal of decreasing mental workload. For example, considering the scenario of an older worker not familiar with the process, exploiting the measurement module, the HMI will automatically disable complex or unusual tasks that require many inputs and are not supported by established recipes. §.§ Teaching and training for unskilled users The last pillar of the proposed approach is an adaptive teaching system that trains the user according to her/his capabilities, identified in the measurement module, and her/his understanding of the working system. Teaching is provided to unskilled users both off-line, before starting a working session, and on-line.
The off-line training helps the user to get familiar with the automatic system and learn the task to perform. This is done in a virtual environment replicating the real scenario and working situations. At this stage, the received training is tailored to meet the measured user's capabilities and mental model. Additionally, while the process is ongoing, the user receives additional on-line training that provides guidance in the use of the machine or the robot by means of augmented reality <cit.>. This module adapts the training level also to the current understanding of the process, assessed, for example, by tracking the user's errors and eye movements. Additionally, the teaching module hosts an industrial social system, providing a contextual help menu that broadcasts a request for help using a social network medium (e.g. an app for iOS or Android). In the case of a problem, the operator can contact other qualified experts within the company or, if they cannot provide sufficient help, further experts, e.g. from the plant manufacturer, can be contacted easily. The request is sent to the local workers' community or to the service level, i.e., to users who have the app installed on their mobile devices. The HMI augments the message with the detailed context of use of the machine, in order to help the expert correctly address the problem experienced by the local inexperienced operator. The system automatically routes the request to the users who qualified at app login as experts in that particular task/machine function. The experts can then respond using a message or voice call to support the unskilled user.§ EXPECTED RESULTS The results of the approach proposed in this paper will be measured at the end of the INCLUSIVE project on three different industrial use cases, which come from three different market sectors and address different user groups: young and elderly artisans, seasonal workers, people with a low level of education, and people with certified limited cognitive abilities and physical impairments ranging from mild to severe. In particular, one use case refers to a large bottling company, with automatic filling and packaging machines, whose employees include people with certified limited cognitive abilities and physical impairments. The second use case is a company producing woodworking machines for artisan shops and small companies. Thus, the final users of the HMI are elderly subjects with a low education level and limited computer literacy. Finally, the last use case aims at matching a system integrator for robotic applications to a manufacturing company producing machines for bending metal parts and components, which are currently manually fed, mainly because of the variability of the process itself and the lack of skilled personnel able to manage automatic machines or robots. For the time being, the expected impacts in terms of increased customization, optimized production flexibility and wider acceptance of automation technologies have been investigated. They are summarized in Fig. <ref>.§.§ Effect on customization of manufacturing processes The results of the approach proposed in this paper are expected to have a significant impact on the customization of manufacturing processes, guaranteeing the possibility of introducing significant levels of customization in the products and in the production processes.
This will be achieved thanks to the developed smart HMI that will adapt its behaviour at run time, accommodating time-variable needs together with the users' capabilities. In particular, the HMI developed according to the proposed methodology will make it possible to introduce high levels of customization in manufacturing process machines, while reducing the complexity of the interaction to a sufficiently low level, to enable also non-specialized personnel and operators with disabilities or with low education levels to effectively interact with the machines. Moreover, as mentioned above for the third use case of the INCLUSIVE project, several manufacturing processes are nowadays mainly performed manually, due to the high variability of the production batches. Despite the availability of automatic machines able to perform such operations, their potential is often limited by the inability of human operators to interact with such complex systems. In this scenario, the availability of such adaptive user interfaces, which also support on-line and off-line training of operators, allows them to effectively utilize automatic machines. §.§ Effect on productivity of manufacturing processes Additionally, we expect that the application of the proposed methodology to the design of adaptive HMIs will have a significant impact on the productivity of the overall manufacturing processes. Indeed, the performance of the operators, in particular of elderly, inexperienced or disabled ones, will be significantly improved, since they will be able to deal with complex machines and production systems in a profitable manner. In particular, the proposed inclusive HMI is expected to allow a significant reduction of the time needed to complete each production task and of the down-time for adaptation of robotic cells or automatic machines to a variation of the production, and an increase of the overall line productivity, in terms of overall equipment effectiveness. §.§ Social impact on employment and working conditions Some categories of workers are widely recognized as particularly vulnerable, specifically in the presence of a worldwide economic crisis. Older employees, people with low education levels, and disabled people fall among those categories. The vulnerability of those people is related to the fact that they are the most likely to lose their job, and the least likely to be re-trained and re-employed. This is due either to the difficulty in effectively utilizing complex modern computer-aided manufacturing equipment, or to physical impairments that prevent some kinds of activities. A significant impact on the employment of elderly, low-educated and disabled people is expected to result from the application of the proposed methodology. The resulting adaptive HMI, in fact, will automatically adapt to the skills of the current user, supporting the initial (off-line and on-line) training phase, and letting each user reach high levels of productivity in a short time. This will significantly reduce the risk that those people lose their job due to a lack of specific skills. At the same time, in case of loss of the job, it will increase the possibility of re-employment, since the re-training phase is significantly reduced. Furthermore, as is well known, people very often refuse innovation and automation. The main reasons are related to the fear of technological unemployment (i.e. loss of a job due to a technological change) and, more in general, to difficulties in adapting to new technologies and procedures.
However, technological innovation is mandatory for achieving the constantly increasing productivity and quality requirements. The proposed methodology will have a significant impact on the acceptability of automatic machines and robotic cells in traditional production lines. In fact, providing a smart HMI that will adapt to the current user's skill will help in making the users comfortable with innovative technologies and procedures. We expect that the proposed methodology will have an impact related to employment and working conditions in terms of usability and cognitive load. Usability will be evaluated based on surveys that will monitor the degree of satisfaction of users, comparing traditional HMIs with the smart HMI developed according to the proposed approach. The cognitive load will be computed by non-invasive measurement of different physiological quantities, such as heart rate, blood pressure or pupillary response. §.§ Impact on the market for automatic machines and robotic cells As detailed above, we expect a significant impact on the capability of elderly, low-educated or disabled operators to profitably utilize advanced (and complex) automatic machines and robotic cells. This increased level of acceptability will open new market opportunities for producers and integrators of automatic machines and robotic cells. In particular, the smart HMI system designed according to the proposed approach will open new market opportunities for automatic machines and robotic cells in traditionally hostile manufacturing environments, such as SMEs and artisan workshops.§.§ Impact on the market for HMI systems According to a report published at the end of 2015 <cit.>, the value of the worldwide HMI market is estimated to reach US$5,579.3 million by 2019, expanding at a CAGR (Compound Annual Growth Rate) of 10.4% during the period from 2013 to 2019. According to this report, one of the key factors in the growth of the HMI market is to be found in the high rate of development in industrial automation: in fact, complex automatic machines and robotic cells require modern HMI systems in order to be utilized effectively by non-specialized workers. The HMI market is composed of different segments: touchscreens or displays, industrial PCs, interface software, and various other controllers. Among these, the market for interface software leads the global HMI market at present: analysts project this market to report the fastest growth during the forecast period. The application of the proposed methodology will further push this positive trend, as a consequence of the increasing market opportunities for automatic machines and robotic cells. § CONCLUSIONS In this paper, we presented a methodology for the design of adaptive human-centred HMIs for industrial machines and robots. The interfaces developed according to the proposed approach adapt the information presented to the user, and its visualization, to the user's capabilities and strain level. Thus, they allow for inclusive and flexible working environments accessible to any kind of operator, regardless of age, education level, cognitive and physical impairments, and experience in the tasks to be performed. Additionally, the proposed approach considers a teaching module that adaptively provides training to unskilled users on the basis of their capabilities and actual understanding of the working scenario. The approach presented in this paper has been devised within the framework of the European project INCLUSIVE, which is ongoing.
Thus, the results of the proposed methodology will be measured at the end of the INCLUSIVE project on three different industrial use cases. For the time being, the expected impact in terms of increased customization, optimized production flexibility and wider acceptance of automation technologies is investigated in this paper. § ACKNOWLEDGEMENT This work has been supported by the INCLUSIVE collaborative project, which has received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 723373.
http://arxiv.org/abs/1706.08467v1
{ "authors": [ "Valeria Villani", "Lorenzo Sabattini", "Julia N. Czerniak", "Alexander Mertens", "Birgit Vogel-Heuser", "Cesare Fantuzzi" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20170626163810", "title": "Towards Modern Inclusive Factories: A Methodology for the Development of Smart Adaptive Human-Machine Interfaces" }
Fast Optimization Algorithms and the Cosmological Constant Brad Lackey June 23, 2017 ========================================================== This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized spectral complexity: their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the mnist and cifar10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and secondly that the presented bound is sensitive to this complexity.§ OVERVIEW Neural networks owe their astonishing success not only to their ability to fit any data set: they also generalize well, meaning they provide a close fit on unseen data. A classical statistical adage is that models capable of fitting too much will generalize poorly; what's going on here? Let's navigate the many possible explanations provided by statistical theory. A first observation is that any analysis based solely on the number of possible labellings of a finite training set — as is the case with VC dimension — is doomed: if the function class can fit all possible labels (as is the case with neural networks in standard configurations <cit.>), then this analysis cannot distinguish it from the collection of all possible functions! Next let's consider scale-sensitive measures of complexity, such as Rademacher complexity and covering numbers, which (can) work directly with real-valued function classes, and moreover are sensitive to their magnitudes. <Ref> plots the excess risk (the test error minus the training error) across training epochs against one candidate scale-sensitive complexity measure, the Lipschitz constant of the network (the product of the spectral norms of the weight matrices), and demonstrates that they are tightly correlated (which is not the case for, say, the l_2 norm of the weights). The data considered in <Ref> is the standard cifar10 dataset, both with original and with random labels, which has been used as a sanity check when investigating neural network generalization <cit.>. There is still an issue with basing a complexity measure purely on the Lipschitz constant (although it has already been successfully employed to regularize neural networks <cit.>): as depicted in <Ref>, the measure grows over time, despite the excess risk plateauing. Fortunately, there is a standard resolution to this issue: investigating the margins (a precise measure of confidence) of the outputs of the network. This tool has been used to study the behavior of 2-layer networks, boosting methods, SVMs, and many others <cit.>; in boosting, for instance, there is a similar growth in complexity over time (each training iteration adds a weak learner), whereas margin bounds correctly stay flat or even decrease. This behavior is recovered here: as depicted in <Ref>, even though standard networks exhibit growing Lipschitz constants, normalizing these Lipschitz constants by the margin instead gives a decaying curve. §.§ Contributions This work investigates a complexity measure for neural networks that is based on the Lipschitz constant, but normalized by the margin of the predictor. The two central contributions are as follows.
* <Ref> below will give the rigorous statement of the generalization bound that is the basis of this work. In contrast to prior work, this bound: (a) scales with the Lipschitz constant (product of spectral norms of weight matrices) divided by the margin; (b) has no dependence on combinatorial parameters (e.g., number of layers or nodes) outside of log factors; (c) is multiclass (with no explicit dependence on the number of classes); (d) measures complexity against a reference network (e.g., for the ResNet <cit.>, the reference network has identity mappings at each layer). The bound is stated below, with a general form and analysis summary appearing in <Ref> and the full details relegated to the appendix. * An empirical investigation, in <Ref>, of neural network generalization on the standard datasets mnist, cifar10, and cifar100 using the preceding bound. Rather than using the bound to provide a single number, it can be used to form a margin distribution as in <Ref>. These margin distributions will illuminate the following intuitive observations: (a) cifar10 is harder than mnist; (b) random labels make cifar10 and mnist much more difficult; (c) the margin distributions (and bounds) converge during training, even though the weight matrices continue to grow; (d) l_2 regularization (“weight decay”) does not significantly impact margins or generalization. A more detailed description of the margin distributions is as follows. Suppose a neural network computes a function f:ℝ^d →ℝ^k, where k is the number of classes; the most natural way to convert this to a classifier is to select the output coordinate with the largest magnitude, meaning x ↦ arg max_j f(x)_j. The margin, then, measures the gap between the output for the correct label and other labels, meaning f(x)_y - max_j≠ y f(x)_j. Unfortunately, margins alone do not seem to say much; see for instance <Ref>, where the collections of all margins for all data points — the unnormalized margin distribution — are similar for cifar10 with and without random labels. What is missing is an appropriate normalization, as in <Ref>. This normalization is provided by <Ref>, which can now be explained in detail. To state the bound, a little bit of notation is necessary. The networks will use L fixed nonlinearities (σ_1,…, σ_L), where σ_i:ℝ^d_i-1→ℝ^d_i is ρ_i-Lipschitz (e.g., as with coordinate-wise ReLU, and max-pooling, as discussed in <Ref>); occasionally, it will also hold that σ_i(0) = 0. Given L weight matrices 𝒜 = (A_1,…,A_L), let F_𝒜 denote the function computed by the corresponding network: F_𝒜(x) := σ_L(A_L σ_L-1(A_L-1⋯σ_1(A_1 x)⋯)). The network output F_𝒜(x)∈ℝ^d_L (with d_0=d and d_L=k) is converted to a class label in {1,…,k} by taking the max over components, with an arbitrary rule for breaking ties. Whenever input data x_1,…,x_n∈ℝ^d are given, collect them as rows of a matrix X ∈ℝ^n× d. Occasionally, notation will be overloaded to discuss F_𝒜(X^⊤), a matrix whose i-th column is F_𝒜(x_i). Let W denote the maximum of {d,d_1,…,d_L}. The l_2 norm ‖·‖_2 is always computed entry-wise; thus, for a matrix, it corresponds to the Frobenius norm. Next, define a collection of reference matrices (M_1,…,M_L) with the same dimensions as A_1,…,A_L; for instance, to obtain a good bound for ResNet <cit.>, it is sensible to set M_i := I, the identity map, and the bound below will worsen as the network moves farther from the identity map; for AlexNet <cit.>, the simple choice M_i=0 suffices. Finally, let ‖·‖_σ denote the spectral norm, and let ‖·‖_p,q denote the (p,q) matrix norm, defined by ‖A‖_p,q := ‖(‖A_:,1‖_p, …, ‖A_:,m‖_p)‖_q for A∈ℝ^d×m.
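To ground this notation, the following minimal Python sketch computes F_𝒜(x) with ReLU nonlinearities and the margin f(x)_y - max_j≠y f(x)_j; the toy dimensions and random weights are illustrative only and do not correspond to any experiment in this paper.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(weights, x, nonlinearity=relu):
    """Compute F_A(x) = sigma_L(A_L sigma_{L-1}(... sigma_1(A_1 x) ...))."""
    h = x
    for A in weights:
        h = nonlinearity(A @ h)
    return h

def margin(scores, y):
    """The margin f(x)_y - max_{j != y} f(x)_j; positive iff the label wins."""
    rest = np.delete(scores, y)
    return scores[y] - rest.max()

# toy example with random weights (d = 8 inputs, k = 3 classes)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 8)), rng.normal(size=(3, 16))]
x = rng.normal(size=8)
print(margin(forward(weights, x), y=0))
```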
The spectral complexity R_F_𝒜 = R_𝒜 of a network F_𝒜 with weights 𝒜 is defined as R_𝒜 := (∏_i=1^L ρ_i ‖A_i‖_σ) (∑_i=1^L ‖A_i^⊤ - M_i^⊤‖_2,1^2/3 / ‖A_i‖_σ^2/3)^3/2. The following theorem provides a generalization bound for neural networks whose nonlinearities are fixed but whose weight matrices 𝒜 have bounded spectral complexity R_𝒜. Let nonlinearities (σ_1,…,σ_L) and reference matrices (M_1,…, M_L) be given as above (i.e., σ_i is ρ_i-Lipschitz and σ_i(0) = 0). Then for (x,y),(x_1,y_1),…,(x_n,y_n) drawn iid from any probability distribution over ℝ^d×{1,…,k}, with probability at least 1-δ over ((x_i,y_i))_i=1^n, every margin γ > 0 and network F_𝒜 : ℝ^d →ℝ^k with weight matrices 𝒜 = (A_1,…,A_L) satisfy Pr[ arg max_j F_𝒜(x)_j ≠ y ] ≤ ℛ̂_γ(F_𝒜) + Õ( (‖X‖_2 R_𝒜 / (γ n)) ln(W) + √(ln(1/δ)/n) ), where ℛ̂_γ(f) ≤ n^-1 ∑_i 1[ f(x_i)_y_i ≤ γ + max_j≠ y_i f(x_i)_j ] and ‖X‖_2 = √(∑_i ‖x_i‖_2^2). The full proof and a generalization beyond spectral norms are relegated to the appendix, but a sketch is provided in <Ref>, along with a lower bound. <Ref> also gives a discussion of related work: briefly, it's essential to note that margin and Lipschitz-sensitive bounds have a long history in the neural networks literature <cit.>; the distinction here is the sensitivity to the spectral norm, and that there is no explicit appearance of combinatorial quantities such as numbers of parameters or layers (outside of log terms, and indices to summations and products). To close, miscellaneous observations and open problems are collected in <Ref>. § GENERALIZATION CASE STUDIES VIA MARGIN DISTRIBUTIONS In this section, we empirically study the generalization behavior of neural networks, via margin distributions and the generalization bound stated in <Ref>. Before proceeding with the plots, it's a good time to give a more refined description of the margin distribution, one that is suitable for comparisons across datasets. Given n pattern/label pairs ((x_i,y_i))_i=1^n, with patterns as rows of matrix X∈ℝ^n× d, and given a predictor F_𝒜:ℝ^d→ℝ^k, the (normalized) margin distribution is the univariate empirical distribution of the labeled data points each transformed into a single scalar according to (x,y) ↦ (F_𝒜(x)_y - max_i≠ y F_𝒜(x)_i) / (R_𝒜 ‖X‖_2 / n), where the spectral complexity R_𝒜 is from <ref>. The normalization is thus derived from the bound in <Ref>, but ignoring log terms. Taken this way, the two margin distributions for two datasets can be interpreted as follows. Considering any fixed point on the horizontal axis, if the cumulative distribution of one density is lower than the other, then it corresponds to a lower right-hand side in <Ref>. For no reason other than visual interpretability, the plots here will instead depict a density estimate of the margin distribution. The vertical and horizontal axes are rescaled in different plots, but the random and true margin distributions are always the same. A little more detail about the experimental setup is as follows. All experiments were implemented in Keras <cit.>. In order to minimize conflating effects of optimization and regularization, the optimization method was vanilla SGD with step size 0.01, and all regularization (weight decay, batch normalization, etc.) was disabled.
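As an illustration of how this normalization could be computed, the following numpy sketch evaluates the spectral complexity R_𝒜 (specialized to M_i = 0 and ρ_i = 1 unless stated otherwise) and rescales raw margins by R_𝒜 ‖X‖_2 / n; the function names are hypothetical and this is not the authors' experimental code.

```python
import numpy as np

def spectral_complexity(weights, refs=None, rhos=None):
    """R_A = (prod_i rho_i ||A_i||_sigma)
           * (sum_i (||(A_i - M_i)^T||_{2,1} / ||A_i||_sigma)^{2/3})^{3/2}."""
    L = len(weights)
    refs = refs if refs is not None else [np.zeros_like(A) for A in weights]
    rhos = rhos if rhos is not None else [1.0] * L
    specs = [np.linalg.norm(A, ord=2) for A in weights]  # spectral norms
    # (2,1) norm of (A - M)^T: sum of the l2 norms of its columns,
    # i.e. the sum of the l2 norms of the rows of (A - M)
    g21 = [np.linalg.norm((A - M).T, axis=0).sum() for A, M in zip(weights, refs)]
    lipschitz = np.prod([r * s for r, s in zip(rhos, specs)])
    correction = sum((g / s) ** (2.0 / 3.0) for g, s in zip(g21, specs)) ** 1.5
    return lipschitz * correction

def normalized_margins(raw_margins, weights, X):
    """Divide raw margins by R_A * ||X||_2 / n, as in the definition above."""
    n = X.shape[0]
    scale = spectral_complexity(weights) * np.linalg.norm(X) / n
    return np.asarray(raw_margins) / scale
```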
“cifar” in general refers to cifar10, however cifar100 will also be explicitly mentioned. The network architecture is essentially AlexNet <cit.> with all normalization/regularization removed, and with no adjustments of any kind (even to the learning rate) across the different experiments. Comparing datasets. A first comparison is of cifar10 and the standard mnist digit data. mnist is considered “easy”, since any of a variety of methods can achieve roughly 1% test error. The “easiness” is corroborated by <Ref>, where the margin distribution for mnist places all its mass far to the right of the mass for cifar10. Interestingly, randomizing the labels of mnist, as in <Ref>, results in a margin distribution to the left of not only cifar10, but also slightly to the left of (but close to) cifar10 with randomized labels. Next, <Ref> compares cifar10 and cifar100, where cifar100 uses the same input images as cifar10; indeed, cifar10 is obtained from cifar100 by collapsing the original 100 categories into 10 groups. Interestingly, cifar100, from the perspective of margin bounds, is just as difficult as cifar10 with random labels. This is consistent with the large observed test error on cifar100 (which has not been “optimized” in any way via regularization). Lastly, <Ref> replaces the input images with random images sampled from Gaussians matching the first- and second-order image statistics (see <cit.> for similar experiments). Convergence of margins. As was pointed out in <Ref>, the weights of the neural networks do not seem to converge in the usual sense during training (the norms grow continually). However, as depicted in <Ref>, the sequence of (normalized) margin distributions is itself converging. Regularization. As remarked in <cit.>, regularization only seems to bring minor benefits to test error (though adequate to be employed in all cutting-edge results). This observation is certainly consistent with the margin distributions in <Ref>, which do not improve (e.g., by shifting to the right) in any visible way under regularization. An open question, discussed further in <Ref>, is to design regularization that improves margins.§ ANALYSIS OF MARGIN BOUND This section will sketch the proof of <Ref>, give a lower bound, and discuss related work. §.§ Multiclass margin bound The starting point of this analysis is a margin-based bound for multiclass prediction. To state the bound, first recall that the margin operator ℳ:ℝ^k×{1,…,k}→ℝ is defined as ℳ(v,y) := v_y - max_i≠ y v_i, and define the ramp loss ℓ_γ:ℝ→ℝ^+ as ℓ_γ(r) := 0 if r < -γ; 1 + r/γ if r ∈ [-γ,0]; and 1 if r > 0; and the ramp risk as ℛ_γ(f) := 𝔼(ℓ_γ(-ℳ(f(x),y))). Given a sample S:=((x_1,y_1),…,(x_n,y_n)), define an empirical counterpart ℛ̂_γ of ℛ_γ as ℛ̂_γ(f) := n^-1∑_i ℓ_γ(-ℳ(f(x_i), y_i)); note that ℛ_γ and ℛ̂_γ respectively upper bound the probability and fraction of errors on the source distribution and training set. Lastly, given a set of real-valued functions ℋ, define the Rademacher complexity as ℜ(ℋ_|S) := n^-1 𝔼 sup_h ∈ℋ ∑_i=1^n ε_i h(x_i,y_i), where the expectation is over the Rademacher random variables (ε_1,…,ε_n), which are iid with Pr[ε_1 = 1] = Pr[ε_1 = -1] = 1/2. With this notation in place, the basic bound is as follows. Given a function class ℋ with ℋ ∋ f : ℝ^d →ℝ^k and any γ>0, define ℋ_γ := {(x,y) ↦ ℓ_γ(-ℳ(f(x), y)) : f∈ℋ}. Then, with probability at least 1-δ over a sample S of size n, every f∈ℋ satisfies Pr[ arg max_i f(x)_i ≠ y ] ≤ ℛ̂_γ(f) + 2ℜ((ℋ_γ)_|S) + 3√(ln(1/δ)/(2n)). This bound is a direct consequence of standard tools in Rademacher complexity. In order to instantiate this bound, covering numbers will be used to directly upper bound the Rademacher complexity term ℜ((ℋ_γ)_|S).
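For concreteness, the ramp loss and the empirical ramp risk ℛ̂_γ defined above can be evaluated as in the following sketch (vectorized numpy; illustrative only):

```python
import numpy as np

def ramp_loss(r, gamma):
    """l_gamma(r): 0 for r < -gamma, 1 + r/gamma on [-gamma, 0], 1 for r > 0."""
    return np.clip(1.0 + r / gamma, 0.0, 1.0)

def empirical_ramp_risk(scores, labels, gamma):
    """hat{R}_gamma(f) = (1/n) sum_i l_gamma(-M(f(x_i), y_i))."""
    scores = np.asarray(scores, dtype=float)   # shape (n, k)
    n = scores.shape[0]
    correct = scores[np.arange(n), labels]
    masked = scores.copy()
    masked[np.arange(n), labels] = -np.inf     # exclude the true class
    margins = correct - masked.max(axis=1)     # M(f(x_i), y_i)
    return ramp_loss(-margins, gamma).mean()
```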
Interestingly, the choice of directly working in terms of covering numbers seems essential to providing a bound with no explicit dependence on k; by contrast, prior work primarily handles multiclass via a Rademacher complexity analysis on each coordinate of a k-tuple of functions, and pays a factor of √(k) <cit.>. §.§ Covering number complexity upper bounds This subsection proves <Ref> via <Ref> by controlling, via covering numbers, the Rademacher complexity ℜ((ℋ_γ)_|S) for networks with bounded spectral complexity. The notation here for (proper) covering numbers is as follows. Let 𝒩(U, ε, ‖·‖) denote the least cardinality of any subset V⊆ U that covers U at scale ε with norm ‖·‖, meaning sup_A∈ U min_B∈ V ‖A-B‖ ≤ ε. Choices of U that will be used in the present work include both the image ℋ_|S of data S under some function class ℋ, as well as the conceptually simpler choice of a family of matrix products. The full proof has the following steps. (I) A matrix covering bound for the affine transformation of each layer is provided in <Ref>; handling whole layers at once allows for more flexible norms. (II) An induction on layers then gives a covering number bound for entire networks; this analysis is only sketched here for the special case of norms used in <Ref>, but the full proof in the appendix culminates in a bound for more general norms (cf. <Ref>). (III) The preceding whole-network covering number leads to <Ref> via <Ref> and standard techniques. Step (I), matrix covering, is handled by the following lemma. The covering number considers the matrix product XA, where A will be instantiated as the weight matrix for a layer, and X is the data passed through all layers prior to the present layer. Let conjugate exponents (p,q) and (r,s) be given with p ≤ 2, as well as positive reals (a,b,ε) and positive integer m. Let matrix X ∈ℝ^n× d be given with ‖X‖_p ≤ b. Then ln 𝒩({XA : A∈ℝ^d× m, ‖A‖_q,s ≤ a}, ε, ‖·‖_2) ≤ ⌈ a^2 b^2 m^2/r / ε^2 ⌉ ln(2dm). The proof relies upon the Maurey sparsification lemma <cit.>, which is stated in terms of sparsifying convex hulls, and in its use here is inspired by covering number bounds for linear predictors <cit.>. To prove <Ref>, this matrix covering bound will be instantiated for the case of ‖A‖_2,1. It is possible to instead scale with ‖A‖_2 and ‖X‖_2, but even for the case of the identity matrix X=I, this incurs an extra dimension factor. The use of ‖A‖_2,1 here thus helps <Ref> avoid any appearance of W and L outside of log terms; indeed, the goal of covering a whole matrix at a time (rather than the more standard vector covering) was to allow this greater sensitivity and avoid combinatorial parameters. Step (II), the induction on layers, proceeds as follows. Let X_i denote the output of layer i but with images of examples as columns (thus X_0 = X^⊤), and inductively suppose there exists a cover element X̂_i for X_i which depends on covering matrices (Â_1,…,Â_i-1) chosen to cover weight matrices in earlier layers. Thanks to <Ref>, there also exists Â_i so that ‖A_i X̂_i - Â_i X̂_i‖_2 ≤ ε_i. The desired cover element is thus X̂_i+1 = σ_i(Â_i X̂_i) where σ_i is the nonlinearity in layer i; indeed, supposing σ_i is ρ_i-Lipschitz, ‖X_i+1 - X̂_i+1‖_2 ≤ ρ_i ‖A_i X_i - Â_i X̂_i‖_2 ≤ ρ_i ( ‖A_i X_i - A_i X̂_i‖_2 + ‖A_i X̂_i - Â_i X̂_i‖_2 ) ≤ ρ_i ‖A_i‖_σ ‖X_i - X̂_i‖_2 + ρ_i ε_i, where the first term is controlled with the inductive hypothesis.
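Writing τ_i := ‖X_i - X̂_i‖_2 as shorthand (introduced here only for this sketch, with indices shifted so that layer i carries (A_i, σ_i, ρ_i)), the recursion above unrolls as

```latex
% A sketch of the unrolled per-layer recursion, under the shorthand
% \tau_i := \|X_i - \hat{X}_i\|_2 and the indexing convention stated in the lead-in.
\tau_i \;\le\; \rho_i \|A_i\|_\sigma \,\tau_{i-1} \;+\; \rho_i \epsilon_i,
\qquad \tau_0 = 0,
\qquad\text{so that}\qquad
\tau_L \;\le\; \sum_{i=1}^{L} \epsilon_i \,\rho_i \prod_{j=i+1}^{L} \rho_j \|A_j\|_\sigma .
```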
Since X_i+1 depends on each choice (A_i,…,A_i), the cardinality of the full network cover is the product of the individual matrix covers.The preceding proof had no sensitivity to the particular choice of norms; it merely required an operator norm on A_i, as well as some other norm that allows matrix covering. Such an analysis is presented in full generality in <Ref>. Specializing to the particular case of spectral norms and (2,1) group norms leads to the following full-network covering bound.Let fixed nonlinearities (σ_1,…,σ_L) and reference matrices (M_1,…, M_L) be given, where σ_i is ρ_i-Lipschitz and σ_i(0)=0. Let spectral norm bounds (s_1,…,s_L), and matrix (2,1) norm bounds (b_1,…,b_L) be given. Let data matrix X∈^n× d be given, where the n rows correspond to data points. Let _X denote the family of matrices obtained by evaluating X with all choices of network F_:_X :=F_(X^T) := (A_1,…,A_L),A_i_σ≤ s_i,A_i^⊤ - M_i^⊤_2,1≤ b_i ,where each matrix has dimension at most W along each axis. Then for any >0, ln(_X, , ·_2) ≤X^2_2 ln(2W^2)/^2∏_j=1^L s_j^2ρ_j^2 ∑_i=1^L b_i/s_i^2/3^3. What remains is (III): <Ref> can be combined with the standard Dudley entropy integral upper bound on Rademacher complexity (see e.g. <cit.>), which together with <Ref> gives <Ref>.§.§ Rademacher complexity lower bounds By reduction to the linear case (i.e., removing all nonlinearities), it is easy to provide a lower bound on the Rademacher complexity of the networks studied here.Unfortunately, this bound only scales with the product of spectral norms, and not the other terms in R_ (cf. <ref>).Consider the setting of <Ref>, but all nonlinearities are the ReLU z↦max{0,z}, the output dimension is d_L=1, and all non-output dimensions are at least 2 (and hence W≥ 2). Let data S:= (x_1,…,x_n) be collected into data matrix X ∈^n× d. Then there is a c such that for any scalar r>0,[3][2] F_ := (A_1,…,A_L), ∏_i A_i_σ≤ r _|S≥ c*X_2r.Note that, due to the nonlinearity, the lower bound should indeed depend on ∏_i A_i_σ and not ∏_i A_i_σ; as a simple sanity check, there exist networks for which the latter quantity is 0, but the network does not compute the zero function. §.§ Related workTo close this section on proofs, it is a good time to summarize connections to existing literature.The algorithmic idea of large margin classifiers was introduced in the linear case by <cit.> (see also <cit.>). <cit.> gave an intuitive explanation of the performance of these methods based on a sample-dependent VC-dimension calculation, but without generalization bounds. The first rigorous generalization bounds for large margin linear classifiers <cit.> required a scale-sensitive complexity analysis of real-valued function classes. At the same time, a large margin analysis was developed for two-layer networks <cit.>, indeed with a proof technique that inspired the layer-wise induction used to prove <Ref> in the present work. Margin theory was quickly extended to many other settings (see for instance the survey by <cit.>), one major success being an explanation of the generalization ability of boosting methods, which exhibit an explicit growth in the size of the function class over time, but a stable excess risk <cit.>. The contribution of the present work is to provide a margin bound (and corresponding Rademacher analysis) that can be adapted to various operator norms at each layer. 
Additionally, the present work operates in the multiclass setting, and avoids an explicit dependence on the number of classes k, which seems to appear in prior work <cit.>.There are numerous generalization bounds for neural networks, including VC-dimension and fat-shattering bounds (many of these can be found in <cit.>). Scale-sensitive analysis of neural networks started with <cit.>, which can be interpreted in the present setting as utilizing data norm ·_∞ and operator norm ·_∞→∞ (equivalently, the norm A_i^⊤_1,∞ on weight matrix A_i). This analysis can be adapted to give a Rademacher complexity analysis <cit.>, and has been adapted to other norms <cit.>, although the ·_∞ setting appears to be necessary to avoid extra combinatorial factors. More work is still needed to develop complexity analyses that have matching upper and lower bounds, and also to determine which norms are well-adapted to neural networks as used in practice.The present analysis utilizes covering numbers, and is most closely connected to earlier covering number bounds <cit.>, themselves based on the earlier fat-shattering analysis <cit.>, however the technique here of pushing an empirical cover through layers is akin to VC dimension proofs for neural networks <cit.>. The use of Maurey's sparsification lemma was inspired by linear predictor covering number bounds <cit.>.Comparison to preprint. The original preprint of this paper <cit.> featured a slightly different version of the spectral complexity R_, given by ∏_i=1^L ρ_i A_i_σ∑_i=1^L A_i - M_i_1^2/3/A_i_σ^2/3^3/2. In the present version eq:spec_comp, each A_i - M_i_1 term is replaced by A_i^⊤ - M_i^⊤_2,1. This is a strict improvement since for any matrix A∈^d×m one has *A_2,1≤*A_1, and in general the gap between these two norms can be as large as √(d).On a related note, all of the figures in this paper use the _1 norm in the spectral complexity R_ instead of the (2,1) norm. Variants of the experiments described in sec:empirical were carried out using each of the l_1, (2,1), and l_2 norms in the (∑_i=1^L(·)^2/3)^3/2 term with negligible difference in the results.Since spectrally-normalized margin bounds were first proposed in the preprint <cit.>, subsequent works <cit.>re-derived a similar spectrally-normalized bound using the PAC-Bayes framework. Specifically, these works showed that R_ may be replaced (up to ln(W) factors) by: [1]∏_i=1^L ρ_i A_i_σ· L[1]∑_i=1^L (√(W)A_i - M_i_2)^2/A_i_σ^2^1/2. Unfortunately, this bound never improves on <Ref>, and indeed can be derived from it as follows. First, the dependence on the individual matrices A_i in the second term of this bound can be obtained from <Ref> since for any A∈^d×m it holds that *A^⊤_2,1≤√(d)*A_2. Second, the functional form (∑_i=1^L(·)^2/3)^3/2 appearing in <Ref> may be replaced by the form L(∑_i=1^L(·)^2)^1/2 appearing above by using *α_2/3≤L*α_2 which holds for any α∈^L (and can be proved, for instance, with Jensen's inequality).§ FURTHER OBSERVATIONS AND OPEN PROBLEMS Adversarial examples. Adversarial examples are a phenomenon where the neural network predictions can be altered by adding seemingly imperceptible noise to an input <cit.>. This phenomenon can be connected to margins as follows.The margin is nothing more than the distance an input must traverse before its label is flipped; consequently, low margin points are more susceptible to adversarial noise than high margin points. 
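Such a susceptibility check is easy to script. The sketch below is an illustrative reconstruction, not the original experimental code: the `model` interface and the default noise scale of 0.15 are assumptions mirroring the experiment reported next.

```python
import numpy as np

def flip_rate_under_noise(model, x, scale=0.15, n_trials=10, seed=0):
    """Fraction of inputs whose predicted label changes under uniform noise.

    model(x) -> (n, k) logits; x: (n, d) inputs on the model's input scale.
    Uniform noise in [-scale, scale] is added independently in each trial.
    """
    rng = np.random.default_rng(seed)
    base_pred = model(x).argmax(axis=1)
    flips = 0.0
    for _ in range(n_trials):
        noisy = x + rng.uniform(-scale, scale, size=x.shape)
        flips += np.mean(model(noisy).argmax(axis=1) != base_pred)
    return flips / n_trials

# Example usage: compare the 100 lowest- and highest-margin training points,
# with `margin` as in the earlier sketch.
# order = np.argsort(margin(model(x_train), y_train))
# low, high = order[:100], order[-100:]
# print(flip_rate_under_noise(model, x_train[low]))
# print(flip_rate_under_noise(model, x_train[high]))
```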
Concretely, taking the 100 lowest margin inputs from and adding uniform noise at scale 0.15 yielded flipped labels on 5.86% of the images, whereas the same level of noise on high margin points yielded 0.04% flipped labels. Can the bounds here suggest a way to defend against adversarial examples? Regularization. It was observed in <cit.> that explicit regularization contributes little to the generalization performance of neural networks.In the margin framework, standard weight decay (l_2) regularization seemed to have little impact on margin distributions in <Ref>. On the other hand, in the boosting literature, special types of regularization were developed to maximize margins <cit.>; perhaps a similar development can be performed here? SGD. The present analysis applies to predictors that have large margins; what is missing is an analysis verifying that SGD applied to standard neural networks returns large margin predictors! Indeed, perhaps SGD returns not simply large margin predictors, but predictors that are well-behaved in a variety of other ways that can be directly translated into refined generalization bounds. Improvements to <Ref>. There are several directions in which <Ref> might be improved. Can a better choice of layer geometries (norms) yield better bounds on practical networks? Can the nonlinearities' worst-case Lipschitz constant be replaced with an (empirically) averaged quantity? Alternatively, can better lower bounds rule out these directions? Rademacher vs. covering. Is it possible to prove <Ref> solely via Rademacher complexity, with no invocation of covering numbers? §.§ AcknowledgementsThe authors thank Srinadh Bhojanapalli, Ryan Jian, Behnam Neyshabur, Maxim Raginsky, Andrew J. Risteski, and Belinda Tzen for useful conversations and feedback. The authors thank Ben Recht for giving a provocative lecture at the Simons Institute, stressing the need for understanding of both generalization and optimization of neural networks. M.T. and D.F. acknowledge the use of a GPU machine provided by Karthik Sridharan and made possible by an NVIDIA GPU grant. D.F. acknowledges the support of the NDSEG fellowship. P.B. gratefully acknowledges the support of the NSF through grant IIS-1619362 and of the Australian Research Council through an Australian Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. The authors thank the Simons Institute for the Theory of Computing Spring 2017 program on the Foundations of Machine Learning. Lastly, the authors are grateful to La Burrita (both the north and the south Berkeley campus locations) for upholding the glorious tradition of the California Burrito.plainnat § PROOFS This appendix collects various proofs omitted from the main text. §.§ Lipschitz properties of ReLU and max-pooling nonlinearitiesThe standard ReLU (“Rectified Linear Unit”) is the univariate mapping(r) := max{0, r}.When applied to a vector or a matrix, it operates coordinate-wise. While the ReLU is currently the most popular choice of univariate nonlinearity, another common choice is the sigmoid r↦ 1 / (1+exp(-r)). More generally, these univariate nonlinearities are Lipschitz, and this carries over to their vector and matrix forms as follows.If σ : ^d→^d is ρ-Lipschitz along every coordinate, then it is ρ-Lipschitz according to ·_p for any p≥ 1. for any z,z'∈^d,σ(z) - σ(z')_p = ∑_i |σ(z)_i - σ(z')_i|^p ^1/p≤∑_i ρ^p|z_i - z'_i|^p ^1/p = ρz - z'_p.Define a max-pooling operatoras follows. 
Given an input and output pair of finite-dimensional vector spacesand ' (possibly arranged as matrices or tensors), the max-pooling operator iterates over a collection of sets of indices(whose cardinality is equal to the dimension of '), and for each element of Z_i ∈ sets the corresponding coordinate i in the output to the maximum entry of the input over Z_i: given T ∈,(T)_i := max_j ∈ Z_i T_j.The following Lipschitz constant of pooling operators will depend on the number of times each coordinate is accessed across elements of ; when this operator is used in computer vision, the number of times is typically a small constant, for instance 5 or 9 <cit.>. Suppose that each coordinate j of the input appears in at most m elements of the collection Z. Then the max-pooling operatoris m^1/p-Lipschitzwrt ·_p for any p≥ 1. In particular, the max-pooling operator is 1-Lipschitz whenever Z forms a partition. Let T,T' ∈ be given. First consider any fixed set of indices Z ∈, and suppose without loss of generality that (T)_Z = max_j ∈ Z T_j ≥max_j ∈ Z T'_j. Then|(T)_Z - (T')_Z|^p = min_j' ∈ Zmax_j ∈ Z T_j - T'_j'^p ≤max_j ∈ Z T_j - T'_j ^p ≤∑_j ∈ Z T_j - T'_j ^p.Consequently,(T) - (T')_p = ∑_i |(T)_i - (T')_i|^p ^1/p = ∑_Z ∈ |(T)_Z - (T')_Z|^p ^1/p≤∑_Z ∈∑_j ∈ Z |T_j - T'_j|^p ^1/p = ∑_j∑_Z∈Z:j∈Z |T_j - T'_j|^p ^1/p≤ m∑_j|T_j - T'_j|^p ^1/p = m^1/pT - T'_p. §.§ Margin properties in <Ref> The goal of this subsection is to prove the general margin bound in <Ref>. To this end, it is first necessary to establish a few properties of the margin operator (v,j) := v_j - max_i≠ j v_i and of the ramp loss ℓ_λ.For every j and every p≥ 1, (·, j) is 2-Lipschitz wrt ·_p. Let v,v',j be given, and suppose (without loss of generality) (v,j) ≥(v',j). Choose coordinate i≠ j so that (v',j) = v'_j - v'_i. Then(v,j) - (v',j)=v_j - max_l≠ j v_j-v'_j - v'_i= v_j - v'_j + v'_i + min_l ≠ j(-v_l) ≤ v_j - v'_j + v'_i -v_i ≤ 2v-v'_∞≤ 2v-v'_p.Next, recall the definition of the ramp lossℓ_γ(r) :=0r< -γ, 1 + r/γ r ∈ [-γ,0], 1 r > 0,and of the ramp risk_γ(f) := (ℓ_γ(-(f(x),y))).(These quantities are standard; see for instance <cit.>.)For any f : ^d →^k and every γ>0,[ _i f(x)_i ≠ y] ≤[ (f(x),y) ≤ 0 ] ≤_γ(f),where thefollows any deterministic tie-breaking strategy.[ _i f(x)_i ≠ y]≤[ max_i≠ y f(x)_i ≥ f(x)_y] = [ -(f(x),y) ≥ 0 ] = [ -(f(x),y) ≥ 0 ] ≤ℓ_γ(-(f(x),y))With these tools in place, the proof of <Ref> is straightforward.Since ℓ_γ has range [0,1], it follows by standard properties of Rademacher complexity <cit.> that with probability at least 1-δ, every f∈ satisfies_γ(f) ≤_γ (f) + 2 ((_γ)_|S) + 3√(ln(2/δ)/2n).The bound now follows by applying <Ref> to the left hand side.§.§ Dudley Entropy IntegralThis section contains a slight variant of the standard Dudley entropy integral bound on the empirical Rademacher complexity (e.g. <cit.>), which is used in the proof of <Ref>. The presentation here diverges from standard presentations because the data metric (as in <ref>) is not normalized by √(n). The proof itself is entirely standard however — even up to constants — and is included only for completeness.Let F be a real-valued function class taking values in 0,1, and assume that 0∈F. Then(_|S) ≤inf_α>04α/√(n) + 12/n∫_α^√(n)√(log(F_|S, , ·_2))d.Let N∈ be arbitrary and let _i=√(n)2^-(i-1) for each i∈N. For each i let V_i denote the cover achieving (F_|S, _i, ·_2), so that∀f∈F∃v∈V_i*∑_t=1^n(f(x_t) - v_t)^2^1/2≤_i,and V_i = (F_|S, _i, ·_2). For a fixed f∈, let v^i[f] denote the nearest element in V__i. 
Then_sup_f∈∑_t=1^n_if(x_t) = _sup_f∈*∑_t=1^n_t(f(x_t)-v_t^N[f]) + ∑_i=1^N-1∑_t=1^n_t(v_t^i[f] - v_t^i+1[f]) - ∑_t=1^n_tv_t^1[f] ≤_sup_f∈*∑_t=1^n_t(f(x_t)-v_t^N[f]) + ∑_i=1^N-1_sup_f∈*∑_t=1^n_t(v_t^i[f] - v_t^i+1[f]) + _sup_f∈*∑_t=1^n_tv_t^1[f].For the third term, observe that it suffices to take V_1=*0, which implies_sup_f∈*∑_t=1^n_tv_t^1[f] = 0.The first term may be handled using Cauchy-Schwarz as follows:_sup_f∈*∑_t=1^n_t(f(x_t)-v_t^N[f])≤√(_∑_t=1^n(_t)^2)√(sup_f∈∑_t=1^n(f(x_t)-v_t^N[f])^2)≤√(n)_N.Last to take care of are the terms of the form_sup_f∈*∑_t=1^n_t(v_t^i[f] - v_t^i+1[f]).For each i, let W_i=*v^i[f]-v^i+1[f]|f∈. Then W_i≤V_iV_i+1≤V_i+1^2,_sup_f∈*∑_t=1^n_t(v_t^i[f] - v_t^i+1[f])≤_sup_w∈W_i*∑_t=1^n_tw_t,and furthermoresup_w∈W_i√(∑_t=1^nw_t^2)= sup_f∈*v^i[f] - v^i+1[f]_2≤sup_f∈*v^i[f] - (f(x_1),…,f(x_n))_2 + sup_f∈*(f(x_1),…,f(x_n)) - v^i+1[f]_2≤_i + _i+1= 3_i+1.With this observation, the standard Massart finite class lemma <cit.> implies_sup_w∈W_i*∑_t=1^n_tw_t≤√(2sup_w∈W_i∑_t=1^n(w_t)^2logW_i)≤ 3√(2logW_i)_i+1≤ 6√(logV_i+1)_i+1.Collecting all terms, this establishes _sup_f∈∑_t=1^n_tf(x_t)≤_N√(n) + 6∑_i=1^N-1_i+1√(log(F_|S, _i+1, ·_2))≤_N√(n) + 12∑_i=1^N(_i-_i+1)√(log(F_|S, _i, ·_2))≤_N√(n) + 12∫__N+1^√(n)√(log(F_|S, , ·_2))d.Finally, select any α>0 and take N be the largest integer with _N+1>α. Then _N=4_N+2<4α, and so_N√(n) + 12∫__N+1^√(n)√(log(F_|S, , ·_2))d≤ 4α√(n) + 12∫_α^√(n)√(log(F_|S, , ·_2))d.§.§ Proof of matrix covering (<Ref>)First recall the Maurey sparsification lemma.Fix Hilbert spacewith norm ·. Let U∈ be given with representation U = ∑_i=1^d α_i V_i where V_i∈ and α∈^d_≥ 0∖0. Then for any positive integer k, there exists a choice of nonnegative integers (k_1, …, k_d), ∑_i k_i = k, such thatU - α_1/k∑_i=1^d k_i V_i ^2 ≤α_1/k∑_i=1^d α_iV_i ^2 ≤α_1^2/kmax_i V_i^2.Set β := α_1 for convenience, and let (W_1, …, W_k) denote k iid random variables where [W_1 = β V_i] := α_i/β. Define W := k^-1∑_i=1^k W_i, wherebyW =W_1 = ∑_i=1^d β V_i α_i/β = U.ConsequentlyU - W^2= 1/k^2∑_i (U - W_i)^2 = 1/k^2∑_i U - W_i^2 + ∑_i ≠ jU - W_iU - W_j= 1/kU - W_1^2 = 1/k W_1^2 - U^2 ≤1/k W_1^2 = 1/k∑_i=1^d α_i/ββ V_i^2 = β/k∑_i=1^d α_iV_i^2 ≤β^2/kmax_iV_i^2.To finish, by the probabilistic method, there exists integers (j_1,…, j_k) ∈{1,…,d}^k and an assignment _i := β V_j_i and := k^-1∑_i=1^k_i such thatU - ^2 ≤U - W^2.The result now follows by defining integers (k_1,…,k_d) according to k_i := ∑_l=1^k [j_l = i]. As stated, the Maurey sparsification lemma seems to only grant bounds in terms of l_1 norms. As developed by <cit.> in the vector covering case, however, it is easy to handle other norms by rescaling the cover elements. With slightly more care, these proofs generalize to the matrix case, thus yielding the proof of <Ref>.Let matrix X∈^n× d be given, and obtain matrix Y ∈^n× d by rescaling the columns of X to have unit p-norm: Y_:, j := X_:,j / X_:,j_p. Set N:= 2dm and k := ⌈ a^2 b^2 m^2/r / ϵ^2 ⌉ and a̅ := a m^1/rX_p, and defineV_1,…, V_N :=gY_i_j^⊤ : g∈-1,+1, i ∈1,…,d, j ∈1,…,m, := a̅/k∑_i=1^N k_i V_i : k_i ≥ 0, ∑_i=1^N k_i = k= a̅/k∑_j=1^kV_i_j : (i_1,…,i_k) ∈N^k ,where the k_i's are integers. Now p≤ 2 combined with the definition of V_i and Y impliesmax_i V_i_2 ≤max_i Y_i_2 = max_i X_i_2/X_i_p≤ 1. It will now be shown thatis the desired cover. Firstly, || ≤ N^k by construction, namely by the final equality of <ref>. 
Secondly, let A with A_q,s≤ a be given, and construct a cover element withinusing the following technique, which follows the approach developed by <cit.> for linear prediction in which the basic Maurey lemma is applied to non-l_1 balls simply by rescaling.* Define α∈^d × m to be a “rescaling matrix” where every element of row j is equal to x_j_p; the purpose of α is to annul the rescaling of X introduced by Y, meaning XA = Y(α⊙ A) where “⊙” denotes element-wise product. Note,α_p,r =(α_:,1_p, …, α_:,m_p)_r= * (X_:,1_p,…,X_:,d_p) _p, …,(X_:,1_p,…,X_:,d_p) _p _r = m^1/r (X_:,1_p,…,X_:,d_p) _p = m^1/r∑_j=1^d X_:,j_p^p^1/p= m^1/r∑_j=1^d ∑_i=1^n X_i,j^p ^1/p = m^1/rX_p. * Define B := α⊙ A, whereby using conjugacy of ·_p,r and ·_q,s givesB_1 ≤α|A|≤α_p,rA_q,s≤ m^1/rX_p a = a̅.Consequently, XA is equal to YB = Y ∑_i=1^d ∑_j=1^m B_ij_i _j^⊤ = B_1 ∑_i=1^d ∑_j=1^m B_ij/B_1 Y _i _j^⊤∈a̅·(V_1,…,V_N),where (V_1,…,V_N) is the convex hull of V_1,…,V_N.* Combining the preceding constructions with <Ref>, there exist nonnegative integers (k_1,…,k_N) with ∑_i k_i = k withXA - a̅/k∑_i=1^N k_i V_i_2^2 = YB - a̅/k∑_i=1^N k_i V_i_2^2 ≤a̅^2/kmax_i V_i_2 ≤a^2 m^2/rX_p^2/k≤^2.The desired cover element is thus a̅/k∑_i k_i V_i ∈.§.§ A whole-network covering bound for general normsAs stated in the text, the construction of a whole-network cover via induction on layers does not demand much structure from the norms placed on the weight matrices. This subsection develops this general analysis. A tantalizing direction for future work is to specialize the general bound in other ways, namely ones that are better adapted to the geometry of neural networks as encountered in practice.The structure of the networks is the same as before; namely, given matrices = (A_1,…,A_L), define the mapping F_ as (<ref>), and more generally for i≤ L define _1^i:=(A_1,…,A_i) andF__1^i(Z) := σ_i(A_i σ_i-1(A_i-1⋯σ_1(A_1 Z) ⋯)),with the convention F_∅(Z)=Z. * Define two sequences of vector spaces _1,…,_L and _2,…,_L+1, where _i has a norm |·|_i and _i has norm ·_i.* The inputs Z ∈_1 satisfy a norm constraint |Z|_1 ≤ B. The subscript merely indicates an index, and does not refer to any l_1 norm. The vector space _1, and moreover the collection of vector spaces _i and _i, have no fixed meaning and are simply abstract vector spaces. However, when using these tools to prove <Ref>, _1 = ^d× n and Z∈_1 is formed by collecting the n data points into its columns; that is, Z=X^⊤.* The linear operators A_i : _i →_i+1 are associated with some operator norm |A_i|_i→ i+1≤ c_i:|A_i|_i→ i+1 := sup_|Z|_i ≤ 1A_i Z_i+1 = c_i.As stated before, these linear operators = (A_1,…,A_L) vary across functions F_. When used to prove <Ref>, Z is a matrix (the forward image of data matrix X^⊤ across layers), and these norms are all matrix norms.* The ρ_i-Lipschitz mappings σ_i : _i+1→_i+1 have ρ_i measured with respect to norms |·|_i+1 and ·_i+1: for any z,z' ∈_i+1,σ_i(z) - σ_i(z')_i+1≤ρ_iz - z' _i+1.These Lipschitz mappings are considered fixed within F_. Note again that these operations, when applied to prove <Ref>, operate on matrices that represent the forward images of all data points together.Lipschitz properties of the standard coordinate-wise ReLU and max-pooling operators can be found in <Ref>. Let (_1,…,_L) be given, along with fixed Lipschitz mappings (σ_1,…,σ_L) (where σ_i is ρ_i-Lipschitz), and operator norm bounds (c_1,…,c_L). 
Suppose the matrices = (A_1,…,A_L) lie within _1×⋯×_L where _i are arbitrary classes with the property that each A_i ∈_i has |A_i|_i→ i+1≤ c_i. Lastly, let data Z be given with |Z|_1≤ B. Then, lettingτ := ∑_j≤ L_j ρ_j ∏_l=j+1^L ρ_l c_l, the neural net images _Z := { F_(Z) : ∈_1×⋯×_L} have covering number bound_Z, τ, |·|_L+1≤ ∏_i=1^Lsup_(A_1,…,A_i-1)∀ j <iA_j ∈_j A_i F_(A_1,…,A_i-1)(Z) : A_i ∈_i, _i, ·_i+1. Inductively construct covers _1,…,_L of _2,…,_L+1 as follows. * Choose an _1-cover _1 of A_1 Z : A_1 ∈_1, thus|_1| ≤(A_1Z : A_1 ∈_1, _1, ·_2) =: N_1.* For every element F∈_i, construct an _i+1-cover _i+1(F) ofA_i+1σ_i(F) : A_i+1∈_i+1.Since the covers are proper, meaning F = A_iF_(A_1,…,A_i-1)(Z) for some matrices (A_1,…,A_i) ∈_1×⋯×_i, it follows that_i+1(F)≤sup_(A_1,…,A_i)∀ j≤ iA_j ∈_j A_i+1 F_A_1,…,A_i(Z) : A_i+1∈_i+1, _i+1, ·_i+2 =: N_i+1.Lastly form the cover_i+1 := ⋃_F∈_i_i+1(F),whose cardinality satisfies_i+1≤ |_i| · N_i+1≤∏_l=1^i+1 N_l.Define := σ_L(F) : F ∈_L; by construction,satisfies the desired cardinality constraint. to show that it is indeed a cover, fix any (A_1,…,A_L) satisfying the above constraints, and for convenience define recursively the mapped elementsF_1 = A_1 X ∈_2,G_i = σ_i(F_i) ∈_i+1 F_i+1 = A_i+1 G_i ∈_i+2 .The goal is to exhibit _L∈ satisfying |G_L - _L|_L+1≤τ. To this end, inductively construct approximating elements (_i,_i) as follows. * Base case: set _0 = X.* Choose _i ∈_i with A_i _i-1 - _i_i+1≤_i, and set _i := σ_i(_i). To complete the proof, it will be shown inductively that|G_i - _i|_i+1 ≤∑_1≤j≤ i_j ρ_j ∏_l=j+1^i ρ_l c_l.For the base case,|G_0 - _0|_1 = 0.For the inductive step,|G_i+1 - _i+1|_i+2 ≤ρ_i+1F_i+1 - _i+1_i+2≤ρ_i+1F_i+1 - A_i+1_i_i+2 + ρ_i+1A_i+1_i - _i+1_i+2≤ρ_i+1A_i+1_i+1→ i+2G_i - _i_i+1 + ρ_i+1_i+1≤ρ_i+1 c_i+1∑_j≤ i_j ρ_j ∏_l=j+1^i ρ_l c_l+ ρ_i+1_i+1= ∑_j≤ i+1_j ρ_j ∏_l=j+1^i+1ρ_l c_l.The core of the proof rests upon inequalities which break the task of covering a layer into a cover term for the previous layer (handled by induction) and another cover term for the present layer's weights (handled by matrix covering). These inequalities are similar to those in an existing covering number proof <cit.> (itself rooted in the earlier work of <cit.>); however that proof (a) operates node by node, and can not take advantage of special norms on , and (b) does not maintain an empirical cover across layers, instead explicitly covering the parameters of all weight matrices, which incurs the number of parameters as a multiplicative factor. The idea here to push an empirical cover through layers, meanwhile, is reminiscent of VC dimension proofs for neural networks <cit.>. §.§ Proof of spectral covering bound (<Ref>)The whole-network covering bound in terms of spectral and (2,1) norms now follows by the general norm covering number in <Ref>, and the matrix covering lemma in <Ref>. First dispense with the parenthetical statement regarding coordinate-wise ReLU and max-pooling operaters, which are Lipschitz by <Ref>. 
The rest of the proof is now a consequence of <Ref> with all data norms set to the l_2 norm (|·|_i = ·_i = ·_2), all operator norms set to the spectral norm (|·|_i→ i+1 = ·_σ), the matrix constraint sets set to _i=A_i : A_i_σ≤ s_i, A_i^⊤ - M_i^⊤_2,1≤ b_i, and lastly the per-layer cover resolutions (_1,…,_L) set according to_i := α_i/ρ_i ∏_j > iρ_j s_j α_i := 1/α̅b_i/s_i^2/3, α̅:= ∑_j=1^L b_j/s_j^2/3.By this choice, it follows that the final cover resolution τ provided by <Ref> satisfiesτ ≤∑_j≤ L_j ρ_j ∏_l=j+1^L ρ_l s_l = ∑_j≤ Lα_j= .The key technique in the remainder of the proof is to apply <Ref> with the covering number estimate from <Ref>, but centering the covers at M_i (meaning the cover at layer i is of matrices _i where A_i ∈_i satisfies A_i^⊤-M_i^⊤_2,1≤ b_i), and collecting (x_1,…,x_n) as rows of matrix X∈^n× d. To start, the covering number estimate from <Ref> can be combined with <Ref> (specifically withp=2, s=1) to giveln(_|S, , ·_2) ≤∑_i=1^Lsup_(A_1,…,A_i-1)∀ j <iA_j ∈_jln A_i F_(A_1,…,A_i-1)(X^⊤) : A_i ∈_i, _i, ·_2 (*)=∑_i=1^Lsup_(A_1,…,A_i-1)∀ j <iA_j ∈_jln F_(A_1,…,A_i-1)(X^⊤)^⊤ (A_i - M_i)^⊤ : A_i^⊤ - M_i^⊤_2,1≤ b_i, A_i_σ≤s_i, _i, ·_2 ≤∑_i=1^Lsup_(A_1,…,A_i-1)∀ j <iA_j ∈_jln F_(A_1,…,A_i-1)(X^⊤)^⊤ (A_i - M_i)^⊤ : A_i^⊤ - M_i^⊤_2,1≤ b_i, _i, ·_2 ≤∑_i=1^Lsup_(A_1,…,A_i-1)∀ j <iA_j ∈_jb_i^2 F_(A_1,…,A_i-1)(X^⊤)^⊤_2^2/_i^2ln(2W^2),where (*) follows firstly since l_2 covering a matrix and its transpose is the same, and secondly since the cover can be translated by F_(A_1,…,A_i-1)(X^⊤)^⊤ M_i^⊤ without changing its cardinality. In order to simplify this expression, note for any (A_1,…,A_i-1) thatF_(A_1,…,A_i-1)(X^⊤)^⊤_2 = F_(A_1,…,A_i-1)(X^⊤)_2= σ_i-1(A_i-1F_(A_1,…,A_i-2)(X^⊤) - σ_i-1(0)_2≤ρ_i-1A_i-1F_(A_1,…,A_i-2)(X^⊤) - 0_2≤ρ_i-1A_i-1_σF_(A_1,…,A_i-2)(X^⊤)_2,which by induction givesmax_jF_(A_1,…,A_i-1)(X^⊤)^⊤_j_2 ≤X_2 ∏_j=1^i-1ρ_jA_j_σ.Combining <ref>, then expanding the choice of _i and collecting terms,ln(_|S, , ·_2)≤∑_i=1^Lsup_(A_1,…,A_i-1)∀ j <iA_j ∈_jb_i^2 X_2^2 ∏_j<iρ_j^2 A_j_σ^2/_i^2ln(2W^2) ≤∑_i=1^Lb_i^2 B^2 ∏_j<iρ_j^2 s_j^2/_i^2ln(2W^2) = B^2 ln(2W^2)∏_j=1^Lρ_j^2 s_j^2/^2∑_i=1^Lb_i^2/α_i^2 s_i^2= B^2 ln(2W^2)∏_j=1^Lρ_j^2 s_j^2/^2α̅^3.§.§ Proof of <Ref>As an intermediate step to <Ref>, a bound is first produced which has constraints on matrix and data norms provided in advance.Let fixed nonlinearities (σ_1,…,σ_L) and reference matrices (M_1,…, M_L) be given where σ_i is ρ_i-Lipschitz and σ_i(0) = 0. Further let margin γ >0, data bound B, spectral norm bounds (s_i)_i=1^L, and l_1 norm bounds (b_i)_i=1^L be given. Then with probability at least 1-δ over an iid draw of n examples ((x_i,y_i))_i=1^n with √(∑_i x_i_2^2)≤ B, every network F_ : ^d →^k whose weight matrices = (A_1,…,A_L) obey A_i_σ≤ s_i and A_i^⊤ - M_i^⊤_2,1≤ b_i satisfies[2]_j F_(x)_j ≠ y≤_γ(f) + 8/n + 72B ln(2W) ln(n)/γ n∏_i=1^L s_i ρ_i∑_i=1^L b_i^2/3/s_i^2/3^3/2 + 3 √(ln(1/δ)/2n) . 
Consider the class of networks _λ obtained by affixing the ramp loss ℓ_γ and the negated margin operator - to the output of the provided network class:_γ :=(x,y)↦ℓ_γ(-(f(x),y)) : f ∈;Since (z,y) ↦ℓ_γ(-(z,y)) is 2/γ-Lipschitz wrt ·_2 by <Ref> and definition of ℓ_γ, the function class _γ still falls under the setting of <Ref>, and givesln(_γ)_|S, , ·_2≤4 B^2 ln(2W^2)/γ^2^2∏_j=1^L s_j^2ρ_j^2 ∑_i=1^L b_i/s_i^2/3^3 =: R/^2.What remains is to relate covering numbers and Rademacher complexity via a Dudley entropy integral; note that most presentations of this technique place 1/n inside the covering number norm, and thus the application here is the result of a tiny amount of massaging. Continuing with this in mind, the Dudley entropy integral bound on Rademacher complexity from <Ref> grants((_γ)_|S)≤inf_α>04α/√(n) + 12/n∫_α^√(n)√(R/^2) = inf_α>0[4]4α/√(n) + ln(√(n)/α) 12√(R)/n. The inf is uniquely minimized at α := 3√(R/n), but the desired bound may be obtained by the simple choice α := 1/n, and plugging the resulting Rademacher complexity estimate into <Ref>. The proof of <Ref> now follows by instantiating <Ref> for many choices of its various parameters, and applying a union bound. There are many ways to cut up this parameter space and organize the union bound; the following lemma makes one such choice, whereby <Ref> is easily proved. A slightly better bound is possible by invoking positive homogeneity of (σ_1,…,σ_L) to balance the spectral norms of the matrices (A_1,…,A_L), however these rebalanced matrices are then used in the comparison to (M_1,…,M_L), which is harder to interpret when M_i ≠ 0.Suppose the setting and notation of <Ref>. With probability at least 1-δ, every network F_ : ^d →^k with weight matrices = (A_1,…,A_L) and every γ > 0 satisfy [2]_j F_(x)_j ≠ y ≤_γ(F_) + 8/n+ 144ln(n)ln(2W)/γ n∏_i ρ_i1+X_2∑_i=1^L 1/L + A_i^⊤-M_i^⊤_2,1∏_j≠ i1/L +A_j_σ^2/3^3/2+ √(9/2n)√(ln(1/δ) + ln(2n/γ) + 2 ln(2 + X_2) + 2 ∑_i=1^L ln(2 + LA_i^⊤-M_i^⊤_2,1) + 2 ∑_i=1^L ln(2 + LA_i_σ) ). Given positive integers (j⃗, k⃗, l⃗) = (j_1,j_2,j_3, k_1,…, k_L,l_1,…,l_L), define a set of instances (a set of triples (γ, X, ))(j⃗, k⃗, l⃗)= (j_1,j_2,j_3, k_1, …, k_L, l_1,…, l_L) :=(γ, X, ): 0 < 1/γ< 2^j_1/n,X_2 < j_2,A_i^⊤-M_i^⊤_2,1 < k_i/L,A_i_σ < l_i/L.Correspondingly subdivide δ asδ(j⃗, k⃗, l⃗)= δ(j_1,j_2,j_3, k_1, …, k_L, l_1,…,l_L) := δ/2^j_1· j_2(j_2+1) · k_1(k_1+1) ⋯ k_L(k_L+1)· l_1(l_1+1)⋯ l_L(l_L+1).Fix any (j⃗, k⃗, l⃗). By <Ref>, with probability at least 1-δ(j⃗, k⃗, l⃗), every (γ, X, ) ∈(j⃗, k⃗, l⃗) satisfies[2]_j F_(x)_i ≠ y≤_γ(f) + 8/n+ 72· 2^j_1· j_2 ln(2W) ln(n)/n^2∏_i=1^L ρ_i∑_i=1^L k_i/L∏_j≠ il_j/L^2/3^3/2_=: + 3√(ln(1/δ) + ln(2^j_1) + 2 ln(1+j_2) + 2 ∑_i=1^L ln(1+k_i) + 2∑_i=1^L ln(1+l_i)/2n)_=:.Since ∑_j⃗ , k⃗, l⃗δ(j⃗, k⃗, l⃗) = δ, by a union bound, the preceding bound holds simultaneously over all (j⃗, k⃗, l⃗) with probability at least 1-δ.Thus, to finish the proof, discard the preceding failure event, and let an arbitrary (γ, X, ) be given. Choose the smallest (j⃗, k⃗, l⃗) so that (γ, X, ) ∈(j⃗, k⃗, l⃗); by the preceding union bound, <ref> holds for this (j⃗, k⃗, l⃗). The remainder of the proof will massage <ref> into the form in the statement of <Ref>.As such, first consider the case j_1 = 1, meaning γ < 2/n; then[2]_j F_(x)_j ≠ y ≤ 1 < 1/γ n,where the last expression lower bounds the right hand side of <ref>, thus completing the proof in the case j_1 = 1. Suppose henceforth that j_1 ≥ 2 (and γ≥ 2/n). 
Combining the preceding bound j_2≥ 2 with the definition of (j⃗, k⃗, l⃗), the elements of (j⃗, k⃗, l⃗) satisfy2^j_1 ≤2n/γ, j_2≤ 1 + X_2,∀ ik_i≤ 1 + LA_i^⊤-M_i^⊤_2,1,∀ il_i≤ 1 + LA_i_σ.For the term , the factors with (j⃗, k⃗, l⃗) are bounded as2^j_1· j_2 ∑_i=1^Lk_i ∏_j≠ i l_j^2/3^3/2≤2n/γ1 + X_2∑_i=1^L(L^-1+A_i^⊤-M_i^⊤_2,1) ∏_j≠ i (L^-1+A_i_σ)^2/3^3/2.For the term , the factors with (j⃗, k⃗, l⃗) are bounded asln(2^j_1) + 2 ln(1+j_2) + 2 ∑_i=1^L ln(1+k_i) + 2∑_i=1^L ln(1+l_i) ≤ln(2n/γ) + 2 ln(2 + X_2) + 2 ∑_i=1^L ln(2 + LA_i^⊤-M_i^⊤_2,1) + 2 ∑_i=1^L ln(2 + LA_i_σ).Plugging these bounds onandinto <ref> gives <ref>. The proof of <Ref> is now a consequence of <Ref>, simplifying the bound with a (·). Before proceeding, it is useful to pin down the asymptotic notation (·), as it is not completely standard in the multivariate case. The notation can be understood via the lim sup view of (·); namely, f = (g) if there exists a constant C so that any sequence ((n^(j), γ^(j), X^(j), A_1^(j),…,A_L^(j)))_j=1^∞ with n^(j)→∞, γ^(j)→∞, X^(j)_2→∞, A_i^(j)_1→∞ satisfieslim sup_j→∞f(n^(j), γ^(j), X^(j), A_1^(j),…,A_L^(j))/g(n^(j), γ^(j), X^(j), A_1^(j),…,A_L^(j))(g(n^(j), γ^(j), X^(j), A_1^(j),…,A_L^(j)))≤ C.Let f=f_0+f_1+f_2 denote the three excess risk terms of the upper bound from <Ref>, and g = g_1+g_2 denote the two excess risk terms of the upper bound from <Ref>; as discussed above, the goal is to show that there exists a universal constant C so that for any sequence of tuples ((n^(j), γ^(j), X^(j), A_1^(j),…,A_L^(j)))_j=1^∞ increasing as above, lim sup_j→∞ f / (g (g)) ≤ C.It is immediate that lim sup_j→∞ f_0 / g = 0 and lim sup_j→∞ f_1/(g_1 ln(g)) ≤ 144. The only trickiness arises when studying f_2 / (g_2ln(g)), namely the term ∑_i ln(2 + LA_i^⊤-M_i^⊤_2,1), since g_2 instead has the term ln(∑_iA_i^⊤-M_i^⊤_2,1^2/3), and the ratio of these two can scale with L. A solution however is to compare to ln(∏_i A_i_σ), noting that (A_i)^⊤_2,1≤ W^1/2A_i_2 ≤ WA_i_σ:lim sup_j→∞∑_i ln(2+L(A_i^(j))^⊤-M_i^⊤_2,1)/ln(∏_i A_i^(j)_σ) ≤lim sup_j→∞∑_i ln(2+L(A_i^(j))^⊤_2,1 + LM_i^⊤_2,1)/∑_i ln((A_i^(j))^⊤_2,1/W) =1.§.§ Proof of lower bound (<Ref>) DefineF(r) := A_Lσ_L-1(A_L-1⋯σ_2(A_2σ_1(A_1x)) : ∏_i=1^L*A_i_σ≤r,where each σ_i = σ is the ReLU and each A_k∈^d_k×d_k-1, with d_0=d and d_L=1, and let S:=(x_1,…,x_n) denote the sample. Define a new class G(r)=*x↦*a,x|*w_2≤r.It will be shown that G(r)⊆F(C·r) for some C>0, whereby the result easily follows from a standard lower bound on (G(r)_|S).Given any linear function x↦*a,x with a_2≤r, construct a network f=A_Lσ_L-1(A_L-1⋯σ_2(A_2σ_1(A_1x))) as follows: * A_1 = (_1-_2)a^⊤.* A_k = _1_1^⊤ + _2_2^⊤ for each k∈*2,…,L-1.* A_L = _1 - _2.It is now shown that f(x) = *a,x pointwise. First, observe σ(A_1x) = (σ(*a,x), σ(-*a,x),0,…,0). Since σ is positive homogeneous, σ_L-1(A_L_1⋯σ_2(A_2y) = A_L-1A_L-2⋯ A_2y = (y_1, y_2, 0,…,0) for any y in the non-negative orthant. Because σ(A_1x) lies in the non-negative orthant, this means σ_L-1(A_L-1⋯σ_2(A_2σ_1(A_1x)))= (σ(*a,x), σ(-*a,x), 0,…,0). Finally, the choice of A_L = _1 - _2 gives f(x) = σ(*a,x) - σ(-*a,x) = *a,x.Observe that for all k∈*2,…,L-1, A_k_σ=1. For the other layers, A_L_σ=A_L_2=√(2) and *A_1_σ=√(2)·r, which implies f∈F(2r).Combining the pieces,(F(2r)_|S) ≥(G(r)_|S) = sup_a:*a_2≤r∑_t=1^n_t*a,x_t = r·*∑_t=1^n_tx_t_2.Finally, by the Khintchine-Kahane inequality there exists c>0 such that*∑_t=1^n_tx_t_2≥c·√(∑_t=1^n*x_t_2^2) = c*X_2.
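The explicit network used in this lower-bound construction is easy to verify numerically. Below is a small Python/NumPy sketch (the input dimension and the depth L=4 are arbitrary test choices) that builds the matrices A_1 = (e_1 - e_2)a^⊤, A_k = e_1 e_1^⊤ + e_2 e_2^⊤, A_L = e_1 - e_2 from the proof and checks that the resulting ReLU network computes ⟨a, x⟩ pointwise:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def linear_as_relu_net(a, L):
    """Weights (A_1, ..., A_L) from the proof: A_1 = (e_1 - e_2) a^T,
    A_k = e_1 e_1^T + e_2 e_2^T for 1 < k < L, and A_L = e_1 - e_2.
    Width 2 suffices for this check (the proof allows padding to width W)."""
    A1 = np.vstack([a, -a])              # shape (2, d); rows (a, -a)
    Ak = np.eye(2)                       # identity on the first two coords
    AL = np.array([[1.0, -1.0]])
    return [A1] + [Ak] * (L - 2) + [AL]

def forward(As, x):
    z = x
    for A in As[:-1]:
        z = relu(A @ z)
    return float((As[-1] @ z).item())    # relu(<a,x>) - relu(-<a,x>) = <a,x>

rng = np.random.default_rng(0)
a, x = rng.normal(size=5), rng.normal(size=5)
assert np.isclose(forward(linear_as_relu_net(a, L=4), x), a @ x)
```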
http://arxiv.org/abs/1706.08498v2
{ "authors": [ "Peter Bartlett", "Dylan J. Foster", "Matus Telgarsky" ], "categories": [ "cs.LG", "cs.NE", "stat.ML" ], "primary_category": "cs.LG", "published": "20170626174348", "title": "Spectrally-normalized margin bounds for neural networks" }
[email protected] Departamento de Entomologia e Acarologia, Escola Superior de Agricultura "Luiz de Queiroz" - Universidade de São Paulo, ESALQ - USP, 13418-900, Piracicaba/SP, [email protected] Theoretical Physics/Complex Systems, ICBM, University of Oldenburg, 26129 Oldenburg, [email protected] Institute for Complex Systems and Mathematical Biology, King's College, University of Aberdeen, AB24 3UE, Aberdeen, United Kingdom Mechanosensitive (MS) channels are ion channels which act as cells' safety valves, opening when the osmotic pressure becomes too high and allowing cells to avoid damage by releasing ions. They are found on the cellular membrane of a large number of organisms. They interact with each other by means of the deformations they induce in the membrane. We show that collective dynamics arising from the inter-channel interactions lead to first- and second-order phase transitions in the equilibrium fraction of open channels, related to the formation of channel clusters. We show that this results in a considerable delay of the response of cells to osmotic shocks, and in extreme cell-to-cell stochastic variations in their response times, despite the large numbers of channels present in each cell. We discuss how our results are relevant for E. coli. Cooperative response and clustering: consequences of membrane-mediated interactions among mechanosensitive channels Alessandro P. S. de Moura December 30, 2023 =================================================================================================================== § INTRODUCTION Abrupt changes in the osmolarity of the environment are a hazard most organisms are subject to at one time or another <cit.>. A sudden drop in osmolarity (an osmotic shock) will cause water to rush into a living cell, and requires an immediate response by the cell to prevent it from getting damaged or undergoing lysis from the increased tension on the cellular membrane. Mechanosensitive channels (or MS channels) are ion channels located on the cell membrane, which open when the membrane tension becomes too high <cit.>, and play a crucial role in the cell's defence mechanism against osmotic shocks <cit.>. They act as safety valves, releasing ions and decreasing the osmotic pressure and the membrane tension. Mechanosensitive channels are found in many organisms, and have been well characterised in the bacterium E. coli <cit.>. The cellular membrane in which the mechanosensitive channels are inserted is a lipid bilayer. The interior of the bilayer is hydrophobic, making it energetically favourable for the bilayer to thicken or compress to match the hydrophobic parts of the channel proteins inserted in the membrane <cit.>. This results in a deformation profile around each channel, with the thickness of the bilayer being a function of position. This deformation mediates a short-range effective force between two neighbouring channels, similar to the force between two nearby corks floating on water, which interact through the deformation they induce on the surface of water.
This interaction can be attractive or repulsive, depending on the shapes of the two molecules.Furthermore, a theoretical analysis suggests that the interaction between two neighbouring channels lowers the tension needed to open them during an osmotic shock <cit.>, raising the possibility that their function could be influenced by their spatial distribution on the membrane (as already noticed for other membrane proteins <cit.>).This is reinforced by the fact that the channels' attractive forces suggest that they may agglomerate into clusters. Our goal in this paper is to determine the consequences that the inter-channel interaction has on the dynamics of this system, focusing in particular on channel clustering and its consequences for the cell's response to osmotic shocks.A preliminary study of mechanosensitive channel clustering was done in <cit.>. In that work, diffusion, leading to the formation of clusters of channels, and opening were considered two separate processes. This assumption made the model easier to analyse, but it is hard to justify: in reality, diffusion and gating take place simultaneously.In this work, we formulate a model of the collective dynamics of mechanosensitive channels, where diffusion and gating are considered simultaneous, and no assumption of time separation between clustering and gating is made.Using a combination of analytic techniques and numerical simulations, we analyse the equilibrium and the dynamics of the system, focusing in particular on the response of the channels to osmotic shocks.We find that the interplay between the spatial and the internal degrees of freedom of the channels leads to unexpected collective phenomena, with possible implications for their biological function.We show that the fraction of open channels undergoes a phase transition as the membrane tension increases; and this transition changes from second-order to first-order as the density of channels crosses a critical value.We explain this change in the nature of the transition as the result of collective gating induced by a cluster of channels which appears for high densities.Studying the time evolution of the system after applying an osmotic shock, we find that clustering leads to dramatic changes in the channels' response, slowing down considerably their gating.In addition, clustered channels show extreme ensemble variations in their response times, despite the large numbers of channels present in each cell, what could translate into large stochastic cell-to-cell differences in response times in a population of cells.Finally, we discuss how our results are relevant for the stress response of E. coli and other organisms. In our model, we regard the cellular membrane as a two dimensional square lattice of size L, where each of the sites can be either empty or occupied by a channel.We focus on the Mechanosensitive Channels of Large Conductance (MscL), which have been well studied in this context <cit.>. The total number N of channels is regarded as constant, so that the density ρ of channels (mean number of channels per lattice site) is fixed—we are in the canonical ensemble. We consider that MscL can be in one of two states, closed or open. 
The interaction energies between two channels have been obtained by minimising the energy functional defined by the deformation profile <cit.>.For small distances (a few nanometers) between channels, if ε_oo, ε_cc and ε_oc are the interaction energies between a pair of open channels, a pair of closed channels, and an open and a closed channel, respectively, we have ε_oo<ε_cc<0 and ε_oc>0: two open channels and two closed channels attract each other, the attraction being stronger in the first case; and a closed channel repels an open channel (see <cit.> for the complete energy profiles). The diagram in Fig. <ref> illustrates our lattice model.If we consider that all channels are in the same state, without the possibility of gating, this model is exactly the lattice gas model. The 2D lattice gas model is exactly solvable, due to its equivalence to the Ising model <cit.>, and presents a phase transition from a homogeneous to a clustered channel distribution as the density of channels increases <cit.>.The model we describe shares some similarities with the spin-1 Ising model analysed in a mean-field approximation in the grand canonical ensemble in <cit.>, but our approach has the advantage of yielding more information about the spatial distribution of channels. We start by studying the equilibrium properties of the system, as defined by our lattice model.We use a mean-field approximation, which will allow us to write explicit expressions for the energy and entropy of the system, from which we can find its free energy.As a simplifying assumption, we assume the existence of at most one cluster. For the energy values we consider, the existence of a single cluster in equilibrium is reasonable and supported by test simulations. Let f be the fraction of channels in the cluster; the other channels are spread throughout the rest of the membrane. Furthermore, letbe the fraction of open channels within the cluster, andthe fraction of open channels outside the cluster.The three quantities f,andare the thermodynamic variables of our model.Our next job is to write the free energy of the system in terms of these variables. Cluster formation and channel gating are then studied by finding the global minimum of the free energy.For example, a cluster is present if f>0 in the state of minimum free energy.The free energy per channel, F/N, for a given temperature T, can be written asF/N = (e_int + e_mem) - Ts,where the entropy per channel, s, can be estimated via combinatorial analysis, calculating the number of configurations that channels can assume. The energy per channel is divided into two terms: the interaction among channels, e_int, and the interaction of each channel with the membrane, e_mem. For each of the configurations devised in the preceding calculation, the interaction among channels can be estimated considering that channels only interact with nearest neighbours. The interaction with the membrane depends on the difference of energies between closed and open states and the work due to the variation on the channel's area in the gating process.In the mean-field approximation, we find (see Supplementary Material for complete derivation of the results):s= k_B{ln[(1-ρ)/ρ(1-f)(1- )]+ . . fln[ρ (1-f)(1- )/(1-ρ f)(1- )]+ 1/ρln[(1-ρ f)/(1-ρ)]+ . . 
(1-f)ln[(1- )/] +fln[(1- )/] }; e_int=2f(ε_cc+2(ε_co-ε_cc) + +(ε_cc-2ε_co+ε_oo)( )^2); e_mem=(Δ G_0 - τΔ A)/2(2f +2(1-f) -1).Here the parameters Δ G_0, Δ A and τ are the difference between the energies of open and closed states, the difference in membrane areas between the open and closed configurations of a channel, and the membrane tension, respectively. The term Δ G_0 covers both the energetic cost of membrane deformation and the cost of changing the internal structure of the channel. We used Δ G_0=50 k_BT and Δ A = 20 ^2, following <cit.>. For these parameters, a single non-interacting channel has a 50% opening probability at the tension τ=2.5 k_bT/nm^2 <cit.>.The equilibrium distribution for this system is then given by the values f_eq, _eq and _eq which minimize the free energy, for given values of ρ and τ (in the following, the subscript eq will be omitted).The fraction of open channels on the whole lattice is given by P_o=f +(1-f). Figure <ref> shows how f and P_o vary as functions of the membrane tension, τ, for different values of the density ρ.We see in Fig. <ref>(a) that the fraction of open channels P_o undergoes a transition from nearly zero (all channels closed) to non-zero values as the membrane tension τ increases.The nature of the transition depends on the channel density ρ.For small values of ρ, the transition is continuous: P_o increases smoothly from zero beyond a critical value of τ.For ρ greater than a critical value ρ_c, however, the transition is discontinuous, with P_o jumping abruptly to a positive value at the critical tension.The key to explaining this phenomenon is in the corresponding behaviour of the cluster fraction f, depicted in Fig. <ref>(b). At low tensions, the channels are closed.Since the force between two closed channels is attractive, they can form a cluster if their density is high enough; this is the case for the two upper curves in Fig. <ref>(b).Comparing with Fig. <ref>(a), we see that these correspond to the densities for which the transition in P_o is abrupt: if a cluster already exists at low tension, P_o has a discontinuous transition.The reason for this comes from the fact that the interaction energy between two open channels is much greater than any other combination of channels, and this becomes more and more so as the tension increases, since high tensions favour the opening of the channels. In equilibrium, if one of the channels in the cluster is open, all the others are open as well, because any mixture of open and closed channels incurs a heavy cost in free energy.So at a critical tension, the whole cluster opens, and since the cluster contains a finite fraction of the channels in the cell, this results in the abrupt jump in P_o seen in Fig. <ref>(a).For lower channel densities, on the other hand, there is no cluster at lower tensions.As the tension is increased, it eventually becomes favourable for channels to open, and as they do, they will tend to bunch together in a cluster, because of the high open-open interaction energy.But because in this case there was no cluster to start with, the number of open channels will increase gradually as the tension rises, and so will the cluster size.This predicts that the cluster size f and the fraction of open channels P_o will undergo a continuous transition, and increase in tandem.This is exactly what we see in Fig. <ref>(a).In both the low-density and high-density regimes, the clustering reduces considerably the threshold for channel opening—see Fig. 
<ref>(a)—, which might have implications for the response of the cell to osmotic shock, as we shall see in the following. These collective phenomena are a direct consequence of the inextricable link between the spatial distribution of channels and their internal gating dynamics. In order to understand the response of the channels to an osmotic shock, we have to go beyond the equilibrium theory and look at their time-dependent activation dynamics. To study the coupled gating and diffusion dynamics, we use a Monte Carlo simulation scheme with two possible actions in each step: (i) with probability p_G, a randomly chosen channel attempts to change its state (closed/open); or, (ii) with probability 1-p_G, it attempts to move to one of its four neighbouring sites, if it is vacant. The attempts succeed with a probability of acceptance, A, according to the criterion: A = e^-βΔ E if Δ E > 0, or A = 1 if Δ E ≤ 0, where Δ E is the change in energy between the final and initial configurations of the system following the attempt. Thus, the algorithm is a variation of Kawasaki dynamics, for which the position updates are local, making it suitable for non-equilibrium simulations of the lattice gas <cit.>. The probability p_G is determined by the ratio of the rates of diffusion and gating: p_G=λ_G/(λ_G+λ_D), where λ_G=1/Δ t_G and λ_D=1/Δ t_D are the rates of gating and diffusion, given by the experimentally measured characteristic times of gating and diffusion, Δ t_G and Δ t_D, respectively. Each Monte Carlo step consists of N random choices of channels attempting a change of state or diffusion, where N is the total number of channels. We relate a Monte Carlo step, Δ t_MC, to a real time interval using the weighted average Δ t_MC=p_GΔ t_G + (1-p_G)Δ t_D. In our simulations, we have used Δ t_G = 4 μs and Δ t_D = 208 μs <cit.> (see Supplementary material), for which we have Δ t_MC≅ 8 μs. Since the increase in channel area during the gating process precludes the determination of a single value for the lattice constant, we had to choose it within a range of reasonable biological values. We use L=400 and ρ = 0.002 for the lattice size and channel density, respectively, in accordance with typical values for E. coli (see Supplementary material). In all our simulations, we start the system from an equilibrium situation at low membrane tension τ. We then increase τ abruptly, mimicking an osmotic shock, and follow the dynamics of the channels using the algorithm described above. The value of τ is kept fixed throughout the simulation. It is instructive to compare the time evolution of a hypothetical system of non-interacting channels to that of the real system of interacting channels. After approximately 10 MC steps per channel, all the non-interacting channels are open and stay in this state until the end of the simulation (black squares, Fig. <ref>a). The behaviour of the system of interacting channels, in contrast, is governed by two processes acting on vastly different time scales (black circles, Fig. <ref>a): (i) the fast opening of the isolated channels outside the cluster; and (ii) the much slower opening of the channels in the cluster. The most striking aspect of the dynamics shown in Fig. <ref>a is the dramatic variability of the opening times of the cluster: in one run of the simulation, the cluster may open in a few microseconds, and in another it may take 100 milliseconds to open. This massive variation is a result of the long-range correlations created by the interactions between channels.
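For readers who wish to experiment, a self-contained Python sketch of this combined gating/Kawasaki Monte Carlo scheme is given below. The acceptance rule and p_G = λ_G/(λ_G+λ_D) ≈ 0.98 (from Δt_G = 4 μs, Δt_D = 208 μs) follow the text; the pair energies and the tension are placeholder values in units of k_BT, not the fitted parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder pair energies in units of k_B*T (eps_oo < eps_cc < 0 < eps_oc);
# the actual profiles come from the cited energy-minimisation results.
E_PAIR = {("o", "o"): -3.0, ("c", "c"): -1.0,
          ("o", "c"): 1.0, ("c", "o"): 1.0}
DG0, DA, TAU = 50.0, 20.0, 2.6  # Delta G_0 (k_B*T), Delta A (nm^2), tension
P_GATE = 0.98                   # lambda_G / (lambda_G + lambda_D)
L, N_CH = 400, 320              # lattice size and channel number (rho = 0.002)

def site_energy(lat, i, j):
    """Energy of the channel at (i, j): membrane term if open, plus
    nearest-neighbour pair interactions (periodic boundaries)."""
    s = lat[i, j]
    e = (DG0 - TAU * DA) if s == "o" else 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        t = lat[(i + di) % L, (j + dj) % L]
        if t != ".":
            e += E_PAIR[(s, t)]
    return e

def attempt(lat, channels):
    """One elementary update: gating with probability P_GATE, else a hop."""
    idx = rng.integers(len(channels))
    i, j = channels[idx]
    if rng.random() < P_GATE:                        # (i) gating attempt
        e0 = site_energy(lat, i, j)
        lat[i, j] = "o" if lat[i, j] == "c" else "c"
        dE = site_energy(lat, i, j) - e0
        if dE > 0 and rng.random() >= np.exp(-dE):   # Metropolis rejection
            lat[i, j] = "o" if lat[i, j] == "c" else "c"
    else:                                            # (ii) Kawasaki-type hop
        di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L
        if lat[ni, nj] == ".":
            e0 = site_energy(lat, i, j)
            lat[ni, nj], lat[i, j] = lat[i, j], "."
            dE = site_energy(lat, ni, nj) - e0
            if dE > 0 and rng.random() >= np.exp(-dE):
                lat[i, j], lat[ni, nj] = lat[ni, nj], "."
            else:
                channels[idx] = (ni, nj)

lat = np.full((L, L), ".")
pos = rng.choice(L * L, size=N_CH, replace=False)
channels = [(int(p) // L, int(p) % L) for p in pos]
for i, j in channels:
    lat[i, j] = "c"                # start from closed channels, then "shock"

for mc_step in range(1000):        # one MC step = N_CH elementary attempts
    for _ in range(N_CH):
        attempt(lat, channels)
print("open fraction:", np.mean([lat[i, j] == "o" for i, j in channels]))
```

With the time mapping of the text, each MC step of this sketch corresponds to Δt_MC ≈ 8 μs of real time.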
The stochastic nature of the cluster is a direct effect of the nontrivial collective behavior of the interacting channels. The variation is further highlighted by the histogram of the fraction of open channels in a cell some (long) time after the osmotic shock is applied, in 500 independent runs of the Monte-Carlo simulation (see Fig. <ref>b). We see that the distribution is bimodal, with roughly similar numbers of cells with open and closed clusters, even very long times after the shock. This means that in a population of cells subjected to osmotic shock, there will be massive differences in the response times from one cell to another, even if the cells are genetically identical and even though they feel exactly the same stress. In essence, the collective dynamics that emerges from the channel interactions amplifies stochastic fluctuations at the molecular scale to the "macroscopic", population scale, making them potentially detectable by population assays. We note that this large variability disappears once the tension becomes strong enough. For τ = 4.0 k_bT/nm^2 and 5.0 k_bT/nm^2 (respectively, up- and down-triangles in Fig. <ref>a), both clustered and freely-diffusing channels respond very quickly, with all the channels in the system opening after only 10 MC steps. Finally, we would like to emphasize that membranes are a crowded environment and that the tight packing of channels within this environment may introduce additional effects on gating. The most important effect appears due to packing frustration and entropic tension. Packing frustration may lead to a decrease in the tendency to gate due to the space limitation created by neighbors. Additionally, entropic tension originating from environmental crowding may have a significant influence on MS channels' conformational change due to volume exclusion <cit.>. These effects could be considered, as a first approximation, by allowing for a different choice of parameters in a simple extension of our model (specifically the energy difference between open and closed states). Using fluorescence microscopy and Western blot analysis <cit.>, the average number of MscLs in native E. coli cells has been estimated at between 300 and 1000 channels. These results are similar to those obtained with ribosome profiling <cit.>, which measures 360 to 560 channels per cell. These numbers take the channel density close to or above the threshold for cluster formation at low tensions. Patch-clamp experiments complemented with fluorescence and atomic force microscopy show evidence for crowding and collective response of channels in liposomes <cit.>. Other studies have shown non-homogeneous distributions of overexpressed MS channels in live bacterial cells <cit.>. Although all these studies suggest cluster formation for native channels, the debate around this question is still open. A recent study, through the use of PALM (photo-activated localization microscopy) and SPT (single particle tracking), has shown strong indications that labeling with fluorescent molecules predisposes MscL channels to form clusters <cit.>. In either case, if there is any form of channel aggregation in bacterial cells, the collective phenomena we describe here may be directly relevant for the osmotic response of bacteria. Furthermore, this model can be extended to other types of channels, such as electrically sensitive ion channels, which are also expected to react cooperatively to external stimuli <cit.>.
Possible evidence for the large variability in channel activation predicted by our analysis is the recent observation of very late channel gating activity in E. coli cells subject to osmotic shock <cit.>: gating was seen as long as 100 ms after the shock. Since isolated MscL channels are known to gate within a few microseconds after their tension threshold is passed, it is difficult to explain this observation if the channels do not interact. It is naturally explained by the variability of channel activation, however: Fig. <ref> shows that a cluster could take a time of the order of 100 ms to open. Another recent work <cit.> also highlights the large cell-to-cell variability of the downshock responses. Furthermore, that work shows a very slow cell volume recovery, which may also indicate cooperative channel activity. Calculations based on the ionic flux through single open channels suggest that as few as 5 to 10 channels would be enough to protect a cell <cit.>. This contrasts with the recent measurements of MscL numbers in native cells, which indicate numbers of channels up to two orders of magnitude greater than this estimate. With so many channels in a native E. coli cell, the simultaneous opening of all channels would lead to a drastic release of intracellular material, as well as depolarisation of the membrane potential <cit.>, with potentially fatal consequences for the cell. Hence this high expression level of channels is still a mystery. As seen in Fig. <ref>, the presence of the cluster significantly delays the opening of the whole system of channels, compared with the non-interacting case, for a shock with smaller membrane tension. Thus, clustering could provide a means to self-regulate the simultaneous opening of the large number of channels needed to restore the osmotic equilibrium of the cell, and could function as a channel reservoir if more of them are needed in case of a severe shock. This is an admittedly speculative, but plausible, fitness advantage for the large numbers of channels found in E. coli. § ACKNOWLEDGMENTS It is a pleasure to thank Ian Booth, Heloisa Galbiati and Samantha Miller for important discussions. This work was supported by the European Union Seventh Framework Programme [FP7/2007-2013] (NICHE; grant agreement 289384).
http://arxiv.org/abs/1706.08893v1
{ "authors": [ "Lucas D. Fernandes", "Ksenia Guseva", "Alessandro P. S. de Moura" ], "categories": [ "physics.bio-ph", "cond-mat.stat-mech" ], "primary_category": "physics.bio-ph", "published": "20170627144556", "title": "Cooperative response and clustering: consequences of membrane-mediated interactions among mechanosensitive channels" }
Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany Physik-Department, Technische Universität München, 85748 Garching, Germany Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany School of Physics and Astronomy, University of St Andrews, St Andrews KY16 9SS, U.K. Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany Physik-Department, Technische Universität München, 85748 Garching, Germany 07.55.Jg, 85.25.Dq, 07.50.-e, 71.18.+y Torque magnetometry is a key method to measure the magnetic anisotropy and quantum oscillations in metals. In order to resolve quantum oscillations in sub-millimeter sized samples, piezo-electric micro-cantilevers were introduced. In the case of strongly correlated metals with large Fermi surfaces and high cyclotron masses, magnetic torque resolving powers in excess of 10^4 are required at temperatures well below 1 K and magnetic fields beyond 10 T. Here, we present a new broadband read-out scheme for piezo-electric micro-cantilevers via Wheatstone-type resistance measurements in magnetic fields up to 15 T and temperatures down to 200 mK. By using a two-stage SQUID as a null detector of a cold Wheatstone bridge, we were able to achieve a magnetic moment resolution of Δ m = 4×10^-15 J/T at maximal field and 700 mK, outperforming conventional magnetometers by at least one order of magnitude in this temperature and magnetic field range. Exemplary de Haas-van Alphen measurements of a newly grown delafossite, PdRhO_2, were used to show the superior performance of our setup. Application of SQUIDs to low temperature and high magnetic field measurements - Ultra low noise torque magnetometry E. Hassinger December 30, 2023 =================================================================================================================== § INTRODUCTION A central issue of present-day solid state physics is the down-scaling of sample size due to ever growing demands on purity and single crystallinity. This is particularly true for quantum oscillation studies of new strongly correlated electron systems, unconventional metals and superconductors <cit.>. In some cases, single crystals of these materials can only be grown in a few-micron to sub-millimeter size and are therefore impracticable for conventional solid state methods. In addition, magnetization and resistance resolving powers in excess of 10^4 are required to observe de Haas-van Alphen or Shubnikov-de Haas oscillations in large Fermi surface metals. In the past, highly sensitive ac-susceptometers <cit.> and bronze-foil lever magnetometers <cit.> were used to measure magnetizations and magnetic anisotropies of bulk samples at millikelvin temperatures and in high magnetic fields, achieving resolutions of Δ m ≈ 5×10^-13 J/T <cit.>. In the quest for ever smaller quantum oscillation signals, more sensitive techniques and read-out schemes are required. Ultra low noise read-out schemes, using Complementary Metal-Oxide-Semiconductor (CMOS) and High-Electron-Mobility transistors (HEMT), were implemented for ultra high source impedance <cit.> and medium to very high frequency applications <cit.>, respectively. Although offering extremely high gain and bandwidth, their performance is limited by charge carrier freeze-out at low temperatures, giving rise to shot and random telegraph noise.
Thus, low frequency MOS-based electronics are often stabilized at elevated temperatures around 100 K, where temperature drift and the associated long term gain stability become an issue. Alternatively, low temperature transformers (LTTs) provide good temperature stability at liquid helium temperatures <cit.>. Their gain and bandwidth, however, strongly depend on the matched impedance on either side of the transformer. Thus LTTs have mostly been applied to circuits with low source impedance.

In the preceding decades, Superconducting QUantum Interference Devices (SQUIDs) became a new path to achieving the highest signal-to-noise ratios (SNRs) by virtually noiseless amplification of current signals <cit.>, outperforming the hitherto known amplifiers and LTTs. As a result, SQUIDs have been introduced to many low temperature applications such as resistance measurements <cit.>, SQUID NMR <cit.>, MRI <cit.>, ESR <cit.>, microcalorimetry <cit.> and Johnson noise thermometry <cit.>, obtaining unprecedented precision. However, thus far, most of these techniques were restricted to zero or low magnetic fields, as SQUIDs are notoriously difficult to use in high magnetic fields. More recently, with the introduction of superconducting shielding, high field resistance bridges as well as SQUID magnetometers were developed, extending the range of highly sensitive resistance <cit.> and magnetization measurements <cit.> up to 14 and 7 T, respectively. Whilst SQUID magnetometers became a useful tool for quantum oscillation studies of macroscopic samples below 7 T, SQUID resistance bridges suffered from excessive noise or could only be operated in static magnetic fields, making them impracticable for Shubnikov-de Haas experiments. Due to these technical limitations, neither of these techniques is suited for the study of microscopic samples of strongly correlated metals. To enable high precision magnetic measurements at high fields, piezo-electric micro-cantilever based magnetometers were introduced, measuring a sample's magnetic torque τ as the change of the piezo resistance <cit.>. Here the magnetic torque is τ = B × m, where m is the sample's magnetic moment.

In this article, we report on the development of a new highly sensitive, high field, millikelvin SQUID torque magnetometer to measure quantum oscillations of sub-millimeter sized samples. A two-stage dc-SQUID, located in the field compensated region of our cryostat, is utilized as an ultra-low noise current amplifier in a piezo-electric micro-cantilever Wheatstone bridge. Our setup achieves a hitherto unrivaled magnetization resolution of Δ m = 5×10^-15 J/T at 15 T and 700 mK for the given temperature, magnetic field range and sample size <cit.>. The performance of our setup will be demonstrated by de Haas-van Alphen measurements of a newly grown PdRhO_2 delafossite single crystal <cit.>. Delafossites are correlated quasi-two dimensional electron systems with alternating triangular lattice layers of noble metal and transition metal oxide, showing exceptionally large electrical conductivities <cit.>. We further compare our setup to conventional unamplified and LTT-amplified circuits, showing its superior resolving power.

§ EXPERIMENTAL

§.§ General Description

An ultra-low noise, low temperature and high magnetic field torque magnetometer for sub-millimeter samples was built based on an Oxford Instruments MX400 dilution refrigerator, with a 15+2 T Nb_3Sn superconducting λ-stage magnet and a 270^∘ Swedish rotator (see Fig. <ref>).
The magnet of the cryostat is designed to provide a field compensated region at the mixing chamber and a low field region (B < 5 mT at full field) above it. Self-sensitive PRC400 piezo-resistive micro-cantilevers <cit.> were used as magnetic torque sensors, mounted on a silver holder on the rotator. To improve sample thermalization, the back side of the micro-cantilevers was coated with gold and the rotator was thermally connected to the mixing chamber by an oxygen annealed silver wire braid. A calibrated RuO_2 thermometer was installed on the rotator for thermometry.

In the interest of achieving the lowest noise levels, all wiring is shielded in metal or superconductor capillaries. Superconducting wires and capillaries, namely 100 μm multi-filament CuNi-clad NbTi twisted pairs and tinned CuNi capillaries, were generally used in the field compensated and low field regions of the cryostat. In the high field region between the mixing chamber and rotator, 75 μm copper twisted pairs shielded in an oxygen annealed industry-grade copper capillary were used instead. Here the annealed copper capillary acts as an almost perfect diamagnetic shield against low and high frequency fields, reducing pick-up noise from mechanical vibrations in high magnetic fields. For simplicity, the micro-cantilevers and the wiring on the rotator were left unshielded. Capillaries and wires were heat sunk at the 1 K-pot, still, cold plate and mixing chamber to reduce thermal leaks across the dilution unit. Our setup uses a National Instruments 51.2 kS/s, 24 bit PXIe-4463 signal generator and a 204.8 kS/s, 24 bit PXIe-4492 oscilloscope as the data acquisition system. Typical input noise levels of the PXIe-4492 oscilloscope are on the order of 5 to 10 nV/√(Hz) in the frequency range between 10 Hz and 10 kHz. A digital lock-in program was written to emulate a standard standalone lock-in amplifier on the PXI system. Its functionality includes measuring the in- and out-of-phase components, phase angle, resistance as well as higher harmonics and power spectra of up to eight channels simultaneously.

§.§ Conventional Room Temperature Balancing

The magnetic torque exerted on the micro-cantilever is measured as a resistance change of the piezo-electric track implanted into the cantilever. The resistance of the piezo-electric track is usually measured in a Wheatstone bridge consisting of the sample and reference cantilever (see zoom display of Fig. <ref>) as well as a room temperature potentiometer. The empty reference cantilever is used to compensate for the intrinsic temperature and magnetic field dependence of the track resistance. Generally, room temperature compensated setups suffer from low frequency rf noise picked up outside the cryostat, the comparably large input noise of room temperature amplifiers and AD converters, as well as the Johnson noise of the balancing resistors. In our first measurements with the PRC400 cantilevers, this combination of factors set a noise floor, which limited the resolution of our measurement to Δ R/R ≈ 1.5×10^-5 (see discussion on the performance of the setup in Sec. <ref>).

§.§ Low Temperature Balancing

In order to circumvent external noise sources and to boost the signal-to-noise ratio (SNR), low temperature amplification is desirable. For this a low temperature Wheatstone bridge must be implemented, balancing the micro-cantilever potential divider.
In our case this cold compensation consisted of two high precision metal film SMD resistors, which were mounted in a shielded copper box on the 1 K-pot (unamplified and LTT amplified setups) or the mixing chamber (SQUID setup). In the low temperature transformer setup (section <ref>), R_C = 1 kΩ resistors were used. Their shielding box was weakly thermally coupled to the 1 K-pot to stabilize its temperature around 2 K. In the SQUID setup (section <ref>), the compensation resistors were R_C = R = 500 Ω and were thermally well coupled to the mixing chamber. Typical off-balance signals of the Wheatstone bridges were on the order of 0.5% (see also the zero field values of Fig. <ref>a). As an additional effect of the cold compensation, the Johnson noise originating from the balancing resistors is greatly reduced, especially at mixing chamber temperatures.

§.§ Low Temperature Transformer Amplification

Due partly to the difficulties of using SQUID amplification in high magnetic fields, low temperature transformers were developed as an alternative low temperature amplification stage <cit.>. In the most favorable circumstances of extremely low input impedance, they can give noise levels of below 1 pV <cit.>, but they are not well matched to the high source impedance of a piezo-lever. Figure <ref> shows the associated gain and bandwidth issue. As can be seen, the gain is strongly suppressed compared to its open circuit value of 1000. The sharp gain peak at 66 to 74 Hz limits the bandwidth of the LTT system. Thus cross talk between multiple simultaneous experiments amplified by LTTs can become an issue. Alternative room temperature tests with lower transformer ratios resulted in proportionally lower gains and only marginally higher bandwidths, due to the tremendous impedance mismatch. Nevertheless, for our single channel experiment on PdRhO_2, we attempted using lead-shielded LTTs <cit.> to amplify the balanced voltage signal. The amplified signal was directly fed into the PXI oscilloscope. By doing so, a background noise level of 250 nV/√(Hz) near the measurement frequency was achieved (see markers in Fig. <ref>a). This is a factor of twelve above the bare noise of the unamplified measurement. At the same time, the signal was amplified by a factor of 25, boosting the unity bandwidth SNR by a factor of two compared to the unamplified case (see also Fig. <ref>).

§.§ SQUID Amplification

In order to increase the SNR even further and to avoid the bandwidth limitation of LTTs, we replaced the transformers by a two-stage C6XXL116T SQUID manufactured by the Physikalisch-Technische Bundesanstalt <cit.>. SQUIDs are highly sensitive, high gain current-to-voltage converters with an ultra low amplifier noise. The SQUID in our setup is enclosed in a niobium shield and mounted to the still, which is located in a field compensated region of our dilution refrigerator (see Fig. <ref>). Biasing of the SQUID as well as the feed-back loop are provided by a room temperature XXF Magnicon FLL <cit.>. The final overall circuit diagram is shown in Fig. <ref>.

In going from LTTs to SQUID amplifiers, we noticed a general sensitivity of the SQUID to high frequency noise, as is emitted by digital electronics, radio transmitters and switch-mode power supplies. Without proper grounding and filtering of cable shields, instruments and signal lines, we were not able to observe a VΦ-characteristic of the input stage of our two-stage SQUID.
In a first attempt, a 1:1 audio-transformer (bandwidth 20 Hz to 20 kHz) and a 5 kHz first order low-pass filter were installed to decouple the drive from the PXI ground and to suppress high frequency noise. An additional 1 MHz second order in-line low-pass filter was installed between the electronics rack and the cryostat to prevent differential and common mode high frequency noise from entering the cryostat <cit.>. A reference potential and high frequency drain were provided by an additional grounding point outside the cryostat. With these measures, stable operation of the SQUID was achieved.

Further improvements were made by (a) installing a R_S = 50 Ω shunt resistor across the SQUID input terminals, (b) moving the balancing circuit to the mixing chamber, and (c) changing the balancing resistors to R_C = 500 Ω. The shunt resistor and the input inductance of the SQUID (L_i = 1.8 μH) form a first-order low-pass filter with a cut-off frequency of 4.4 MHz, preventing very-high and ultra-high frequency noise from entering the SQUID. Contrary to the LTT, the excitation frequency of the SQUID amplified Wheatstone bridge can be chosen freely. Thus we minimize the noise level by selecting 571 Hz, which is sufficiently far away from low frequency 1/f-noise and harmonics of 50 Hz.

§ PERFORMANCE

We now evaluate the performance of our SQUID-amplified torque magnetometer based on a de Haas-van Alphen measurement on the delafossite PdRhO_2. For this, a 300×200×50 μm sized PdRhO_2 single crystal was fixed on a PRC400 micro-cantilever and installed in our cryostat (see inset of Fig. <ref>). Its magnetic torque was measured at a temperature of (700±20) mK in magnetic fields up to 15 T. Data were taken during magnetic field down sweeps at a constant rate of 30 mT/min. During a magnetic field sweep from 15 to 7.5 T, i.e. in approximately four hours, we detect on the order of a hundred flux jumps and on the order of ten integrator resets of our SQUID, showing the long term stability of the FLL. Whilst flux jumps equilibrate on a time scale much shorter than the period of our excitation and do not cause signal disturbances, integrator resets usually cause spikes in our lock-in signal. Most of these events can be traced back to broadband pulses in the main electrical power grid. Thus a post-processing routine was applied to the data to remove these spikes.

As can be seen in Fig. <ref>a, at high fields the paramagnetic magnetization of PdRhO_2 induces a magnetic torque proportional to B^2, which is shown as the raw output voltage of the LTT and SQUID setups. Deviations from this behavior are likely due to saturating paramagnetic impurities at low fields. Both of the presented torque data sets were taken with the same sample and micro-cantilever. Using the excitation current and gain of each setup, the relative resistance change Δ R/R of both Wheatstone bridges can be calculated. The measured gain of the LTT is G_LTT(66 Hz) = 25 (see Fig. <ref>), whilst the SQUID gain is determined by G_S = R_f × M_in/M_f. For a feed-back resistor of R_f = 10 kΩ, a mutual input inductance of 1/M_in = 0.307 μA/Φ_0 and a feed-back inductance of 1/M_f = 42.1 μA/Φ_0, the SQUID gain is G_S = 1.37 V/μA.

The relative resistance change for a voltage read-out Wheatstone bridge (unamplified and LTT amplified) is:

Δ R/R = 2(1/R + 1/R_C) × Δ V/I_X,

which can be derived from a voltage divider on the active side under the assumption of Δ R ≪ R (see Appendix A). For the SQUID amplified case, where the off-balance current is measured, the formula is Δ I/I_X = Δ R/(R+R_C) (see Appendix A).
Taking into account that R_C ≈ R in our SQUID setup, we obtain:

Δ R/R = 2 Δ I/I_X.

Figure <ref>b shows the B^2-background subtracted magnetic torque signals of the PdRhO_2 crystal, given as Δ R/R, for a magnetic field angle of 2.5^∘ with respect to the crystallographic c-axis within the (100)-plane. Dominant quantum oscillations and a beating of the envelope function are visible due to the presence of two adjacent frequencies. A Fourier transform of the de Haas-van Alphen oscillations in 1/B can be found in Fig. <ref>c. Here two adjacent frequencies arise due to the warping of the quasi-two dimensional cylindrical Fermi surface along the c-direction. A full angular dependence and Fermi surface topography of PdRhO_2 can be found in <cit.>. As can be seen in Fig. <ref>b and c, the SQUID amplified signal shows a clearly suppressed noise level compared to the LTT amplified case.

Figure <ref>a shows the linear spectral noise densities S_V for all three methods. Here S_V(f) = 1/√(θ) ∫_0^θ V(t) e^-2π i f t dt, where θ = 1 s is the captured time of the truncated Fourier transform. Whilst the unamplified and SQUID amplified techniques are broadband methods, the usable frequency of the LTT setup is limited to (70±4) Hz. It should be noted that the signal peak heights in Fig. <ref> are generally not comparable between different experiments, as the zero torque off-balance signal is not zeroed in our experiment. The linear spectral noise densities near the given excitation frequencies are approximately 20 nV/√(Hz) (unampl.), 250 nV/√(Hz) (LTT ampl.) and 4 μV/√(Hz) (SQUID ampl.), measured at the PXI scope and indicated as markers in Fig. <ref>a. In general, the spectral noise densities were found to be independent of excitation current, magnetic field and the state of the magnet (persistent or driven mode). However, the noise level increased to 8 μV/√(Hz) in the SQUID setup during fast field sweeps of 0.3 T/min.

In order to compare the resolving powers of the unamplified, LTT and SQUID amplified circuits, the resistance noise spectra are calculated by using the gains and applying Eqn. <ref> and <ref>, respectively, as before. The resulting noise spectra are shown in Fig. <ref>b. Since our post-acquisition digital lock-in uses a time constant of τ = 1 s, the linear noise densities are integrated over a bandwidth of 1 Hz around the measurement frequency. Close to the respective measurement frequencies, we obtain root-mean-square resistance resolutions of Δ R/R = 1.5×10^-5, 8×10^-6 and 7×10^-7 for the unamplified, LTT and SQUID amplified cases (see markers in Fig. <ref>b); a numerical sketch of this conversion is given below. The SQUID amplified read-out hence achieves a ten to twenty times better resistance and magnetic torque resolution than conventional methods. This improved resolution can also be seen in the lower noise level (Fig. <ref>b and c) compared to the LTT amplified measurement. Note that the quoted resolutions are root-mean-square values, whereas Fig. <ref>b shows the absolute noise.

As can be seen from Eqn. <ref> and <ref>, both resistance resolutions scale inversely with the excitation current and can therefore be made arbitrarily small by increasing the excitation current. However, power dissipation is a major issue for low and ultra-low temperature measurements. This is particularly true for the silicon based micro-cantilevers, with their low thermal conductance at low temperatures, when mounted in vacuum. The piezoelectric track of these cantilevers generates heat close to the sample, which is poorly thermalized to the platform.
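For concreteness, the conversion from the measured noise densities to the quoted resistance resolutions can be sketched in a few lines of Python. The gains, noise densities and the excitation current I_X = 8 μA are taken from the text above; the bridge resistances for the unamplified and LTT cases (R = 500 Ω, R_C = 1 kΩ) are plausible assumptions based on the values given in the "Low Temperature Balancing" section, not values stated for those specific runs.

    import numpy as np

    # Noise densities measured at the PXI scope (V/sqrt(Hz))
    S_V = {"unamplified": 20e-9, "LTT": 250e-9, "SQUID": 4e-6}
    G = {"unamplified": 1.0, "LTT": 25.0}   # voltage gains
    G_SQUID = 1.37e6                        # V/A, SQUID transimpedance gain
    I_X = 8e-6                              # A, bridge excitation current
    R, R_C = 500.0, 1000.0                  # Ohm (assumed for the voltage read-out setups)
    bandwidth = 1.0                         # Hz, digital lock-in with tau = 1 s

    # Voltage read-out, Eq. (1): dR/R = 2 (1/R + 1/R_C) dV / I_X
    for setup in ("unamplified", "LTT"):
        dV = S_V[setup] * np.sqrt(bandwidth) / G[setup]  # rms noise referred to the bridge
        print(setup, f"{2.0 * (1.0/R + 1.0/R_C) * dV / I_X:.1e}")  # ~1.5e-5, ~8e-6

    # Current read-out with R_C = R, Eq. (2): dR/R = 2 dI / I_X
    dI = S_V["SQUID"] * np.sqrt(bandwidth) / G_SQUID     # rms current at the SQUID input
    print("SQUID", f"{2.0 * dI / I_X:.1e}")              # ~7e-7

With these inputs the script reproduces the three quoted root-mean-square resolutions, Δ R/R ≈ 1.5×10^-5, 8×10^-6 and 7×10^-7.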
In Appendix B, we estimate the sample temperature based on the cantilever geometry and as a function of the excitation current and platform temperature. As can be seen there, for an excitation current of I_X = 8 μA (I_R = 4 μA through the micro-cantilever) we reach a sample temperature of approximately 700 mK. Measurements down to about 200 mK are possible by reducing the excitation current to approximately 100 nA (see Fig. <ref>). Even lower temperatures at higher excitation currents might be achieved by mounting the micro-cantilever directly inside the mixing chamber or in a ^3He immersion cell. In this case, the excitation current is only limited by the Kapitza resistance between the micro-cantilever and the ^3He liquid. However, much more sophisticated rotator mechanisms, such as piezo-electric rotators, would be required in order to study the angular dependence of the dHvA effect in these cells.

Following the theoretical torque calibration constant of τ = 1×10^-7 Nm × Δ R/R <cit.>, we obtain a torque resolution of Δτ = 7×10^-14 Nm or, equivalently, a magnetic moment resolution of Δ m = 5×10^-15 J/T at B = 15 T and T = 700 mK. The latter is four orders of magnitude better than commercially available SQUID VSMs <cit.>, at significantly lower base temperatures. Note that the resolution is inversely proportional to the excitation current, which itself is limited by the thermalization of the micro-cantilever and sample. Therefore, the resolution effectively degrades when lowering the sample temperature in our setup.

A disadvantage of the new level of precision granted by the SQUID is a general sensitivity to environmental fluctuations. Although the balancing resistors of the cold compensation have a temperature coefficient of better than 10^-4 K^-1, minute changes of the 1 K-pot or mixing chamber temperature are sufficient to induce slowly varying backgrounds in our measurements. Thus special care had to be taken to thermally decouple the balancing circuits from the 1 K-pot and mixing chamber, whilst keeping them at a constantly low temperature.

At present, the resolution of our setup is mainly limited by the Johnson noise of the 50 Ω shunt resistor across the SQUID terminals (1.055 pA/√(Hz)), which is the dominant source of thermal noise and accounts for 30% of the overall noise. Additionally, the output noise of the function generator (50 fA/√(Hz)) and the intrinsic SQUID noise (200 fA/√(Hz)) account for another 10% of the total noise level of 4 μV/√(Hz) / (1.37 V/μA) = 2.9 pA/√(Hz) (for further details see Appendix C). We hypothesize that the remaining 60% of the observed noise originates from mechanical pick-up and random telegraph noise within the micro-cantilevers <cit.>, as well as from noise entering the FLL wiring.

§ CONCLUSION

In summary, we have successfully developed and built a new highly sensitive, ultra-low noise torque magnetometer for sub-millimeter sized samples, suitable for high magnetic fields and low temperatures. The magnetometer is based on a standard piezo-electric micro-cantilever and utilizes a two-stage SQUID as the null-detector of a cold Wheatstone bridge. We were able to demonstrate its performance in a de Haas-van Alphen experiment on the metallic delafossite PdRhO_2 down to 700 mK and up to 15 T, and achieved a torque resolution of Δτ = 7×10^-14 Nm at an excitation current of 8 μA. This is the first successful use of a SQUID in a resistance measurement of such high resolution up to 15 T.
Comparing our setup to conventional low temperature techniques, we were able to show that SQUID amplification offers up to one order of magnitude higher resolution than unamplified and low temperature transformer amplified Wheatstone bridges. Due to the general applicability of balanced bridge circuits to highly sensitive resistance, inductance and capacitance measurements, we would like to point out the possibility of applying SQUID amplified read-outs to many electrical and thermal transport, ac-susceptibility, heat-capacity as well as thermal expansion and magnetostriction experiments, even in magnetic fields up to 15 T.

§ ACKNOWLEDGMENTS

The authors would like to acknowledge J. Saunders and A. Casey et al. at Royal Holloway, University of London for preceding fruitful discussions and ideas, as well as P.-J. Zermatten for technical advice and discussion of the manuscript. Furthermore, we would like to thank the Max-Planck Society and the Deutsche Forschungsgemeinschaft, project "Fermi-surface topology and emergence of novel electronic states in strongly correlated electron systems", for their financial support.

§ RESISTANCE RESOLUTION OF VOLTAGE AND CURRENT READ-OUT WHEATSTONE BRIDGES

In this appendix, we derive the resistance resolution for voltage and current read-out Wheatstone bridges. For simplicity we assume the circuit diagrams of Fig. <ref>. The voltage off-balance in a Wheatstone bridge, Δ V = V_S - V_P, is the difference between the sensing and passive sides of the Wheatstone bridge. For large voltmeter input impedances and equal compensation resistors R_C, these are given by:

V_S = (R+Δ R)/(2R+Δ R) V_X and V_P = (1/2) V_X.

Combining them, one obtains:

Δ V = Δ R/(4R+2Δ R) V_X.

Under the assumption of small resistance changes, i.e. Δ R ≪ R, the equation simplifies to:

Δ R/R ≈ 4Δ V/V_X = 2(1/R+1/R_C) Δ V/I_X.

In a current read-out bridge, the off-balance current is Δ I = I_2 - I_1. These currents can be calculated from the voltage drops across the balancing resistors:

I_1 = V_1/R_C, I_2 = V_2/R_C.

As the superconducting SQUID input coil presents a negligible impedance at low frequency, an ideal short between the sensing and passive sides can be assumed. Thus R and R_C form parallel networks and one obtains:

V_1 = I_X (1/R + 1/R_C)^-1, V_2 = I_X (1/(R+Δ R) + 1/R_C)^-1.

Combining these with the above leads to:

Δ I = (I_X/R_C) [ (R R_C + Δ R R_C)/(R + Δ R + R_C) - R R_C/(R + R_C) ].

And by assuming Δ R ≪ R, this simplifies to:

Δ I/I_X = Δ R/(R+R_C).

§ SAMPLE TEMPERATURE

The sample temperature T_S was studied as a function of the excitation current I_R for our PRC400 piezo-electric micro-cantilever <cit.>, which is mounted in vacuum and heat sunk through a thin layer of Apiezon N grease to a thermal bath of temperature T_B. T_S was determined by measuring the excitation current dependence of the quantum oscillation amplitude of a Sr_2RuO_4 and a PdRhO_2 crystal (see solid points in Fig. <ref>) for T_B = 50 mK. Note that T_B increased when the current was higher than 10 μA. The sample temperatures were calculated by applying the Lifshitz-Kosevich temperature reduction term to the observed amplitude suppression. The cyclotron masses of both materials were taken from <cit.>. The error bars at low excitation currents are dominated by the resolution of the quantum oscillation amplitude and the flatness of the amplitude versus temperature curve.
At high currents, the errors on the effective masses and on the estimated zero-temperature quantum oscillation amplitude are predominant. We compare the experimental result with a geometrical model in which the heat is conducted through the silicon cantilever and a thin layer of Apiezon N grease (κ_Ap(T) = 10^-5 W cm^-1 K^-3 × T^2) <cit.>. The effective geometry is shown in Fig. <ref>. Thermal boundary resistances were ignored. Due to their poor aspect ratio and correspondingly high thermal resistance, the gold leads connecting to the piezo-resistive tracks do not contribute to the thermalization of the cantilever and could also be ignored. The bottle-neck of the thermal path at low temperature is the thinned-down end of the silicon chip to which the micro-cantilevers are attached. The result is shown as lines in Fig. <ref>. The model fits nicely for excitation currents I_R > 3 μA when the thermal conductivity of the epitaxial silicon cantilevers is taken to be three orders of magnitude lower than the literature value <cit.> for bulk crystalline silicon of κ = 2 × 10^-1 W cm^-1 K^-4 × T^3. Based on the literature value and without a direct measurement using quantum oscillations of a sample with a higher effective cyclotron mass, the sample temperature for I_R = 4 μA would be underestimated to be around 100 mK. For lower excitation currents, the sample temperature seems to be higher than in our model, levelling off at roughly 200 mK. The difference might be ascribed to additional heating induced by rf noise not included in our model.

§ NOISE SOURCES

The Johnson noise at the SQUID input arises mainly from the resistors in the Wheatstone bridge. The influence of external room temperature resistors is lowered by the almost perfect balancing of the Wheatstone bridge. As can be seen in Fig. <ref>b, the Johnson noise of the piezoelectric track and balancing resistor induces a current noise in the upper and lower branches of the Wheatstone bridge. The thermal noise power per unit bandwidth of each of these resistors is given by:

S^2_V = 4 k_B T R.

Note that in resistor networks the total noise is given by the sum of the mean-square noises, due to the uncorrelated nature of the individual noise sources. Thus, in the low frequency limit (f ≪ f_-3dB ≈ 70 MHz), where the input impedance of the SQUID is ignored, the noise current arising from each branch through the SQUID is given by:

S_I = √(S^2_V(R) + S^2_V(R_C))/(R + R_C),

where the current noise S_I is limited by the total serial resistance R + R_C of each branch. Summing over both branches leads to the total noise current at the SQUID input:

S_I = √(2×(4 k_B T R + 4 k_B T_C R_C))/(R + R_C).

In the present setup R = R_C = 500 Ω, T = 700 mK and the temperature of the compensation resistors T_C = 100 mK, leading to an overall Johnson noise level of the balancing resistors of S_I = 200 fA/√(Hz). The shunt resistor R_S = 50 Ω at T = 1 K, on the other hand, gives rise to a Johnson noise of 1.05 pA/√(Hz) and clearly dominates over the noise of the balancing resistors. Additional noise arises from the output of the PXIe-4463 function generator. Following the manufacturer's data sheet, the corresponding output noise level is 60 nV/√(Hz) at 100 Hz.
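These thermal contributions are easily verified numerically. The following minimal Python sketch evaluates Eq. (C3) for the values quoted above; the shunt contribution follows from S_I = √(4 k_B T/R_S).

    import numpy as np

    k_B = 1.380649e-23  # J/K

    def bridge_johnson_noise(R, T, R_C, T_C):
        """Eq. (C3): total Johnson-noise current through the SQUID input from
        both bridge branches, S_I = sqrt(2 (4 k_B T R + 4 k_B T_C R_C)) / (R + R_C)."""
        return np.sqrt(2.0 * (4.0 * k_B * T * R + 4.0 * k_B * T_C * R_C)) / (R + R_C)

    # piezo tracks at 700 mK, compensation resistors at 100 mK
    S_bridge = bridge_johnson_noise(R=500.0, T=0.7, R_C=500.0, T_C=0.1)
    print(f"bridge: {S_bridge*1e15:.0f} fA/sqrt(Hz)")   # ~200 fA/sqrt(Hz)

    # 50 Ohm shunt across the SQUID input at ~1 K
    S_shunt = np.sqrt(4.0 * k_B * 1.0 / 50.0)
    print(f"shunt:  {S_shunt*1e12:.2f} pA/sqrt(Hz)")    # ~1.05 pA/sqrt(Hz)

Both the ~200 fA/√(Hz) bridge noise and the 1.05 pA/√(Hz) shunt noise quoted in the main text are recovered.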
Taking into account the total impedance of the circuit and a Wheatstone bridge off-balance of 0.5%, this output noise amounts to a noise current of S_I = 50 fA/√(Hz) at the SQUID input.

§ REFERENCES

[1] D. Shoenberg, Magnetic Oscillations in Metals (Cambridge University Press, 2009).
[2] P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. 78, 17 (2006).
[3] Q. Si and F. Steglich, Science 329, 1161 (2010).
[4] J. Singleton and C. Mielke, Contemp. Phys. 43, 63 (2010).
[5] Q. Si, R. Yu, and E. Abrahams, Nature Rev. Mat. 1, 16017 (2016).
[6] A. P. Mackenzie, Rep. Prog. Phys. 80, 032501 (2017).
[7] F. Pobell, Matter and Methods at Low Temperatures (Springer, 2006).
[8] A. Amann, M. Nallaiyan, L. Montes, A. Wilson, and S. Spagna, IEEE Trans. Appl. Supercon. 27, 3800104 (2017).
[9] J. S. Brooks, M. Naughton, Y. P. Ma, P. M. Chaikin, and R. V. Chamberlin, Rev. Sci. Instrum. 58, 117 (1987).
[10] W. A. G. Kampert, Magnetic properties of organometallic compounds in high magnetic fields, PhD thesis, Radboud University Nijmegen (2012).
[11] S. G. Albert, Torque magnetometry on graphene and Fermi surface properties of VB_2 and MnB_2 single crystals studied by the de Haas-van Alphen effect, PhD thesis, University of Technology Munich (2015).
[12] M. A. Wilde and D. Grundler, in Quantum Materials, Lateral Semiconductor Nanostructures, Hybrid Systems and Nanocrystals, edited by D. Heitmann, NanoScience and Technology (Springer, 2010), pp. 245-275.
[13] L. Libioulle, A. Radenovic, E. Bystrenova, and G. Dietler, Rev. Sci. Instrum. 74, 1016 (2003).
[14] J. E. Proctor, A. W. Smith, T. M. Jung, and S. I. Woods, Rev. Sci. Instrum. 86, 073102 (2015).
[15] N. Oukhanski, M. Grajcar, E. Ilichev, and H.-G. Meyer, Rev. Sci. Instrum. 74, 1145 (2003).
[16] A. M. Robinson and V. I. Talyanskii, Rev. Sci. Instrum. 75, 3169 (2004).
[17] CMR Direct, Willow House, 100 High Street, Somersham, PE28 3EH, UK - http://www.cmr-direct.com
[18] J. Clarke and A. I. Braginski, The SQUID Handbook (Wiley-VCH Verlag GmbH und Co. KGA, 2004).
[19] D. Drung, C. Aßmann, J. Beyer, A. Kirste, M. Peters, F. Ruede, and T. Schurig, IEEE Trans. Appl. Supercon. 17, 699 (2007).
[20] B. R. Barnard and A. D. Caplin, J. Phys. E: Sci. Instrum. 11, 1117 (1978).
[21] J. A. Rowlands and S. B. Woods, Rev. Sci. Instrum. 47, 795 (1976).
[22] J. Romero, T. Fleischer, and R. Huguenin, Cryogenics 30, 91 (1989).
[23] C. P. Lusher, J. Li, M. E. Digby, R. P. Reed, B. Cowan, J. Saunders, D. Drung, and T. Schurig, Appl. Supercon. 6, 591 (1998).
[24] F. Arnold, B. Yager, J. Nyéki, A. J. Casey, A. Shibahara, B. P. Cowan, and J. Saunders, J. Phys. Conf. Ser. 568, 032020 (2014).
[25] M. Espy, A. Matlashov, and P. Volegov, J. Mag. Res. 228, 1 (2013).
[26] T. Sakurai, R. Goto, N. Takahashi, S. Okubo, and H. Ohta, J. Phys. Conf. Ser. 334, 012058 (2011).
[27] C. A. Mears, S. E. Labov, M. Frank, H. Netel, L. J. Hiller, M. A. Lindemann, D. Chow, and A. T. Barfknecht, IEEE Trans. Appl. Supercond. 7, 3415 (1997).
[28] C. Enss (ed.), Cryogenic Particle Detectors, Topics in Applied Physics Vol. 99 (Springer, 2005).
[29] S. Kempf, M. Wegner, A. Fleischmann, L. Gastaldo, F. Herrmann, M. Papst, D. Richter, and C. Enss, AIP Advances 7, 015007 (2017).
[30] A. Casey, F. Arnold, L. V. Levitin, C. P. Lusher, J. Nyéki, J. Saunders, A. Shibahara, H. van der Vliet, B. Yager, D. Drung, T. Schurig, G. Batey, M. N. Cuthbert, and A. J. Metthews, J. Low Temp. Phys. 175, 764 (2014).
[31] D. Rothfuss, A. Reiser, A. Fleischmann, and C. Enss, Appl. Phys. Lett. 103, 052605 (2013).
[32] I. R. Walker, J. Low Temp. Phys. 90, 205 (1992).
[33] J. Barraclough, Electrical transport properties of URhGe and BiPd at very low temperatures, PhD thesis, University of St Andrews (2015).
[34] M. Bravin, S. A. J. Wiegers, P. E. Wolf, and L. Puech, J. Low Temp. Phys. 89, 723 (1992).
[35] R. Nagendran, N. Thirumurugan, N. Chinnasamy, M. P. Janawadkar, and C. S. Sundar, Rev. Sci. Instrum. 82, 015109 (2011).
[36] Quantum Design, Inc., 6325 Lusk Boulevard, San Diego, CA 92121-3733, United States - http://www.qdusa.com
[37] C. Rossel, P. Bauer, D. Zech, J. Hofer, M. Willemin, and H. Keller, J. Appl. Phys. 79, 8166 (1996).
[38] A. McCollam, P. G. van Rhee, J. Rook, E. Kampert, U. Zeitler, and J. C. Maan, Rev. Sci. Instrum. 82, 053909 (2011).
[39] F. Arnold, M. Naumann, S. Khim, H. Rosner, V. Sunko, F. Mazzola, P. D. C. King, A. P. Mackenzie, and E. Hassinger, Phys. Rev. B 96, 075163 (2017).
[40] C. W. Hicks, A. S. Gibbs, A. P. Mackenzie, H. Takatsu, Y. Maeno, and E. A. Yelland, Phys. Rev. Lett. 109, 116401 (2012).
[41] C. W. Hicks, A. S. Gibbs, L. Zhao, P. Kushwaha, H. Borrmann, A. P. Mackenzie, H. Takatsu, S. Yonezawa, Y. Maeno, and E. A. Yelland, Phys. Rev. B 92, 014425 (2015).
[42] P. Kushwaha, V. Sunko, P. J. W. Moll, L. Bawden, J. M. Riley, N. Nandi, H. Rosner, M. P. Schmidt, F. Arnold, E. Hassinger, T. K. Kim, M. Hoesch, A. P. Mackenzie, and P. D. C. King, Sci. Adv. 1, e1500692 (2015).
[43] Hitachi High-Technologies Europe GmbH, Europark Fichtenhain A 12, 47807 Krefeld, Germany - http://www.hht-eu.com
[44] S. R. Julian, A. P. Mackenzie, G. J. McMullan, C. Pfleiderer, F. S. Tautz, I. R. Walker, and G. G. Lonzarich, J. Low Temp. Phys. 95, 39 (1994).
[45] Physikalisch-Technische Bundesanstalt, Fachbereich 7.21, Abbestr. 2-12, 10587 Berlin, Germany - http://www.ptb.de
[46] Magnicon GmbH, Barkhausenweg 11, 22339 Hamburg, Germany - http://www.magnicon.com
[47] C. Bergemann, S. R. Julian, A. P. Mackenzie, S. NishiZaki, and Y. Maeno, Phys. Rev. Lett. 84, 2662 (2000).
http://arxiv.org/abs/1706.08350v4
{ "authors": [ "Frank Arnold", "Marcel Naumann", "Thomas Lühmann", "Andrew P. Mackenzie", "Elena Hassinger" ], "categories": [ "physics.ins-det" ], "primary_category": "physics.ins-det", "published": "20170626125920", "title": "Application of SQUIDs to low temperature and high magnetic field measurements - Ultra low noise torque magnetometry" }
The nature of the gaseous and dusty cloud G2 in the Galactic Centre is still under debate. We present three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations of G2, modeled as an outflow from a “compact source” moving on the observed orbit. The construction of mock position-velocity (PV) diagrams enables a direct comparison with observations and allows us to conclude that the observational properties of the gaseous component of G2 could be matched by a massive (Ṁ_w = 5×10^-7 M_⊙ yr^-1) and slow (50 km s^-1) outflow, as observed for T Tauri stars. In order for this to be true, only the material at larger (>100 AU) distances from the source must actually be emitting; otherwise G2 would appear too compact compared to the observed PV diagrams. On the other hand, the presence of a central dusty source might be able to explain the compactness of G2's dust component. In the present scenario, 5-10 years after pericentre the compact source should decouple from the previously ejected material, due to the hydrodynamic interaction of the latter with the surrounding hot and dense atmosphere. In this case, a new outflow should form ahead of the previous one, which would be the smoking gun evidence for an outflow scenario.

accretion, accretion disks – black hole physics – Galaxy: centre – ISM: clouds – stars: winds, outflows

§ INTRODUCTION

Since the date of its discovery, the nature of the small gaseous and dusty cloud G2, discovered by <cit.>, has remained enigmatic. The Brγ, Paα and HeI recombination lines detected with the integral field spectrographs SINFONI[http://www.eso.org/sci/facilities/paranal/instruments/sinfoni/] at the VLT <cit.> and OSIRIS[http://www2.keck.hawaii.edu/inst/osiris/] at the Keck telescope <cit.> show a gas component extended both in size and velocity and following a highly eccentric Keplerian orbit <cit.>. <cit.> and <cit.> have also reported the detection of a blue-shifted component, simultaneous with the red-shifted one, consistent with G2 passing pericentre as an extended object in March-April 2014. The detections in the L' and M' bands in the NIR with NACO[http://www.eso.org/sci/facilities/paranal/instruments/naco/] at the VLT <cit.> and with NIRC2[http://www2.keck.hawaii.edu/inst/nirc2/] at the Keck telescope <cit.> suggest that G2 has an unresolved dust component at a temperature of roughly 550 K[The unresolved nature of the dust component shows that the latter is more compact than the gaseous one. As a matter of fact, the large point spread function (PSF) of NACO and NIRC2 makes the size of the dust emission only marginally discrepant with the sizes inferred from the Brγ emission. As shown by <cit.>, the PSF in L' is also bigger than the tidal radius of a 2 M_⊙ star, hence any dusty material might still be considerably extended and unbound from a possible central object.]. Several observational programs are currently monitoring its evolution[https://wiki.mpe.mpg.de/gascloud/FrontPage], also focusing on the interaction of this object with the extreme gravitational field of the 4.31×10^6 M_⊙ supermassive black hole (SMBH) centred on SgrA* <cit.> and with the hot and dense plasma accreting onto it.
For example, increased emission in the X-ray <cit.> and radio <cit.> bands has been predicted by some models of the interaction of G2 with the outer accretion flow, but no consistent back-reaction from either the accretion flow or SgrA* has been detected so far <cit.>. G2's partial or total disruption might also affect the accretion rate onto SgrA* or the statistics and properties of flares from SgrA*: <cit.> showed that there has been an increase in the rate of bright X-ray flares since summer 2014, which might have been induced by G2's pericentre passage. However, this result is still under debate <cit.> and further monitoring of SgrA* is needed to draw any strong conclusion. Finally, <cit.> have recently presented SINFONI and NACO observations of G2 in 2015 and 2016. In these, G2 appears to have passed pericentre, continuing to follow more or less the same predicted orbit.

These observations are performed with the most up-to-date instruments, pushing them to the limits of their capabilities; nonetheless, given the very small scales, it is still hard to evaluate the importance of the different physical processes in play. Theoretically modeling the origin and fate of G2 has hence turned out to be challenging, but in the last two years several studies have shed light on this peculiar object. The theoretical picture presents a dichotomy: G2 could be either a clump of diffuse gas and dust plunging into SgrA* or the outflow from a central source (possibly a young star) on a high eccentricity orbit around the SMBH.

The first scenario was originally proposed by <cit.>. In this context, the gas is at a temperature T ≈ 10^4 K and is fully ionized by the ultraviolet (UV) radiation field produced by the nearby young and massive stars. Under the assumption of case-B recombination and of a homogeneous sphere of radius R_c ≈ 1.9×10^15 cm, these authors derived a mass of M_G2 ≈ 1.7×10^28 g ≈ 3 Earth masses and a uniform density of ρ_c ≈ 6.1×10^-19 g cm^-3. Several studies have been carried out for this scenario, focusing on the evolution of G2 and on its interaction with SgrA*'s accretion flow <cit.>. G2 seems to be trailed by a larger component (named G2t or “the tail”) following it on a similar orbit <cit.>, and <cit.> have shown that G2 could be connected to the previously discovered gas/dust cloud G1 <cit.>, whose orbit can be matched by a G2-like orbit after a drag force is applied to it (<cit.>; but see <cit.> for a different finding). This observational finding suggests that G2 is actually part of a much larger streamer. The idea of a gas streamer has already been proposed by <cit.>, where the streamer could be produced by tidal stripping of the outer envelope of a late-type giant star, in a close encounter of such a star with the central SMBH. Another possibility for the origin of G2, if G2 is not linked to a central object, is clump formation through the non-linear thin shell instability in colliding winds of the outer O/WR stars <cit.>.

The second scenario involves a connection with a central source on G2's orbit. G2's Brγ emission could either result from the gas lost by a photoevaporating disk <cit.> or by a photoevaporating starless (proto-)planet, tidally captured by the SMBH <cit.>, or be produced by the interaction between an outflow from a low-mass star and the hot accretion flow <cit.> or by a nova outburst <cit.>. <cit.> tried to fully explain the Brγ line-width with a combination of an accretion stream and a disk wind close to a low-mass star.
However, this is not in agreement with the PV diagrams obtained by <cit.> and <cit.>, showing a spatially resolved velocity gradient consistent with tidal stretching. Finally, given the unresolved and constant-luminosity L'-band emission, <cit.> hypothesized that G2 is the product of a binary star merger <cit.>, forming a new low-mass (<2 M_⊙) star and heating the dust component from inside. In a recent work, <cit.> showed that a relatively fast and massive outflow might also be able to reproduce both G2 and G2t at the same time, however neglecting the possible connection with the cloud G1. Differently from the present, more quantitative study, which focuses on reproducing G2 only, the one in <cit.> is rather meant to be a proof of concept. As already mentioned, observations keep hinting that G2 and G2t are closely related, but their connection is not yet fully established. This led us to test both scenarios; the link and differences between the two studies are discussed in section <ref>.

In this paper we focus on G2 only and present 3D simulations of an outflow scenario. Compared to the 2D simulations in <cit.>, 3D simulations represent the geometry of the problem in a more realistic way and allow a much stricter comparison with the observations. Unfortunately, the high resolution used for the 2D simulations in <cit.> cannot be reached in this 3D study. The current simulations should hence be thought of as complementary to the 2D ones presented in <cit.>, rather than simple upgrades of them. In Section <ref> we describe the setup of our simulations. The results are presented in Section <ref>, where we compare them to the observations and study the effect of the outflow parameters. Section <ref> is dedicated to a more careful discussion of the ionization of the gas and the related uncertainties, and of the numerical limitations. We also compare our study with previous ones and present the advantages and disadvantages of such a scenario. Summary and final remarks can be found in Section <ref>.

§ SIMULATION SETUP

The simulations presented in this paper were run with the Eulerian code PLUTO <cit.>. Performing these three-dimensional simulations with a uniform grid is computationally not feasible <cit.>, so we adopted the adaptive mesh refinement (AMR) strategy, implemented in the code through the CHOMBO[https://seesar.lbl.gov/anag/chombo/] library. For the refinement criterion, we chose the standard one in PLUTO, based on the second derivative error norm, and applied it to the density. The criterion has been widely tested and is able to resolve most of G2's material at the highest resolutions. The computational domain is Cartesian (with the exception of one test run in 2D cylindrical coordinates, see Table <ref>). A two-shock Riemann solver <cit.> has been chosen for the solution of the hydrodynamic equations. The outflow is modeled in a “mechanical” way, as in <cit.> and <cit.>: the velocity is set to the constant wind value v_w and the density ρ_w is set to satisfy Ṁ_w = 4π r_w^2 ρ_w v_w (see the sketch below). In order to reach a reasonable sampling of the input region, i.e. a good isotropy of the outflow, the input region's radius r_w varies in time, proportionally to the theoretical stagnation radius R_out (see Eq. <ref>), with minimum and maximum allowed values equal to 2.10×10^14 cm and 1.05×10^15 cm, respectively.
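The injection prescription amounts to a one-line evaluation of mass conservation. The following minimal Python sketch, using the standard-model parameters quoted below (Ṁ_w = 5×10^-7 M_⊙ yr^-1, v_w = 50 km s^-1), evaluates the density that Ṁ_w = 4π r_w^2 ρ_w v_w imposes at a given injection radius; the function and variable names are ours and purely illustrative.

    import numpy as np

    M_SUN_G = 1.989e33   # g
    YR_S = 3.156e7       # s

    def wind_density(r_cm, mdot_msun_yr=5e-7, v_w_kms=50.0):
        """Density imposed in the injection region by mass conservation,
        rho_w = Mdot_w / (4 pi r^2 v_w), for the standard-model parameters."""
        mdot = mdot_msun_yr * M_SUN_G / YR_S   # g/s
        v_w = v_w_kms * 1e5                    # cm/s
        return mdot / (4.0 * np.pi * r_cm**2 * v_w)

    # density at the minimum and maximum allowed input-region radii
    for r in (2.10e14, 1.05e15):
        print(f"r = {r:.2e} cm : rho_w = {wind_density(r):.2e} g/cm^3")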
The temperature of the injected material is set to T_w = 10^4 K and an adiabatic index Γ = 1 has been assumed <cit.>. Compared to <cit.>, the source's orbit is now a proper elliptical orbit and has been updated to the one derived by <cit.> from the Brγ observations. The orbit has been integrated beforehand with a leapfrog method and the source's positions and velocities are interpolated from the stored ones using a first-order Newton polynomial formula. The hot atmosphere is modeled following the density and temperature distribution used by several authors <cit.>, i.e.,

n_at ≃ 5.60×10^3 (1/d_BH,peri) cm^-3,
T_at ≃ 7.12×10^8 (1/d_BH,peri) K,

where d_BH,peri is the distance from SgrA* in units of the pericentre distance, i.e. 3×10^15 cm. This is a very idealized model and, given the uncertainties in the actual distribution of the accretion flow around SgrA*, we decided to keep it as simple as possible, in order to better understand the 0th-order hydrodynamical effects on G2. This would be difficult with more sophisticated modeling. As in <cit.> and <cit.>, we reset the atmosphere with the help of a passive tracer. In order to reproduce the outer shock propagating in the atmosphere, <cit.> did not apply the same recipe for two of their simulations. However, in those cases, the development of convective bubbles all around the SMBH region is apparent. This artifact is avoided in our approach. Finally, the SMBH's gravitational field has been modeled as a Newtonian point source with mass M_BH = 4.31×10^6 M_⊙ <cit.> at x, y, z = 0.

As in <cit.> and differently from <cit.>, we decided to start the simulation (and the outflow) at apocentre. As already pointed out in <cit.>, this choice is somewhat arbitrary. However, if the source of G2 had been scattered via multiple encounters <cit.> from the clockwise rotating disk of young stars <cit.>, any pre-existing gas envelope would have been tidally torn apart. A list of the simulations discussed in the present paper can be found in Table <ref>.

§ RESULTS

The purpose of this section is to present the evolution of our new 3D simulations and their comparison to observations, focusing on the new and accurate construction of mock PV diagrams and on the interpretation of the time dependence of the total Brackett-γ luminosity.

§.§ The standard model

We adopt a mass-loss rate of Ṁ_w = 5×10^-7 M_⊙ yr^-1 and a wind velocity of v_w = 50 km s^-1 for our standard model. As shown in Fig. <ref>, the evolution of the density distribution in this 3D simulation is very similar to that of the 2D simulations in <cit.> <cit.>. The outflow is free-flowing until its ram pressure reaches the pressure of the external hot and dense atmosphere. Hence, it is composed of an inner part, whose density scales as 1/r^2 (due to the continuity equation), surrounded by the part of the outflow that gets shocked by the impact with the atmosphere. This shocked material is highly Rayleigh-Taylor unstable. At the beginning, the outflowing material is still in a quasi-spherical configuration, since the isotropic thermal pressure of the atmosphere is still dominant compared to the anisotropic ram pressure. At later times, the free-wind region shrinks due to the increasing thermal pressure, the ram pressure makes it asymmetric, and the stripped shocked material forms a small tail trailing the source.
Overall, though more filamentary, the distribution of the outflowing gas is on large scales very similar to the one in the “diffuse cloud” simulations of <cit.>, <cit.> and <cit.>, particularly right before and after pericentre, when the material is first compressed into a thin filament by the tidal force from the SMBH and then expands, strongly increasing its cross section. Due to the asymmetry of the free-wind region and the formation of the small tail of stripped material, at the time of the observations the central source is always in the leading part of G2. The immediate implication is that the photocentre of the emission will never be on top of the source.

As expected, the simulation also shows that the emitting source becomes, at a certain point, distinguishable from the rest of G2. This might happen already around the years 2019-2020, when the source creates a second peak in the density distribution (see circles in the lowermost panels of Fig. <ref>). This is a clear difference compared to the diffuse cloud simulations, and the decoupling between the source and the previously emitted gas after pericentre could eventually be the smoking gun to understand the nature of G2.

§.§ Matching the PV diagrams

Compared to <cit.>, the 3D simulation now allows us to construct realistic PV diagrams, like the ones already presented in <cit.>. To do this, we first project every cell in our computational domain onto the sky plane, according to the latest orbital elements derived by <cit.> for the Brγ observations. This is done through a transformation from (x, y, z, v_x, v_y, v_z) to (ra, decl, losv), where ra, decl and losv are the right ascension, the declination and the line-of-sight (l.o.s.) velocity, respectively. We can, from this, create a 3D histogram of the Brγ luminosity, with bin sizes equal to 12.5 mas for ra and decl and 69.6 km s^-1 for losv. These values correspond to the size of the 3D pixels (voxels) in a SINFONI data cube. We then apply a smoothing in all directions with FWHM equal to 81 mas in right ascension and declination and to 120 km s^-1 in l.o.s. velocity. These values correspond to the spatial point spread function (PSF) and the spectral resolution. At this point, every cell is spatially projected onto the derived orbit, using it as a curved slit in the (ra, decl) space <cit.>. This operation reduces the triplet (ra, decl, losv) to a pair (pos, losv), where pos is the position on the orbit, and creates a 2D position-velocity histogram. Given the uncertainties in the luminosity discussed in Sec. <ref> and <ref>, every PV diagram is then scaled to its maximum. Noise is finally extracted from the observed PV diagrams and added to the simulated ones.

The luminosity is calculated using a functional form for the case-B recombination Brγ emissivity,

j_Brγ = 3.44×10^-27 (T/10^4 K)^-1.09 n_i n_e erg s^-1 cm^-3,

(where T is the wind material temperature and n_i and n_e are the ion and electron number densities), obtained by extrapolating the values given on page 73 of <cit.> <cit.>. In Sec. <ref> we will show that the amount of emission coming from the free-flowing part of the outflow is uncertain. For an outflow scenario, this is strongly dependent on the flux of ionizing photons reaching G2, which is not well constrained. For this reason, we present here the effect of different contributions of the free-flowing region to the total Brγ luminosity. Namely, we calculate PV diagrams assuming that the latter is ionized and emits in Brγ only outside a certain inner radius r_em.
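As an illustration of the cube-building and smoothing steps just described, the following Python sketch constructs the smoothed Brγ data cube with SINFONI-like voxels. It assumes the projected cell quantities (ra, dec, losv) and the cell Brγ luminosities are available as flat arrays; all names are ours and purely illustrative. The final step, the collapse of (ra, decl) onto a position along the curved slit defined by the orbit, is omitted for brevity.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def make_brgamma_cube(ra, dec, losv, lum,
                          dpix=12.5, dv=69.6,            # SINFONI voxel sizes (mas, km/s)
                          fwhm_pos=81.0, fwhm_v=120.0):  # spatial PSF, spectral resolution
        """Bin the projected cell luminosities into a (ra, dec, losv) cube and
        smooth it with the instrumental PSF / line-spread function."""
        bins = [np.arange(ra.min(), ra.max() + dpix, dpix),
                np.arange(dec.min(), dec.max() + dpix, dpix),
                np.arange(losv.min(), losv.max() + dv, dv)]
        cube, _ = np.histogramdd(np.column_stack([ra, dec, losv]),
                                 bins=bins, weights=lum)
        sigma = [fwhm_pos / 2.355 / dpix,   # FWHM -> sigma, in voxel units
                 fwhm_pos / 2.355 / dpix,
                 fwhm_v / 2.355 / dv]
        return gaussian_filter(cube, sigma=sigma)

An r_em cut would simply be applied beforehand, by zeroing the luminosity of all cells closer than r_em to the source before the binning.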
The radius r_em is hence a free parameter of our post-processing, and we choose r_em = [3×10^14, 10^15, 3×10^15] cm. We also calculated PV diagrams for the shocked outflow material only (in the text we will denote this case with r_em = r_shock). The results are shown in Fig. <ref>, <ref> and <ref>. Due to the particularly dramatic evolution of the gas during and after pericentre, the issues related to the luminosity discussed in Sec. <ref> and <ref> significantly affect the reliability of our mock PV diagrams after 2014.5. For this reason, we restrict our comparison to the pre-pericentre part of the orbit and avoid making predictions for post-pericentre PV diagrams. As mentioned in Sec. <ref>, the orbital solution of the centre of the emission is never equal to that of the source; our comparison uses a time offset of roughly half a year between simulation and observations. The match is not perfect. In principle, a quantitative comparison between observed and simulated PV diagrams might eventually be used, through an iterative repetition of the simulation, to determine the orbit of the source that produces the perfect match. However, the large number of parameters of our models might not allow a strong constraint on the source's orbit, and the high computational cost of these hydrodynamic simulations does not allow such a numerical experiment. Our purpose is rather to show which mass-loss rates and velocities an outflow should have to produce a reasonable result. This is already not trivial. Hence, we stick to a qualitative comparison and use a simple constant time offset. We stress, therefore, that this offset has no strong quantitative meaning. Nonetheless, as visible in Fig. <ref> and <ref>, our standard case is able to reproduce an increase in the line width of the Brγ emission, as in the case of the observations of G2. These figures also show that our standard model appears marginally resolved, with some broadening of G2's size as it approaches pericentre, as shown by <cit.> and <cit.>. When looking at the r_em = r_shock and 3×10^15 cm cases, the simulated material has a qualitatively comparable extent, even though it fails to reproduce the high-velocity material that seems to overshoot the orbit derived from observations just before the pericentre passage. This is again due to the fact that, close to pericentre, the outflow material is asymmetrically distributed with respect to the source, with most of the material in a trailing tail. When going to smaller values of r_em, the emitting region moves to slightly higher velocities and positions on the orbit, but it becomes smaller and smaller in the PV diagrams. This is a direct consequence of the location of the free-wind region and of its important impact on the outflow emission. In fact, given Eq. <ref>, the emission measure EM ∝ ∫ ρ_w^2 dV ∝ r^-1 diverges for small distances from the source. As a result, the more the inner part of the free-wind region is included, the more dominant the free-wind region becomes, and the smaller the emitting region visible in the PV diagrams. So, all in all, we conclude that a good match with the observations can be reached only if a tiny fraction of the free-wind region is actually emitting. This conclusion is general and can also be deduced from the parameter study in Sec. <ref>, where we show that G2 appears too small for every model when r_em < 3×10^15 cm. An even better match could probably be reached with a slightly different (more eccentric) orbital solution.
In fact, uncertainties in the observations seem to give enough room for this possibility. However, testing it directly with simulations is beyond the scope of the present work.§.§ Parameter study Following <cit.>, we performed a parameter study, varying the mass-loss rate and the velocity of the outflow. We hence ran models LMDOT3D and HMDOT3D with the same velocity as the standard model, but with a factor of 5 smaller and larger mass-loss rate, respectively. Concerning the velocity, we chose to run just the HV3D model, with wind velocity equal to v_w = 250 km s^-1 = 5×v_w,standard. As already discussed in <cit.>, given the isothermal equation of state, a temperature of T=10^4 K in the injected material brings the sound speed of the wind to c_s,w ≈ 10 km s^-1. As a consequence, for wind velocities too close to c_s,w, the injected thermal and ram pressures become comparable, leading to effective mass-loss rates and velocities higher than the nominal input values. However, <cit.> have already shown that a lower outflow velocity has the effect of reducing G2's size. Fig. <ref> shows the density maps for the three models of the parameter study. As already described in <cit.> and <cit.>, for LMDOT3D and HMDOT3D the outflow is too dense for the ram-pressure stripping to be efficient enough. Hence, the size of the outflow is mainly given by momentum equilibrium between the outflow and the external forces, namely the thermal and ram pressures of the atmosphere and the tidal force of the SMBH. This explains why LMDOT3D and HMDOT3D are respectively smaller and bigger than the standard model. In the HV3D case, the outflow is much less dense and the shocked material spreads out over a large volume. This enables the formation of a long cometary tail by efficient ram pressure stripping, as in the case of the model in <cit.>. Fig. <ref> shows the PV diagrams for our parameter study. In the case of model HMDOT3D, G2 looks too elongated when only the shocked wind material is considered, while a reasonable match to observations could eventually be reached in the case of r_em > 3×10^15 cm. Model LMDOT3D instead produces emission that is too compact for every assumption on r_em. HV3D can instead result in a bimodal distribution in the PV diagrams, when looking at the emission of the shocked material only. For HV3D, the separation between the two simulated emission spots is not large enough to match the observed positions of G2 and G2t on the orbit (see Fig. <ref>), but it motivated our attempt to model both components with a single wind source <cit.>. When looking at the luminosity evolution in Fig. <ref>, models LMDOT3D and HV3D have too low a luminosity <cit.>, while model HMDOT3D matches the observations when only the shocked material is considered, and is a factor ≈2 too luminous when r_em = 3×10^15 cm is adopted. The first evident effect is that lower mass-loss rates or higher velocities produce globally lower luminosities. This is simply explained by Eq. <ref> and <ref>, showing that the luminosity is proportional to the integral of ρ_w^2, and that ρ_w is directly proportional to the mass-loss rate and inversely proportional to the outflow velocity. So, to zeroth order, outflows with lower mass-loss rates and/or higher velocities are less dense (even in their shocked part) and have a lower emission measure, and vice versa. For any fixed model, a varying contribution is also given by the free-wind region, depending on the choice of r_em.
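To make this scaling explicit, one can combine the continuity equation with the emissivity introduced above (a short derivation of our own, consistent with the definitions used in this paper): ρ_w(r) = Ṁ_w/(4π r^2 v_w), so that L_Brγ ∝ ∫ n_i n_e dV ∝ (Ṁ_w^2/v_w^2) ∫_r_em^R_out r^-2 dr = (Ṁ_w^2/v_w^2)(1/r_em - 1/R_out). A factor of 5 decrease in Ṁ_w (as in LMDOT3D) or increase in v_w (as in HV3D) therefore reduces the emission measure of the free wind by a factor of 25, while r_em controls the residual divergence at small radii.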
This dominant contribution of the free-wind region, however, is in contradiction with what has been found by <cit.> with 2D simulations, where the shocked material was dominating the total luminosities close to pericentre. This is mainly explained by the poor resolution of the present simulations, as discussed in Sec. <ref>. As a consequence, we conclude that the absolute values of the calculated luminosities must be taken as lower limits, while the structure in the PV diagrams is a more solid and stable diagnostic tool. § DISCUSSION For a critical interpretation of the results presented in the previous section, a discussion of a few issues concerning the calculation of the Brγ luminosity is needed. The first issue is clearly visible in Fig. <ref> and <ref>: in order for this scenario to reproduce the size of G2 in position-velocity space, most of the inner unperturbed part of the outflow must be neutral (hence, dark in recombination line emission; r_em ≳ 3 × 10^15 cm). Understanding whether this is actually the case would require a solid knowledge and treatment of the source of gas ionization. A full integration of radiation in the hydrodynamical simulation would be needed, but the current simulations were already extremely time consuming, and a further coupling with a radiative transfer scheme would make them unfeasible. On top of this, most of the available studies make use of simplified and (sometimes dramatically) different assumptions on the flux of ionizing photons reaching G2. In Section <ref> we present a very basic calculation of the fraction of the free-wind region that is actually ionized by Lyman-continuum (Lyc) photons from the surrounding stars. Such an estimate contains several caveats, hence it does not have a particularly strong physical basis; nonetheless, it clearly shows how severely the contribution of the free-flowing region can depend on the different assumptions on the Lyc photon flux on G2. Another issue concerns the effect of numerics in the simulations on the resulting Brγ luminosity and, particularly, on the luminosity curves in Fig. <ref>. We show in Section <ref> that the absolute value of the total luminosity can be significantly affected by the resolution and by the geometry and symmetry used for the simulation. From this, we can conclude that the PV diagrams are more stable diagnostics, compared to luminosity curves, since they essentially represent the relative contribution to the total luminosity from different regions of G2. After these necessary discussions, the second part of this section compares our work to what has been done by other authors (Section <ref>) and gives our model a more physical context, with a focus on the possible nature of the central source (Section <ref>) and on the advantages and disadvantages of this scenario compared to the “diffuse cloud” one (Section <ref>). §.§ Ionization of the outflow In this section we try to estimate the contribution to the total luminosity of the free-flowing region of any wind in the Galactic Centre. The following calculation is based on the assumption that the ionization of the gas fully comes from UV photons from the nearby young stars <cit.>. Unfortunately, the flux of ionizing photons reaching G2 is not well known, so we decided to stick to a very simplified analytical calculation (see Section <ref> for a discussion of its limitations).
Its main purpose is to justify the use of r_em as a free parameter in the analysis of our hydrodynamic simulations. The derivation is based on equating, in a one-dimensional fashion, the rate of ionizing UV (Lyc) photons isotropically penetrating a spherical (“naked” free-flowing) region, whose density scales as 1/r^2, to the rate of recombinations occurring within it. In this way, we get the number of atoms in a free-wind shell needed to “consume” all the ionizing photons reaching G2. The thickness of this shell depends on the total size of the free-wind region and, of course, on the amount of available Lyc photons. The resulting equation is ϕ (R_out/D)^2 ≈ ∫_R_in^R_out α_rec n_e n_i 4π r^2 dr, where ϕ is the rate of emitted ionizing photons and D is the distance of G2 from the ionizing source. However, the values of these two latter quantities are not very well constrained, and one must keep in mind that there is more than one emitting source. In the following, ϕ/4π D^2 will simply be the flux of ionizing photons on G2, and we will consider different numbers used in previous calculations by different authors. R_out and R_in are the outer and inner radii of the ionized shell, respectively. α_rec is the total recombination coefficient, and we assumed α_rec = 2.59×10^-13 cm^3 s^-1, i.e., the value for case B recombination for pure hydrogen at T=10^4 K <cit.>. n_e and n_i are the number densities of the electrons and ions, respectively, in the gas, and r is the distance from the source. For a 1/r^2 density profile, n_e n_i ≈ ρ^2/(μ_e μ_i m_H^2) ≈ Ṁ_w^2/(16π^2 v_w^2 μ_e μ_i m_H^2 r^4), where Ṁ_w and v_w are the mass-loss rate and velocity of the wind, respectively, μ_e=1.17 and μ_i=1.29 are the electron and ion mean weights (for solar metallicity), and m_H is the hydrogen mass. So, substituting n_e n_i in Eq. <ref> and solving the integral, we get the inner radius R_in for which there is a balance between the rate of incoming ionizing photons and the rate of recombinations over the whole volume: R_in = [(ϕ/D^2)(4π v_w^2 μ_e μ_i m_H^2 R_out^2)/(α_rec Ṁ_w^2) + 1/R_out]^-1. The total volume of the free-wind region changes as the source moves along the orbit and encounters a higher and higher external pressure. In this case, we assume that the outer radius R_out is just the stagnation radius given by the atmosphere's thermal pressure only (we hence neglect any anisotropic pressure contributions): R_out = [Ṁ_w v_w/(4π P_amb)]^1/2. Substituting Eq. <ref> in Eq. <ref>, we get R_in = [(ϕ/D^2)(v_w^3 μ_e μ_i m_H^2)/(α_rec Ṁ_w P_amb) + (4π P_amb/(Ṁ_w v_w))^1/2]^-1. For our choice of the atmosphere (see Eq. <ref> and <ref>), the ambient thermal pressure varies with radius, and so do the inner and outer radii: R_in ≈ 3×10^15 [1.691×10^-2 (ϕ_50/D_pc^2)(v_w,50^3 d_BH,peri^2/Ṁ_w,-7) + 44.099 (1/(Ṁ_w,-7 v_w,50))^1/2 (1/d_BH,peri)]^-1 cm, where we expressed the rate of ionizing photons in units of 10^50 s^-1, D in units of pc, the wind's mass-loss rate Ṁ_w in units of 10^-7 M_⊙ yr^-1, its velocity v_w in units of 50 km s^-1 and the distance from SgrA* d_BH,peri in units of the pericentre distance. We performed the calculation for v_w = 50 km s^-1 and Ṁ_w = 10^-7, 10^-6 M_⊙ yr^-1, assuming five different fluxes of UV photons: * In the first case, which we will call SB, we assumed the numbers used by <cit.>, i.e., ϕ_50=1 and D_pc=1. This assumption is equivalent to having a single O5 star at a constant distance of 1 pc.
* In the second case, MLlow, we took numbers from <cit.> for the entire central parsec; these numbers roughly match the values provided in <cit.> and <cit.>. In particular, they assume ϕ_50 = 10^0.8 ≃ 6.31 for D_pc = 1. This is the lower estimate for the UV flux given by these authors. * <cit.> also took into account the concentration of the S-stars (of spectral class B) within the very central region of the Galactic Centre. They estimate these stars to produce a total ϕ_50 = 0.2, but for a region of D_pc ≃ 6×10^-3. This is their higher estimate, and we refer to it as MLhigh. * In the fourth case, Sh04, we assumed the flux used by <cit.> for the position of the cloud in the year 2004. We consider the values derived by this author to be the most reasonable ones, since they are obtained by calculating the contribution of the main Wolf-Rayet stars in the young cluster, explicitly taking into account their positions (from <cit.> and <cit.>) and their temperatures and luminosities (from <cit.>). In 2004, F_UV = 3×10^4 erg s^-1 cm^-2. If we crudely divide this value by the ionization energy of the hydrogen atom, we get the number flux of ionizing photons ϕ_50/D_pc^2 ≃ 131. * In the last case, Sh14, we adopted the flux assumed by <cit.> at G2's pericentre, namely F_UV = 5.7×10^4 erg s^-1 cm^-2. Close to pericentre, the flux increases due to the contribution of the star S2. Dividing by the ionization energy of the hydrogen atom, we get ϕ_50/D_pc^2 ≃ 249. In Fig. <ref> we plot the results of our simple analytical calculation. As visible in the central panel, the size of R_in first increases as G2 gets closer to SgrA* and then decreases at smaller distances. This is the result of two competing effects, i.e., the decrease of available ionizing photons and the increasing density (and number of recombinations) in the outer layer of the free-wind region as the stagnation radius shrinks. These two different branches are mathematically visible in Eq. <ref> and <ref>, as asymptotic branches ∝ R_out^-2 ∝ d_BH^-2 for large distances and ∝ R_out ∝ d_BH for small ones (see also the upper panel in Fig. <ref>). It is also interesting to note that the transition between these two branches moves to larger values of d_BH for smaller values of ϕ/D^2. On the other hand, the lower panel of Fig. <ref> shows that the Brγ luminosity is a monotonic function of R_out and d_BH. This is easily understandable from Eq. <ref>: as the Brγ luminosity is directly proportional to the number of recombinations (i.e., the right-hand side of the equation), it is also ∝ R_out^2. As just described, the evolution of R_in, R_in/R_out and the Brγ luminosity as a function of the distance from the black hole is a direct result of the previous equations, hence of our assumptions. The important result is that these quantities strongly depend on ϕ/D^2, spanning orders of magnitude for the values of d_BH corresponding to the observations (grey vertical bands). For this reason, instead of the derived R_in, we adopted r_em as a free parameter in the hydrodynamic simulations in Section <ref>.
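As a concrete illustration of these trends, the short script below evaluates the scaled form of the equation for R_in given above; it is our own sketch (not the code used to produce Fig. <ref>), with the five ϕ_50/D_pc^2 values following the cases just listed:

import numpy as np

def r_in(d_bh, phi50_over_dpc2, mdot_m7=1.0, vw_50=1.0):
    """Inner ionized radius [cm] from the scaled ionization balance;
    d_bh is the distance from SgrA* in units of the pericentre distance."""
    photon_term = 1.691e-2 * phi50_over_dpc2 * vw_50**3 * d_bh**2 / mdot_m7
    stagnation_term = 44.099 / (np.sqrt(mdot_m7 * vw_50) * d_bh)
    return 3.0e15 / (photon_term + stagnation_term)

# the five UV-flux cases (phi_50 / D_pc^2) discussed in the text
fluxes = {"SB": 1.0, "MLlow": 6.31, "MLhigh": 0.2 / 6.0e-3**2,
          "Sh04": 131.0, "Sh14": 249.0}
for name, phi in fluxes.items():
    radii = ["%.2e" % r_in(d, phi) for d in (1.0, 10.0, 100.0)]
    print(f"{name:7s} R_in at d_BH = 1, 10, 100 d_peri: {radii} cm")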
§.§.§ Caveats The presented calculation contains a large number of approximations. First of all, we neglect the role of shielding due to the dense shocked material around the free-wind region; this can lead to substantially lower ionization in the free-wind region. Another extreme simplification is related to the assumed spherical symmetry of the calculation. Our Eq. <ref> is based on the idea that G2 is hit at R_out by ϕ/D^2 photons coming from all directions, which is, of course, not the case. In reality, every surrounding young star contributes to the illumination of G2 in a different way, depending on its spectral class and position, hence making the flux not isotropically distributed over G2's surface. In addition to this, the pressure contributions (particularly the ram and tidal ones) shaping the free-wind region will make the free-wind surface asymmetric <cit.>. Another caveat is related to the assumptions that ionizations and recombinations both occur instantaneously and that all the photons impinging on R_out are totally absorbed by the free-wind region. However, as shown in <cit.>, the timescales for these two processes might be very different. In our case, the recombination timescale is t_rec(r) = 1/(α_rec n_i(r)) ≈ 5×10^5 r_14^2 v_w,50/Ṁ_w,-7 s, where r_14 is r in units of 10^14 cm. The ionization timescale is t_ion = 4π D^2/(σ_H ϕ) ≈ 2×10^5 D_pc^2/ϕ_50 s, where σ_H ≃ 6.3×10^-18 cm^2 is the cross section for neutral hydrogen and photons with energy 13.6 eV. An equilibrium between ionizations and recombinations can be assumed if t_rec ≈ t_ion, which does not always hold for our assumptions. We also ignore that a certain number of photons (i.e., those passing through the outer envelope tangentially) might escape the free-wind region before ionizing any atom. Finally, other physical processes could be important as well, such as collisional ionization from the wind <cit.> or absorption of Lyc photons by the dust embedded in G2. As already stated, the number of caveats listed here does not allow a strict use of the calculation for the modeling of G2's emission. Nonetheless, it powerfully shows that the contribution of the inner part of the outflow to the Brγ luminosity of G2 is not trivial.§.§ Resolution and numerical issues The constraining power of the absolute value of the Brγ luminosity has to be reconsidered after the systematic study of the present 3D simulations. First of all, the shocked material has a very filamentary nature; hence, if the filaments are not properly resolved, the density of the shocked material is reduced significantly. Furthermore, as shown in Fig. <ref> and <ref>, the shocked material efficiently mixes with the atmosphere, moving to higher temperatures. In particular, as visible in Fig. <ref>, at early stages (e.g., in year 1950.25) the material closest to the free-wind region is at temperatures of around 10^4 K, i.e., the temperature of the injected material. However, the mixing becomes faster and faster as the source approaches pericentre: at 2013.75, most of the shocked material immediately increases its temperature, and a relatively small fraction is at temperatures below 10^5 K. The evolution of the phase plots for the shocked material in Fig. <ref> might be misleading, since mixing with lighter material should also reduce its density with time. However, the diagram shows that the most luminous material increases its density (i.e., it moves to the right of the plot) as the source approaches pericentre; this is simply due to the fact that the outflow moves faster and encounters higher density/pressure material on its way to the black hole. So, it is compressed more, reaching higher densities, as it gets closer to SgrA* (see also Fig. <ref>). On the other hand, the inner atmosphere is also hotter; hence, the mixing leads to a large spread of the shocked material in the density-temperature phase space.
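Both this phase-space view and the luminosity histograms discussed next are straightforward to construct from the simulation output. A minimal version of this post-processing (our own sketch, assuming per-cell density, temperature and volume arrays for the wind-tracer material) is:

import numpy as np

MU_E, MU_I = 1.17, 1.29          # electron and ion mean weights
M_H = 1.6726e-24                 # hydrogen mass [g]

def brgamma_luminosity(rho, T, vol):
    """Per-cell case B Br-gamma luminosity [erg s^-1] from mass density
    [g cm^-3], temperature [K] and cell volume [cm^3]."""
    n_i = rho / (MU_I * M_H)
    n_e = rho / (MU_E * M_H)
    return 3.44e-27 * (T / 1.0e4)**-1.09 * n_i * n_e * vol

def luminosity_vs_density(rho, T, vol, nbins=60):
    """Collapse the (density, temperature) phase diagram along the
    temperature axis: total luminosity per logarithmic density bin."""
    lum = brgamma_luminosity(rho, T, vol)
    bins = np.logspace(np.log10(rho.min()), np.log10(rho.max()), nbins + 1)
    hist, _ = np.histogram(rho, bins=bins, weights=lum)
    return bins, hist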
The mixing in our simulations is resolution dependent, since its nature is partly numerical. This has already been shown in <cit.> for the diffuse cloud scenario. In the case of the present outflow model, mixing and resolution limitations are significant all along the orbital evolution, given the highly filamentary nature of the emitting material, and the effect of poor resolution is even less predictable. The upper panel of Fig. <ref> shows histograms of the luminosity of our standard model as a function of the density of the emitting material, for 2003.25 (i.e., the central panel of Fig. <ref> collapsed along the temperature axis). The same histogram is also plotted for the simulation stLOWRES (the same as our standard model, but with half the resolution) and for the simulation st2D (the same as the standard model, but on a 2D cylindrical fixed grid; see Table <ref>). The luminosity distribution peaks[The luminosity in each bin is given by the product of the squared density and the volume occupied by gas at that density. The latter is a decreasing function of the density, explaining the presence of a peak in the histogram.] around densities of roughly 10^-19 g cm^-3 for the outflow parameters of these three simulations. Though small, some discrepancy occurs between the two 3D simulations at different resolutions, particularly close to the peak of the distribution. This can account for the difference in the luminosity evolution close to pericentre (see the lower panel of Fig. <ref>) between our standard model and model stLOWRES. The effect of resolution on the luminosity evolution is similar to the one visible in the resolution study of <cit.>. A far more significant difference occurs, instead, between the luminosity curves of our standard model and its two-dimensional counterpart st2D. As already discussed in <cit.>, simulations in 2D cylindrical coordinates suffer from some intrinsic numerical issues: in particular, the accumulation of material towards R=0, due to the tidal field of the SMBH, is artificially enhanced by the cylindrical symmetry and by the necessary reflective boundary conditions close to the axis of symmetry. For this reason, in order to compare with our standard model, we removed all the material at R < 1.25×10^14 cm. However, a significant contribution to the luminosity of st2D still comes from densities higher than 2×10^-19 g cm^-3, while this is not the case for the 3D standard model, showing that the artificial compression towards the axis of symmetry might have an effect even at larger distances from the axis. Furthermore, given the cylindrical symmetry, in the calculation of the luminosity the volume of every cell is obtained by a rotation of the cell around the z axis (i.e., every cell has to be thought of as a 3D annulus). As a consequence, the dense Rayleigh-Taylor fingers forming immediately around the free-wind region might have a larger volume filling factor, compared to their more realistic 3D counterparts. All in all, there is a factor ≈2.5 difference between the standard model and st2D, which forces us to also reconsider the luminosity curves obtained in the preliminary study of <cit.>. §.§ Comparison with previous works In addition to the adopted dimensionality and coordinate system of the simulations, there are a few additional differences between the simulations in <cit.> and the present ones. The first one is that the orbit has been updated from the one derived by <cit.> to the most recent one derived by <cit.>.
The most up-to-date orbit has an orbital time and an apocentre distance that are roughly a factor of two larger than those of the orbit used in the previous 2D simulations. This had the unfortunate effect of enlarging the computational domain and doubling the integration time of our simulations, making the new simulations even more computationally expensive than previously expected. Further, compared to <cit.>, the mass-loss rate of our standard model has increased by roughly a factor of 5. This has been driven by the need to match the PV diagrams shown in Fig. <ref>. In fact, the parameters of the best model in <cit.> roughly correspond to the LMDOT3D model described in Sec. <ref>, which is not able to match the size of G2 in the observed PV diagrams (see Fig. <ref>). The increase in the mass-loss rate of the best model is probably due to the more accurate comparison performed here, as well as to intrinsic differences between 2D and 3D simulations. Additionally, there are major differences in the absolute value of the luminosity, as discussed in Sec. <ref>, due to differences in resolution and perhaps to intrinsic differences between 3D Cartesian and 2D cylindrical coordinates. The choice of starting the simulations at apocentre also makes the present results very different from the ones in <cit.>. In fact, the ≈200 yr evolution of our models <cit.> leads to a much more extended distribution of gas, as a result of the prolonged stripping of the RTI filaments of shocked wind. This larger occupied volume is fundamental for matching the observed PV diagrams. However, no major instability forms in the simulations of <cit.>, probably as a result of the too short evolution time of their models. Major differences between our simulations and the ones in <cit.> also arise around pericentre, where the bow shocks in their simulations, particularly those starting 3 years before pericentre, become broader and underdense after the pericentre passage. This might be a consequence of their more sophisticated treatment of radiative cooling. The difference might also arise from the fact that, for those simulations, <cit.> did not artificially stabilize their atmosphere. This allows the bow shock dynamics to be computed more properly, but has the side effect of letting the atmosphere become convectively unstable <cit.>. Our work is also complementary to that by <cit.>, in which the evolution of the stellar wind shock is studied by means of the analytic solution of <cit.>. Such estimates have the advantage of providing a simple but “linear” description of the interaction between the wind and the surrounding atmosphere; however, they lack the more complex hydrodynamic processes that already arise in our simulations, even with our relatively simple physical treatment. We must also point out that, besides lacking a detailed procedure to mimic the instrumental effects on the processing of the simulation, the mock Brγ maps and the PV diagrams shown in <cit.>, <cit.> and <cit.> include all the outflow material present in the simulations. This choice is arbitrary, since it depends on how much of the free-wind region is actually resolved in the simulation, and can produce PV diagrams with Brγ fluxes spanning several orders of magnitude, in evident inconsistency with the observations (compare to the upper panel of Fig. <ref>). Furthermore, as discussed in Sec. <ref> and further on, the Brγ luminosity of a 1/r^2 density distribution depends on how much of it is actually ionized.
Our more detailed post-processing of the simulation clearly shows that the results must be interpreted more carefully when dealing with this scenario. Finally, this work (and the previously mentioned ones) focused on reproducing only G2, while the study presented in <cit.> tries to use the same model to simultaneously explain the presence of G2 and the following G2t. In this regard, even considering the weak constraining power of the Brγ luminosity, the present study shows that there should be a significant effect of the outflow parameters on the total luminosity of the shocked gas. Hence, the present standard model and the one in <cit.> are mutually exclusive. The model described in <cit.> has the advantage of being able to give G2 and G2t a common origin (even though the physical link between these two components is yet to be fully proven), but it has to be regarded as a proof-of-concept study, and fine tuning of the model parameters is necessary to meet all observational constraints. The present standard model, on the other hand, is only able to reproduce G2, but it seems to have a Brγ luminosity that is closer to the observed one. §.§ On the nature of the source As already shown in <cit.>, the spectral properties of G2 exclude its association with a massive star, such as the S-stars. At the same time, the mass-loss rates of our models are all too high for typical winds of low-mass stars in their main-sequence phase. Low-mass stars in their asymptotic giant branch or red giant phases might have comparably high mass-loss rates <cit.>. Stars in these phases have a giant envelope, which usually leads to outflow velocities of the order of their escape velocities, i.e., a few tens of km s^-1. This is indeed the case for the standard model. However, those stars would appear too bright in the K_s band, compared to G2, and this possibility can be excluded. The most appealing possibility is that the source is instead a young star, such as a T Tauri <cit.>. These objects also produce winds, but they have much lower luminosities in the K_s band <cit.>. However, the parameters of the present 3D standard model (Ṁ_w = 5×10^-7 M_⊙ yr^-1 and v_w = 50 km s^-1) are somewhat at the extreme end of the observed ranges for T Tauri winds, which are Ṁ_w = [10^-12, 10^-7] M_⊙ yr^-1 and v_w = [50, 300] km s^-1 from the observations <cit.>. Given the short evolution time of our models (≈200 yr), the standard model parameters could still correspond to a phase of exceptionally high mass loss. Indeed, there is a well established correlation between mass accretion and outflow rates for T Tauri objects, possibly a consequence of outflows being launched from the proto-stellar accretion disk <cit.>. In such a crowded environment, and given the strong tidal field of the black hole, the accretion (and outflow) rates might be enhanced compared to those in typical star-forming regions. Extremely massive outflows have been discovered, e.g., in the case of DG Tau <cit.>. This problem can also be partially “cured” by assuming that the outflow is biconical, i.e., that it does not occupy the full solid angle. As widely shown in the literature, this is indeed a much more realistic assumption for the outflows from this kind of young stellar object <cit.>. In this case, Eq. <ref> becomes R_out,conical = [Ṁ_w v_w/(4π (1-cosθ_open) P_amb)]^1/2, where θ_open is the half opening angle of the outflow.
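A quick numerical check (our own) of the solid-angle factor entering this expression, for a few half opening angles:

import numpy as np

for theta_deg in (10.0, 30.0, 60.0, 90.0):
    factor = 1.0 - np.cos(np.radians(theta_deg))
    print(f"theta_open = {theta_deg:4.0f} deg -> (1 - cos theta_open) = {factor:.3f}")

For θ_open = 10^∘ the factor is ≈0.015, i.e., of order 10^-2, as used in the estimate that follows.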
For the same value of R_out, then, in the case of a biconical outflow, Ṁ_w can be a factor (1-cosθ_open) (i.e., up to a factor ≈10^-2 for half opening angles as small as ≈10^∘) smaller compared to the isotropic case tested here. As shown in Sec. <ref>, the stagnation radius is, to zeroth order, responsible for the size of the outflow; hence, to get sizes similar to the observed ones, lower mass-loss rates could be needed, compared to the ones found in our current simulations. However, the orientation of the biconical outflow with respect to the orbit is also likely to affect the distribution of the emitting material. This would add a further parameter to the present scenario, and additional dedicated simulations would be needed to clarify this issue.§.§ Advantages and disadvantages of a compact source scenario As pointed out by the present and previous studies <cit.>, the compact source scenario is a highly parametric model, which makes the results strongly dependent on the assumptions made. Its intrinsic properties also make its study numerically challenging. Occam's razor would then suggest that a diffuse cloud scenario <cit.> is to be preferred. However, more parameters can always offer more possibilities to reconcile the model and the observations. For example, <cit.> showed that the total mass of the dust embedded in G2 is probably too low to make this component dynamically important. On the other hand, <cit.> showed that the dust stays compact even close to pericentre, compared to its gaseous counterpart. It is not clear why this should happen in a diffuse cloud scenario. An outflow nature for G2 has the advantage of explaining both the extended (in position and velocity) nature of the gas component and the compactness of the dusty emission, if the latter is associated with a central young stellar object. Another open question is related to the high eccentricity of G2's orbit. This could be well explained by the formation of G2 in colliding winds in the disk, if G2 is a clump of diffuse gas <cit.>. Compared to stars, gas can more easily lose angular momentum (and energy), and the collision of stellar winds represents a very effective process in this sense. However, the inner parsec is also very crowded with young stars (as young as T Tauri stars, see Sec. <ref>), and the S-stars can have similar orbital semi-major axes and can reach similarly high eccentricities. A connection to a star could then be possible. The binary merger model of <cit.> could explain the dust properties and the high eccentricity of G2 <cit.>, but has so far completely neglected the existence of a significant gaseous component associated with it. Outflow models are often invoked to explain the latter (see Sec. <ref>), but they often rely on (sometimes overly simple) analytical estimates. Despite the many limitations discussed in this section, our study represents the most complete attempt to include the several (often non-linear) processes involved in a compact source scenario and to compare to the observed properties of G2, e.g., by means of accurate mock PV diagrams. Concerning the connection of G2 to G2t and G1 (see Sec. <ref>), <cit.> showed that these objects might result from the stripping of the outer envelope of a giant star by the tidal field of SgrA*. Hydrodynamical simulations of tidal disruptions of stars by SMBHs indeed show that these events might lead to the formation of bound debris, streaming towards the SMBH on highly eccentric orbits <cit.>.
The fragmentation of such a streamer might have led to G1, G2 and G2t. The formation of multiple clumps in colliding winds <cit.> is also a very reasonable explanation. G1 and the G2+G2t complex have very similar orbital and emission properties. This naturally suggests a common or similar origin. Proving that they were all born at the same location is less straightforward; for example, G2's pre-pericentre and G1's post-pericentre orbits do not coincide perfectly and have very different apocentre positions. To reconcile the two, some loss of energy and angular momentum could have occurred, mainly at pericentre, due to the interaction of these clumps with the surrounding atmosphere <cit.>. However, <cit.> showed that G2 keeps its original orbit even after pericentre, excluding the latter hypothesis of a strong drag of the atmosphere on these clumps. G2 and G1 could still be related, but the new findings show that these objects did not have exactly the same orbit before pericentre. As shown by the HV3D model presented here and by the one in <cit.>, an outflow with low enough density can efficiently form a tail of stripped gas (with properties similar to the observed G2t), while the source keeps on moving (and losing new material) on a purely Keplerian orbit. The state-of-the-art models of G2's nature are all able to reproduce some of G2's properties, but all show limitations or are unable to explain other observables. Additionally, the pericentre evolution of G2 in simulations for the diffuse cloud scenario <cit.> and in our simulations looks very similar, and the comparison to mock PV diagrams <cit.> shows that both models might be reconciled with observations. Hence, no final conclusion can be drawn yet. The smoking gun for understanding whether a source is embedded in G2 could come in the next 5-10 years, when a decoupling between it and the previously outflowing gas might happen after pericentre, due to the increased cross section of the latter. At that point, the hydrodynamical interaction with the accretion flow would act on G2, but not on its central source and the newly emitted material, leading to the decoupling. The luminosity of the outflow material after pericentre can strongly depend on processes that cannot be reliably captured by the present simulations, particularly during and right after the pericentre passage (see the discussion in Sec. <ref>). The gas lost by the source before the pericentre passage in our simulations <cit.> is decelerated by the hydrodynamical drag of the external accretion flow. At the same time, it heats up, partially due to mixing with the outer hot material, eventually leading to a substantial drop of its luminosity. Unfortunately, the mixing in the present simulations is mainly numerical. For this reason, no strong quantitative statement can currently be made, e.g., on the luminosity of old and new material or on the exact time of their decoupling. For our model we can, however, predict a non-symmetric behaviour of the gas along its orbit around the pericentre position (as opposed to what is expected for a purely ballistic diffuse cloud), and a “rebirth” of G2.§ SUMMARY In this work we presented 3D AMR simulations of a “compact source” scenario for G2, in which its gas component is produced by an outflow from a central source.
Such a study is a natural follow-up of the work by <cit.>, which was performed by means of higher-resolution 2D simulations. We can draw the following strong conclusions:* Outflows that are relatively massive (Ṁ_w = 5×10^-7 M_⊙ yr^-1) and slow (50 km s^-1), compared to those of main-sequence stars, are needed to reproduce the emission properties of G2; furthermore, the central source must be a low-mass star, due to observational constraints. This suggests that a possible source for G2 is a young stellar object, possibly a T Tauri star.* The appearance of such an outflow in the PV diagrams is strongly dependent on how much of its unperturbed region is actually emitting; if the material at distances smaller than roughly 100 AU from the source dominates the emission, G2 would always look too compact, both in size and in velocity, compared to the observations. * A reasonable comparison to the current SINFONI observations can be obtained both by the diffuse cloud simulations in <cit.> and by the present ones. However, we might be able to understand whether G2 is generated by a source or is a simple gas-dust diffuse cloud in the next 5-10 years. For the case of a compact source, we should then be able to observe a decoupling between the dust and gas components, and a new and “fresh” G2 should later reform around the dusty one. Studying the “compact source” model presents more complications, compared to the “diffuse cloud” one. Still, the present model can reproduce the Brγ observations, and it has the advantage of being able to explain the simultaneous compactness of G2's dust component and extendedness of its gaseous one.§ ACKNOWLEDGEMENTS This project was supported by the Deutsche Forschungsgemeinschaft (DFG) priority program 1573 “Physics of the Interstellar Medium” and the DFG Cluster of Excellence “Origin and Structure of the Universe”. Computer resources for this project have been provided by the Leibniz Supercomputing Center under grants h0075 and pr86re. Alessandro Ballone would like to thank Michela Mapelli, Andrea Gatto, James Guillochon, Jorge Cuadra, Diego Calderón and his PGN colleagues for useful discussions. Most of the simulation post-processing was carried out with the yt toolkit <cit.>.
{ "authors": [ "A. Ballone", "M. Schartmann", "A. Burkert", "S. Gillessen", "P. M. Plewa", "R. Genzel", "O. Pfuhl", "F. Eisenhauer", "M. Habibi", "T. Ott", "E. M. George" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170626180329", "title": "3D AMR hydrosimulations of a compact source scenario for the Galactic Centre cloud G2" }
Using ECC DRAM to Adaptively Increase Memory Capacity

Yixin Luo^† Saugata Ghose^† Tianshi Li^† Sriram Govindan^ Bikash Sharma^ Bryan Kelly^ Amirali Boroumand^† Onur Mutlu^*,†
^†Carnegie Mellon University ^Microsoft Corporation ^*ETH Zürich

December 30, 2023

Modern DRAM modules are often equipped with hardware error correction capabilities, especially for DRAM deployed in large-scale data centers, as process technology scaling has increased the susceptibility of these devices to errors. To provide fast error detection and correction, error-correcting codes (ECC) are placed on an additional DRAM chip in a DRAM module. This additional chip expands the raw capacity of a DRAM module by 12.5%, but applications are unable to use any of this extra capacity, as it is used exclusively to provide reliability for all data. In reality, there are a number of applications that do not need such strong reliability for all their data regions (e.g., some user batch jobs executing on a public cloud), and can instead benefit from using additional DRAM capacity to store extra data. Our goal in this work is to provide the additional capacity within an ECC DRAM module to applications when they do not need the high reliability of error correction. In this paper, we propose Capacity- and Reliability-Adaptive Memory (CREAM), a hardware mechanism that adapts error-correcting DRAM modules to offer multiple levels of error protection, and provides the capacity saved from using weaker protection to applications. For regions of memory that do not require strong error correction, we either provide no ECC protection at all, or provide error detection in the form of multi-bit parity. We evaluate several layouts for arranging the data within ECC DRAM in these reduced-protection modes, taking into account the various trade-offs exposed by exploiting the extra chip. Our experiments show that the increased capacity provided by CREAM improves performance by 23.0% for a memory caching workload for databases, and by 37.3% for a commercial web search workload executing production query traces. In addition, CREAM can increase bank-level parallelism within DRAM, offering further performance improvements.§ INTRODUCTION Error-correcting DRAM modules are heavily used in servers and data centers today, as DRAM has become increasingly susceptible to errors due to continued process technology scaling <cit.>. By storing error-correcting codes (ECC) within error-correcting DRAM modules, error detection and correction are performed in hardware. Today, most error-correcting (or ECC) DRAM modules employ single error correction, double error detection (SECDED) codes <cit.>. Error correction is performed when a memory request reads or writes data.
For widely-used DDR3 and DDR4 DRAM, these requests are performed 64 bytes at a time. In order to limit the width of the off-chip bus between the processor and the DRAM module, this data is sent in several smaller data bursts (e.g., eight 64-bit data bursts for DDR3 and DDR4 DRAM). For every 64-bit data burst, an 8-bit SECDED code is transmitted alongside the data to the memory controller, which interfaces between the processor and the DRAM module. For each burst, the 8-bit SECDED code is used to determine if an error exists in the 64-bit burst, and if so, an error correction algorithm is applied within the controller to correct the data. In all, for the eight bursts of data sent, an ECC DRAM module contains 8 bytes worth of correction information. On the module, this correction data is stored on an additional DRAM chip, which operates in lockstep with the DRAM chips on the module that contain the data, and provides error correction for all of the data in memory. An ECC DRAM module provides reliability at the expense of additional memory capacity. The key question we ask in this study is: Can we use the additional capacity of the extra chip in ECC DRAM when memory regions of applications do not need the reliability it provides? We make two key observations about the trade-off between reliability and capacity. First, there are many applications that benefit from additional DRAM capacity. Page faults are costly operations, taking hundreds of microseconds to retrieve data not mapped in DRAM. Several works have demonstrated that application performance improves significantly with additional DRAM capacity, as the additional capacity helps to greatly reduce the number of page faults that take place <cit.>. We confirm this behavior when we analyze data-intensive server workloads, which include a commercial web search application from Microsoft's production data centers. Second, there are many instances where workloads or memory regions may not benefit from error correction. This primarily happens for two reasons: (1) Several applications are resilient to errors, or are of low importance to server owners, and therefore do not require full error correction <cit.>. For example, for WebSearch, a very small number of incorrect query responses does not significantly affect user quality of service <cit.>. Likewise, a cloud service provider may have little need to ensure that client virtual machines (VMs) operate reliably, and could offer reliability-free VMs at a lower price, to fit a greater number of VMs into each machine for greater revenue. (2) Certain regions of memory may not require full error correction. At the hardware level, newer DRAM may be less susceptible to faults, and due to process variation, there may be regions of DRAM that have very low error rates <cit.>. At the software level, some data regions of an application may not need any correction either <cit.>. Our goal in this work is to make the additional capacity within ECC DRAM modules available to applications when SECDED reliability is not required during their execution, while continuing to provide error correction for applications that need reliability. Figure <ref> shows the space of applications across the dimensions of reliability and capacity, and shows several example applications within each quadrant of the space.
For applications (or memory regions) that require high reliability, but do not benefit from additional data capacity, ECC should continue to work as it has in the past, providing quick hardware error correction. For applications that do not require high reliability, but benefit from additional data capacity, we aim to convert the space used by ECC data in DRAM into additional data capacity. For those applications that require reliability and benefit from capacity, we aim to support a lower-strength reliability mechanism that allows for some, but not all, of the ECC capacity to be converted into additional data capacity. At a finer granularity, the reliability requirements of memory regions also vary <cit.>. To this end, we propose Capacity- and Reliability-Adaptive Memory (CREAM), a hardware mechanism that takes advantage of the additional DRAM capacity that currently goes underutilized for applications (or memory regions) that do not require high reliability. CREAM provides two capabilities. First, it converts a portion of the space in an ECC DRAM module into non-ECC mode, freeing up the space in the additional ECC chip so it can store application data. We propose three solutions that expose all of this capacity to applications: (1) a method that requires no changes to the ECC DRAM module, using additional reads and writes issued by the memory controller to access the extra space; (2) a method that adds simple logic to the DRAM module to reduce the write overhead to the extra space; and (3) a method that reorganizes the entire data layout so that, instead of accessing nine chips at a time in each of the eight banks, we access only eight chips at a time, allowing us to use the leftover chips as an additional DRAM bank. Second, CREAM converts part of the space in an ECC DRAM module into parity mode, where parity checks are provided instead of full-blown SECDED correction, allowing applications to maintain lower-strength reliability while still benefiting from additional data space. We perform two studies to gauge the effectiveness of CREAM. First, we evaluate CREAM on large-memory workloads. We execute production query traces on a commercial web search application from Microsoft, and find that the 12.5% increase in DRAM capacity provided by CREAM improves the workload's overall system performance by 37.3%. We also find that CREAM improves the performance of a memcached database workload by 23.0%, including all overheads. Second, we find that the increased bank-level parallelism allows CREAM to provide performance gains (0.8% for one of our workloads, and 2.4% on average across 40 multiprogrammed workloads), on top of the gains from having additional effective memory capacity. In this work, we make the following contributions: * We provide a simple and practical mechanism to efficiently harness part or all of the additional space previously set aside for error correction within an ECC DRAM module, providing additional data capacity to applications and memory regions that do not require high reliability. * We propose three methods of increasing data capacity by 12.5% when applications or memory regions do not require error correction or detection. One of these methods increases both DRAM capacity and bank-level parallelism, providing additive performance improvements. * We propose a method of exposing additional data capacity without fully eliminating reliability, by supporting multi-bit parity for lightweight error detection. We evaluate this method quantitatively.
* Our evaluations with major data-intensive applications show that using the additional space otherwise dedicated to ECC improves performance significantly, mitigating the high penalty of page faults.§ BACKGROUND To understand the opportunities available for expanding memory capacity when strong reliability is not required, we first provide necessary background on DRAM organization and error correction. §.§ DRAM Organization DRAM communicates with the processor across a DRAM channel, an off-chip bus used to send DRAM commands and data. For DDR3 and DDR4 DRAM, this channel is only 64 bits wide, and is used to communicate a single piece of data at a time (known as a data burst). DRAM performs operations at the granularity of a 64-byte cache line. As a result, eight back-to-back data bursts are required to send a single cache line of data. Data requests are managed by a memory controller, which typically resides on-chip with the processor. The memory controller receives per-cache-line memory requests, and breaks these requests down into a series of DRAM commands that are issued to DRAM. A DRAM module (i.e., a DIMM, or dual inline memory module) is made up of several DRAM chips. Each chip has a fixed data width (i.e., the amount of data that it can transmit at any given time). For example, an x8 DRAM chip can transmit 8 bits of data at a time. Several of these chips work in lockstep to provide 64 bits of partial data from a single cache line, as shown in Figure <ref>. The chips working together in lockstep are known as a rank. For x8 DRAM chips, each rank contains eight chips, as shown in Figure <ref>. In order to work in lockstep, the chips within a rank share the command and address wires, ensuring that they all perform the same operation on the same location. Within each DRAM chip, data is stored within two-dimensional arrays of capacitive DRAM cells. The array is accessed one row at a time, and the row being operated on must be activated (i.e., opened), which brings the contents of the entire row into a row buffer. A memory request to a row already opened within the row buffer is known as a row buffer hit. In contrast, if a memory request wants to access a row other than the one currently open, it must first close the current row (precharge), and then activate the desired row; this is known as a row buffer miss. To increase the probability of a row buffer hit, data is mapped into the DRAM module to maximize row buffer locality, by ensuring that adjacent columns of data within the same row map to adjacent data within the same OS page.[Each row typically contains multiple OS pages, but to simplify our explanations without loss of generality, we assume throughout this paper that each DRAM row contains only a single page.] In part to increase row buffer locality, the two-dimensional cell array is split into multiple banks, each with its own row buffer. These banks can independently service requests in parallel (known as bank-level parallelism). In DDR3 DRAM, there are eight banks per chip, and since the chips within a rank operate in lockstep, there are effectively eight banks available in each rank (see Figure <ref>). DDR4 DRAM provides 16 banks per rank. §.§ Error Protection in Memory Occasionally, DRAM is susceptible to bit errors when data is being read or written <cit.>. These errors can either be hard (i.e., an intrinsic defect within the DRAM itself) or soft (i.e., a transient error that can occur due to phenomena such as cosmic rays) <cit.>.
Memory errors have the potential to greatly impact application stability. If a memory error goes undetected, it can lead to silent data corruption, which can alter critical data or cause a system crash. To mitigate these memory errors, a popular DRAM error correction mechanism, SECDED (single error correction, double error detection), is widely used in today's server memory <cit.>. SECDED can correct one error and detect two errors, using 8 bits of ECC information for every 64 bits of data, with low logical complexity. A common variant of DRAM that directly encodes SECDED in hardware is known as ECC memory, where all of the data within DRAM is protected. This allows error protection to be performed entirely in hardware as part of every memory request. For every 64-bit data burst during a request, an 8-bit SECDED code (stored in an additional DRAM chip) is also read out in lockstep, and transmitted back to the memory controller. Note that this expands the off-chip data bus to 72 bits. Within the controller, each data burst is checked using the SECDED code to detect whether an error has occurred; the controller either corrects the data, if it can, or notifies the system that data has been corrupted. Figure <ref> shows how data pages and ECC are laid out within an ECC DRAM module. To simplify our explanations, the data layout figures in this paper assume that (1) each DRAM row stores a single OS page, (2) there is a single DRAM channel, and (3) the DRAM channel contains only a single rank. In order to maximize row buffer locality (see Section <ref>), we arrange physical pages such that consecutively-numbered pages map to different banks. We show the data layout from two views: the first row across all banks (the top of Figure <ref>), and the first eight rows within Bank 0 (bottom). As mentioned above, providing SECDED codes for all of the data in DRAM requires manufacturers to add additional chips onto each DRAM module. The additional chip expands the raw capacity of the DRAM module by 12.5% (since we add 8 bits for every 64 bits of data). However, the effective DRAM capacity remains unchanged with respect to a DRAM module without ECC support, as this additional chip is used exclusively to store the error-correcting codes. § MOTIVATION: CAPACITY VS. RELIABILITY DRAM reliability currently takes a one-size-fits-all approach, providing strong error correction for all data, but this results in significant reliability over-provisioning, which impacts the revenue of cloud providers and hence the cost for customers. In this section, we identify that variability in reliability exists in data centers, and study opportunities to exploit this variability to optimize total cost of ownership (TCO). §.§ Asymmetric Reliability Requirements We find that there are two sources of the inherent asymmetry in reliability requirements: (1) server/cloud applications require varying levels of reliability based on several factors, and (2) there is heterogeneity in the reliability offered by the hardware itself. Application Resiliency Variability: Resiliency, or memory error tolerance, refers to the ability of server/cloud applications to cope with memory errors. Application resiliency can involve three important aspects: (1) tolerating the performance penalty of error detection or correction, (2) enduring potential data corruption from memory errors <cit.>, and/or (3) dealing with unavailability due to a server crash/reboot. Cloud applications are known to exhibit varied resiliency to memory errors <cit.>.
We observe variation in applications' resiliency across four dimensions:

* Application role: while certain applications, such as banking and in-memory databases, are highly sensitive to memory errors, applications such as stateless front-tier services or video streaming may be more tolerant;
* Criticality: OS/hypervisor regions may require high reliability, unlike guest virtual machines or user applications;
* Address space: certain parts of the application address space (e.g., stack/code regions) may be more sensitive to memory errors than others (e.g., heap/data regions); and
* Access mode: read-only/clean memory areas are more amenable to recovery from memory errors than written/dirty memory regions <cit.>.

These dimensions of variability can be leveraged to perform cost-effective memory hardware provisioning: mapping sensitive/critical regions to reliable memory hardware (with error correction), and highly-resilient regions to less protected memory hardware (with error detection or no protection). Note that application data resiliency can vary over time, due to changes in workload/client behavior, which may require these regions to be remapped to different memory hardware.

Hardware Health Variability: Large-scale studies have shown that DRAM within servers exhibits significant reliability variation <cit.>. DRAM errors have been shown to be concentrated within a small fraction of weak cells (i.e., error-prone or slow cells), and the behavior of errors has shown relative stability over time <cit.>. As a result, the reliability variation of DRAM can be used to perform long-term relaxation of memory protection. For example, healthy DRAM DIMMs may initially be provisioned with parity protection. As the health of the memory degrades, the DIMM can be upgraded to stronger protection (e.g., SECDED). Cloud platforms commonly employ simple memory health/error monitoring techniques <cit.>, which can be leveraged to adjust the level of error protection.

§.§ Leveraging Reliability Asymmetry for Capacity

A key consequence of the asymmetry-aware memory provisioning discussed in Section <ref> is the additional memory capacity that it offers compared to the current one-size-fits-all provisioning approach. As we discussed in Section <ref>, storage for SECDED data incurs a 12.5% overhead. Eliminating SECDED protection frees up this 12.5% of storage for additional data, while performing only error detection frees up 10.7% of additional memory.

Data centers can leverage this additional memory capacity to optimize TCO in two main ways. First, memory is often the bottleneck resource in determining the hosting capacity of a cloud platform <cit.>. An increase in memory capacity is likely to correspond to an increase in the number of virtual machines hosted on a cloud, which directly contributes to cloud revenue/profit. Second, the impact of memory capacity on application performance is well studied in the literature <cit.>: a small amount of additional capacity, when allocated to the right application, is known to provide non-linear performance improvements. Cloud platforms can offer opportunistic memory allocation (similar to ballooning <cit.>) to applications with high memory demand, resulting in improved application performance and customer satisfaction.

We quantify the performance improvement from additional memory capacity using an interactive WebSearch cloud application from Microsoft, running production search queries.
WebSearch stores several hundred gigabytes of search indexes in persistent storage, and uses DRAM as a cache for storing frequently-accessed index data. We can relax the ECC protection for WebSearch to gain a 12.5% capacity increase, as prior work has shown that web search applications can tolerate a large number of memory errors <cit.>.

Figure <ref> shows that memory capacity plays a crucial role in the workload's percentile latency.[For business reasons, we anonymize the latency and capacity numbers.] We normalize both the percentile latency on the y-axis and the load on the x-axis to their largest observed values. Each curve shows the percentile latency for WebSearch with a different memory size: w, x, y, and z. By comparing these curves, we make two observations. First, we look at these curves under the highest normalized load (1.0). We find that if error protection is eliminated, an approximately 12.5% increase in capacity results in a significant latency improvement (e.g., 67% from x to y, and 24% from w to x). It is well known that latency plays a crucial role in the revenue of cloud workloads <cit.>. Thus, it is desirable to keep the percentile latency low. Second, we look at the highest load that guarantees a low percentile latency (e.g., 20% on the y-axis). We find that, by increasing memory capacity by about 12.5%, the load capacity for WebSearch doubles.

§.§ Need for Dynamic DRAM Error Protection

As we can see, there are tangible benefits to exploiting reliability variation in DRAM to increase its capacity. Realizing these capacity benefits requires the server/cloud to offer heterogeneous and configurable error protection in memory. Though it is possible to statically provision error protection by using different memory hardware across servers/clusters, this approach has two key limitations: (1) the optimal amount of memory allocated for a certain level of protection may vary over time due to changing workload and hardware behavior, which could result in under- or over-provisioning when using static partitioning; and (2) sourcing server hardware components relies on the pricing advantages associated with procuring commodity components in bulk, which will be disrupted if DIMMs with different reliability schemes must be procured.

We envision a cloud that can dynamically configure its memory resources, both within and across servers, to offer any combination of memory error protection based on varying application/hardware demands. Our goal in this work is to design a mechanism that can dynamically repartition a single type of DRAM to support multiple reliability schemes.

§ CREAM DESIGN

As we saw in Section <ref>, there are several applications that do not require error correction, and can benefit from additional DRAM capacity. However, while ECC DRAM provides additional raw capacity within each DRAM module to store error-correcting codes, this capacity cannot be used by applications that do not require error correction. In this work, we propose Capacity- and Reliability-Adaptive Memory (CREAM), a hardware mechanism that allows applications without strong reliability requirements to exploit the additional ECC DRAM capacity to store more user data (and reduce the number of page faults).
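The capacity figures quoted in this section follow from simple arithmetic on the module's 9:8 raw-to-data chip ratio. The following minimal sketch reproduces them (illustrative only; it assumes one parity bit per 64-bit data burst for the detection-only scheme, as in our parity design):

```c
#include <stdio.h>

int main(void)
{
    /* An ECC DIMM has 9 chips for every 8 chips of data in a
     * non-ECC DIMM, so its raw capacity is 9/8 of the baseline
     * effective data capacity. */
    double raw = 9.0 / 8.0;

    /* Correction-free: all raw capacity can hold data. */
    double no_protection = raw;                  /* 1.125x */

    /* Detection-only: one parity bit per 64 data bits, so the
     * data capacity d satisfies d * (1 + 1/64) = raw. */
    double detection_only = raw / (1.0 + 1.0 / 64.0);

    printf("correction-free gain: %.2f%%\n",
           (no_protection - 1.0) * 100.0);       /* 12.50% */
    printf("detection-only gain:  %.2f%%\n",
           (detection_only - 1.0) * 100.0);      /* 10.77%, i.e. ~10.7% */
    return 0;
}
```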
CREAM exposes the additional DRAM capacity by rearranging how data is stored in a portion of the ECC DRAM. In CREAM, part of the DRAM supports error detection or no correction/detection, for applications, memory regions, or highly-reliable DRAM that do not require strong protection, while the rest of the DRAM continues to support SECDED correction for those that require high reliability. The size of the two parts can be adjusted dynamically, based on the mix of applications being run on the server, and on the health of the DRAM.

Figure <ref> shows how data is traditionally stored alongside the SECDED code within an ECC DRAM. The layout of data remains unchanged for the high-reliability portion of DRAM in CREAM. We propose several solutions for rearranging data when no correction or detection is required (Section <ref>), each of which has distinct advantages and overheads. For all of these solutions, the effective DRAM capacity increases by 12.5% within the unprotected region. We also propose a solution that supports error detection (Section <ref>), which can increase the DRAM capacity within the region by 10.7% while protecting against silent data corruption.

To support two regions of memory with different levels of reliability, CREAM requires additional, low-cost hardware support (Section <ref>). Small modifications are needed within the memory controller to make it aware of the change in hardware layout. Several, but not all, of our solutions require a small bridge chip on the DRAM DIMM to enable rank subsetting (i.e., decoupling the chips within a rank so that not all of them operate in lockstep) to optimize performance. Prior work has shown that rank subsetting can be enabled by a bridge chip at low cost <cit.>. On the software side, the OS page allocator must be informed about the additional physical pages available in DRAM, and allocation decisions must now take the reliability of a physical page and the required reliability of applications into account; we consider such changes to be beyond the scope of this work. CREAM does not require any changes to the virtual memory management within the processor, or to the applications whose data resides in DRAM.

§.§ Correction-Free Memory Regions

In conventional ECC DRAM, even when correction is not required, each read or write command transfers 72 bytes (64 bytes of data and 8 bytes of ECC information) between DRAM and the memory controller. By disabling the ECC logic in the memory controller, the fetched ECC information is simply ignored. In such a scenario, disabling ECC brings only minimal latency benefits (avoiding the short ECC decoding latency), and does not provide any additional DRAM capacity. In CREAM, we instead propose to expose this capacity so that applications can use it to store more data in DRAM. We next discuss several alternatives for organizing the data when this capacity is exposed.

§.§.§ Solution 1: Packed Data Layout

We first try a naive approach to utilizing the extra space available, which we call the packed data layout. Since the newly-available capacity exists on the DRAM chip that used to store the ECC data (Chip 8 in Figure <ref>), our goal is to simply pack additional data pages into this chip, keeping the layout of existing physical pages untouched. As we shall see, this approach requires no modifications to existing ECC DRAM.

Figure <ref> illustrates how we use the extra space. The figure shows the data layout for the first DRAM row (i.e., Row 0) of each bank. Each column of the table represents a single chip.
Each entry in the table shows the physical page number of the data stored in the corresponding chip and bank. Note that the data layout for Pages 0–7 remains the same as the baseline (Figure <ref>, top). As was the case before, each cache line in these pages is striped across Chips 0–7, such that the entire page can be stored in one row of a bank across the first eight chips (e.g., Page 0 is stored only in Row 0 of Bank 0). The extra page within this DRAM row, however, is stored only within Chip 8, instead of being striped across eight chips, as Chip 8 is the only vacant chip. As Figure <ref> shows, we break the extra Page A into eight parts, and distribute these parts across all eight banks.[If we were to instead distribute the parts of one page across several rows within a single bank, multiple accesses within a page could incur row buffer misses.] Unlike Pages 0–7, where each cache line is striped across multiple chips, each cache line of Page A is instead kept within a single bank.

Access Latency: Recall from Section <ref> that a single read operation reads data from all nine chips, retrieving 72 bytes of data over eight data bursts. As was the case in the baseline, when reading a cache line from Pages 0–7, only a single read operation is required. In this case, the data retrieved from Chip 8 is simply ignored, as it belongs to some part of Page A.[It is possible to cache the data from Page A in the memory controller and hope that it will be accessed in the near future. We do not add such a cache, as we expect that this data, which resides in a different OS page, is unlikely to be used within a short timespan.]

In contrast, reading a cache line from one of the packed extra pages, such as Page A, now requires eight back-to-back read operations, as each read operation only retrieves 8 bytes (i.e., 8 bits/burst) of useful data from Chip 8. As the entire cache line from the extra Page A is stored within a single bank, there continues to be at most one row miss: once the row is activated, all eight read operations go to different columns within the same row.

All write operations must now be performed as read-modify-writes (i.e., data must first be read into the memory controller and modified there before making changes to DRAM). This is because writes also continue to access all nine chips in parallel. For example, when we write a cache line in Page 0, 8 bytes belonging to Page A are also overwritten. Therefore, we must first read the data from Page A into the memory controller with a single read operation, so that we write back the same data to Page A (thus leaving Page A's contents unmodified). A write to a cache line in Page A requires eight write operations, for the same reason that multiple read operations were required.

Parallelism: While the number of banks remains unchanged between the baseline ECC DRAM and Solution 1, the degree of memory-level parallelism may drop slightly. Requests to extra pages have a longer occupancy within DRAM, reducing the overall request throughput.

In conclusion, our packed data layout exposes additional data capacity without modifying the ECC DRAM DIMM, but the high latency for extra-page accesses and for write operations may negate the benefits of the added capacity.
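To make the access pattern concrete, the sketch below models a cache-line read from a packed extra page. The array dimensions and the starting column are illustrative placeholders, not part of our hardware design; the point is that the eight 8-byte slices come from eight consecutive columns of a single row in Chip 8, so only the first access can incur a row miss:

```c
#include <stdint.h>
#include <string.h>

#define BANKS 8
#define ROWS  16   /* illustrative; far smaller than a real chip */
#define COLS  64

/* chip8[bank][row][col] models the 8 bytes that Chip 8 delivers
 * over the eight bursts of one read operation. */
static uint64_t chip8[BANKS][ROWS][COLS];

/* Read one 64-byte cache line of an extra page under the packed
 * layout (Solution 1): eight back-to-back read operations, each
 * returning one 8-byte slice from the same bank and row. */
static void read_extra_page_line(int bank, int row, int col0,
                                 uint8_t line[64])
{
    for (int op = 0; op < 8; op++) {
        /* One DRAM read; after the first, the row is already open,
         * so the remaining seven accesses are row buffer hits. */
        uint64_t slice = chip8[bank][row][col0 + op];
        memcpy(line + 8 * op, &slice, sizeof(slice));
    }
}
```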
§.§.§ Solution 2: Rank Subsetting

While Solution 1 (packed data layout) enables us to utilize the ECC chip capacity, it has two major drawbacks that may result in performance degradation and increased energy consumption. First, writing data to any page now requires a read-modify-write. Although writes are not usually on the critical path, the added write latency can still delay subsequent reads that are on the critical path, and it also increases energy consumption. Second, an access to the extra page within Chip 8 can disrupt the row buffer locality of accesses to a regular page within Chips 0–7, even though the data for these two pages resides in completely different chips. This is a limitation of the fact that all chips within a rank are wired to operate in lockstep.

To reduce unnecessary data transfers and reduce DRAM energy, we employ rank subsetting, which separates the nine chips within a rank into two subsets, similar to prior work on mini-ranks <cit.>. Each rank subset can be controlled independently, and thus the subsets can access different addresses in parallel. Within each subset, the chips continue to operate in lockstep. Chips 0 to 7 form an x64 rank subset (i.e., the subset delivers 64 bits of data during each data burst). An x64 rank subset operates the same as a conventional non-ECC DIMM. Chip 8 forms its own x8 rank subset, which has an 8-bit bus width. An x8 rank subset still requires eight DRAM accesses (or 64 bursts) to fetch a cache line split across eight columns in a row, the same as in Solution 1. Rank subsetting is enabled using a small bridge chip on the DRAM DIMM, which can control chip enable signals based on which subset is currently being accessed <cit.> (we discuss this further in Section <ref>). Note that we continue to use the data layout from Solution 1 (Figure <ref>).

Access Latency: Compared to Solution 1, rank subsetting allows us to avoid reading from or writing to data other than the cache line being operated on, as only the subset of chips containing the cache line data is enabled during a memory operation. As a result, it eliminates the need to perform a read-modify-write for every write request, as the chips containing unmodified data are simply disabled. Note that while this solution eliminates all redundant data transfer, each read request to an extra page (i.e., to the x8 rank subset) still requires eight accesses.

Parallelism: Rank subsetting allows us to access both subsets in parallel. As the two subsets are now decoupled from each other, a request to Chip 8 no longer disrupts the row buffer locality within Chips 0–7. However, since requests to the x8 rank subset (i.e., Chip 8) still require eight read/write operations each, the bank-level parallelism is not doubled as a result of rank subsetting.

In conclusion, adding rank subsetting to our packed data layout eliminates the need for read-modify-writes with the assistance of a small bridge chip on the DIMM, reducing the number of additional accesses. However, reads to the extra pages still incur a high latency, as they still require eight back-to-back read operations.
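The static subset selection of Solution 2 amounts to a single address check in the bridge chip, following the addressing scheme described later in our memory controller support discussion (the most significant physical address bit distinguishes extra pages from regular pages). A minimal sketch, with an illustrative bit position for an 8GB module:

```c
#include <stdbool.h>
#include <stdint.h>

/* With an 8GB baseline address space, addresses at or above 8GB
 * (bit 33 set) belong to the extra pages stored in Chip 8. The
 * exact bit position depends on the module size. */
#define EXTRA_PAGE_BIT 33

/* Drive the per-chip enable signals for the two static rank
 * subsets of Solution 2: Chips 0-7 (x64) or Chip 8 (x8). */
static void select_rank_subset(uint64_t paddr, bool chip_enable[9])
{
    bool extra = (paddr >> EXTRA_PAGE_BIT) & 1ULL;
    for (int chip = 0; chip < 8; chip++)
        chip_enable[chip] = !extra;  /* x64 subset */
    chip_enable[8] = extra;          /* x8 subset */
}
```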
§.§.§ Solution 3: Wrap-Around Data Layout

While rank subsetting in Solution 2 reduces energy consumption by eliminating unnecessary chip accesses, accesses to Chip 8 (i.e., the x8 rank subset) continue to require eight DRAM operations. Assuming that memory accesses are uniform across all pages, the average number of DRAM accesses across all pages increases by 78% (eight of every nine pages require one operation per access, while the ninth requires eight, for an average of 16/9 ≈ 1.78 operations).[Smart memory allocation could place cold pages (i.e., pages with the fewest accesses) into Chip 8, thus minimizing the total number of extra memory operations. However, this requires software support to identify cold pages, which we do not evaluate in this work.]

We propose a new solution, inter-bank wrap-around, that takes advantage of rank subsetting to ensure that every cache line access can be completed in a single operation. As each chip can still only return 8 bits in each data burst, we must completely rearrange the data layout such that all cache lines, including those in the extra pages, are striped across eight chips.

Figure <ref> illustrates how we achieve such a layout, showing the data layout for the first DRAM row of each bank (i.e., Row 0). Each row in the figure represents a DRAM bank within the first DRAM row, and each column of the table represents a DRAM chip. The original mapping of pages across the first eight rows in Bank 0 is shown at the bottom of Figure <ref> for reference.

As in the baseline, Bank 0 in our new layout contains Page 0, except in Chip 8. In the baseline, Page 1 mapped to Bank 1, across Chips 0–7. In our new layout, we move the part of Page 1 previously stored in Bank 1, Chip 7 into Bank 0, Chip 8, causing the page to wrap around over two banks. In this data layout, we modify our rank subsetting logic such that the bridge chip dynamically selects any eight chips to be operated on at a time. Thus, to access Page 1, the bridge chip now opens the first row of Bank 1 in Chips 0–6, as well as the first row of Bank 0 in Chip 8, and does not touch Chip 7. Likewise, as we show in Figure <ref>, we wrap around the remaining pages, allowing us to fit nine pages within eight banks. In this layout, we assign the extra Page A to Chips 1–8 of Bank 7, taking up the space freed by wrapping around the eight pages that originally resided in these eight banks.

Access Latency: In this data layout, all data is striped across eight chips. Compared to the packed data layout solutions (Solutions 1 and 2), no cache line requires extra memory accesses, and thus memory latency is minimized.

Parallelism: Compared to the baseline ECC DRAM, Solution 3 can in fact improve the bank-level parallelism within a DRAM module. Thanks to rank subsetting, each chip can now operate in parallel. In total, there are 8 banks × 9 chips = 72 independently-operable bank slices. For Solution 3, each DRAM access requires eight different bank slices to supply data at the same time to eliminate extra accesses (as we discussed in Section <ref>). Since each DRAM row shares the same data layout, the 72 bank slices form nine independent groups, each containing eight bank slices that are always accessed together. Thus, we are able to sustain nine concurrent requests at any time, as opposed to eight in the baseline ECC DRAM.
For example, the nine pages shown in Figure <ref> (Pages 0–7 and A) can be accessed in parallel.[If we were to wrap pages around multiple DRAM rows instead of multiple DRAM banks, the 72 bank slices would no longer form nine independent groups, and we could not achieve the same parallelism.]

In conclusion, inter-bank wrap-around eliminates all additional operations for memory requests, and increases the bank-level parallelism beyond that of the baseline ECC DRAM. As a result, we expect that inter-bank wrap-around can provide performance benefits over the baseline ECC DRAM beyond the benefits of simply providing extra DRAM capacity.

§.§ Detection-Only Memory Regions

So far, we have proposed solutions that do away with error protection in memory entirely. However, as we discussed in Section <ref>, there are applications that can loosen their reliability requirements somewhat, but are unable to tolerate silent data corruption. For such applications, even if we cannot correct an error, simply detecting it is sufficient. With an 8-bit parity code (which detects one error per data burst, or up to eight errors per cache line), we can still provide 10.7% greater effective DRAM capacity to applications. To this end, we propose a data layout solution for 8-bit parity.

Figure <ref> shows how data is laid out for 8-bit parity. Note that this figure shows the entire bank to simplify the explanation, but the solution can also be applied to a portion of a bank. In order to reduce the complexity of the addressing logic, we base the 8-bit parity solution on the rank subsetting solution with the packed data layout (Section <ref>). Within a bank, the physical pages that were already available in the baseline ECC DRAM (Pages 0 through n-1) stay in the same position, with each page occupying one row across Chips 0–7. In Chip 8, where space has been freed up from the SECDED codes, we first place the parity information. Beyond that, the remaining free space within Chip 8 is used to allocate extra pages, such as Page n, in a packed format (i.e., each such page is split across eight rows). As was done in Section <ref>, we employ two rank subsets: one covering Chips 0–7, and the other covering Chip 8.

Access Latency: For read requests to the first n pages, two read operations are performed: one for the data from Chips 0–7, and the other for the parity data from Chip 8. On a write, since the Chip 8 data contains parity information for other cache lines, a read-modify-write is again required to avoid modifying the parity information of unmodified cache lines.
For extra pages, such as Page n, a read request requires nine operations to complete: eight read operations to retrieve the data itself, and a ninth read operation to retrieve the parity code. A write request requires eight write operations for the data, and a read-modify-write to save the parity data without changing the parity information of other cache lines. In order to avoid row buffer conflicts when the parity information is read, the parity information for Bank i is saved in Bank (i+4) mod 8, minimizing the probability that a data access and its corresponding parity access contend for the same row buffer. Unfortunately, since the parity data is much smaller than the data received from a single chip during a read operation, it is difficult to avoid performing a read-modify-write for the parity data. Currently, each row of parity in Chip 8 contains the parity data for eight pages. Other data layouts, and perhaps layouts for other error detection encodings, could be employed to improve performance, but we leave such studies for future work.

§.§ Enabling Adaptive Capacity and Reliability

The various solutions for CREAM require relatively simple hardware support. Solution 1 requires modifications only within the memory controller, while Solutions 2 and 3 add simple logic to a bridge chip on the DRAM module. No changes are required inside the DRAM chips. We now discuss these modifications in detail, assuming an initial address space of 8GB on the ECC DRAM module to simplify our explanations. We quantify the overhead in Section <ref>.

§.§.§ Memory Controller Support

To support both ECC and non-ECC data on the same DRAM module, the memory controller stores, in a register, a boundary between physical pages with the conventional layout and those with the CREAM layout. This boundary can be used to determine the size of the total physical address space, since it tells us how much extra memory is added from the non-ECC portion. For an 8GB memory, the total is 8GB + (boundary ≫ 3). The physical pages within the boundary use the CREAM data layout and store non-ECC data. The pages mapped to Chips 0–7 in the CREAM layout (e.g., Pages 0–7 in Figures <ref> and <ref>) are mapped to physical addresses from 0 to boundary. The extra pages (e.g., Page A) are mapped to physical addresses ranging from 8GB to the end of the address space. Physical pages outside of the boundary use the conventional layout and store ECC-protected data; these pages are mapped to physical addresses between boundary and 8GB. This simple boundary has two benefits: (1) the address alone is sufficient to identify whether a page has error correction; and (2) as non-ECC pages are arranged at the beginning of the physical address space, the address offset of the extra pages is easy to calculate. Note that for Solutions 2 and 3, the memory controller needs to communicate this boundary to the bridge chip, where the address translation takes place.

For Solution 1, all of the logic for CREAM, including the address translation logic, is implemented within the memory controller, so the ECC DRAM modules do not require any modification. The memory controller translates each read request to the extra pages into eight back-to-back cache line accesses. The eight accessed addresses, ACC_i, can be easily obtained from the requested address, REQ: ACC_i = ((REQ − 8GB) ≪ 3) + i, for i = 0, 1, …, 7. To assemble the requested cache line, the memory controller buffers and combines the partial data from Chip 8 of these eight accessed cache lines within a 64B shift register that we add to the memory controller.
The same shift register is reused to stage data during the read-modify-write operation for all pages. These modifications are unnecessary for Solutions 2 and 3.

§.§.§ DRAM Module Bridge Chip

Today's servers typically use registered memory (RDIMMs), which contains a bridge chip on the DRAM module with logic to buffer the control and addressing information from the memory controller. We propose to add simple circuitry to this existing bridge chip, to support rank subsetting and to handle the proposed address translation schemes in hardware. To translate the physical address of each incoming request, the bridge chip takes the requested address sent by the memory controller, and converts it into the rank subset enable signal for each chip and the row address for each rank subset. Thanks to the way that we map the extra pages, no address translation is required when accessing ECC-protected data. For Solution 2, the nine chips are statically divided into two rank subsets, and the most significant bit of the requested address determines which subset is activated. The bridge chip, instead of the memory controller, then translates the address using the same simple logic as in Solution 1. For Solution 3, we form two rank subsets dynamically using eight out of the nine chips, with each subset accessing a different row within the chip. We can determine which eight chips should be used based on the original bank number (i.e., the three least significant bits of the row number): the ID of the chip to be ignored is (8 − BANK_ID).

§.§ Hardware Overhead

To determine the overhead of our hardware modifications, we synthesized them using Synopsys Design Compiler <cit.>, with an open-source 14nm CMOS cell library <cit.>. We find that the hardware overheads of our various CREAM solutions are very modest.

For Solution 1, we evaluated the overhead of the address translation logic that must be implemented within the memory controller. As a baseline, we used the Verilog design of an FR-FCFS memory scheduler <cit.>. The modifications for CREAM increase the area overhead of the memory controller logic by only 2.0%. As a comparison point, the total memory controller logic area comprises only 2.7% of the area of an ARM Cortex-A72 core <cit.>. We also need a 64B register to stage partial cache lines during the read and read-modify-write operations. We find that the logic latency of the memory controller increases by 6.3% over FR-FCFS. Compared to many previously-proposed schedulers, the FR-FCFS memory scheduler has a much lower latency <cit.>; thus, the CREAM Solution 1 scheduler should also be much faster than these other schedulers.

For Solution 3, we evaluate the overhead of the logic that we add to the bridge chip. We find that the total area of the additional logic is only 493μm^2, representing less than 0.043% of the total area of an ARM Cortex-A72 <cit.>. The estimated latency of the circuit is 198ps, which is much lower than the one-DRAM-cycle latency (1.5ns in our simulations) that we conservatively use for the bridge chip delay. We need to add 9 chip-select pins and 24 address pins (8 sets of 3 bits, for the LSBs of the different row IDs) to the bridge chip.

§ METHODOLOGY

Simulation Framework: To quantitatively analyze the performance of CREAM, we implement all three of our protection-free solutions, as well as our detection-only solution, in Ramulator <cit.>, a detailed DRAM simulator.
We modify the simulator to accurately model rank subsetting, and we add a one-cycle delay for the simple translation logic (as described in Section <ref>) within the bridge chip. The parameters of the simulated system are summarized in Table <ref>. In our simulations, we emulate the page replacement policy using an active list and an inactive list, similar to those used in a modern Linux virtual memory manager <cit.>. We set the page fault penalty to 500μs, which includes a 300μs SSD access latency and a 200μs software latency <cit.>.

Workloads: We evaluate two types of workloads: data-intensive workloads that are sensitive to memory capacity, and latency-sensitive workloads. For our capacity-sensitive workloads, in addition to the WebSearch workload studied in Section <ref>, we evaluate two configurations of a memory-caching server workload <cit.>. We run a synthetic client workload that queries a 20GB dataset at a rate of 2430 queries/second, with the server running four threads. The first workload configuration prevents paging, by setting the server's memory usage to 8GB and pinning all of its resident memory in DRAM. The second workload configuration thrashes the physical address space across all of our evaluation configurations by setting the memory usage to 10GB. In this configuration, the server uses more memory space than is available on the system, even when CREAM is used, and always triggers page faults.

For our memory latency-sensitive workloads, we construct 40 multiprogrammed four-core workloads, using applications from SPEC CPU2006 <cit.> and TPC <cit.>. We classify each application based on its number of last-level cache misses per thousand instructions (MPKI), as has been done in prior work (e.g., <cit.>). Applications with an MPKI greater than 10 are classified as memory-intensive, and all other applications are classified as non-memory-intensive. We sweep over the fraction of memory-intensive applications within each workload, ranging from 0% to 100%. For each category in the sweep, we build eight workloads by randomly selecting memory-intensive and non-memory-intensive applications. Each application in a workload is run until the slowest application completes 200 million instructions, to ensure that realistic contention is simulated. We quantify multiprogrammed workload performance using weighted speedup, a commonly-used metric for multicore workload performance <cit.>. Weighted speedup is calculated as the sum of the speedups of each application (versus a baseline where each application runs without interference); a code sketch of this metric is given after the configuration list below.

§ EVALUATION

We now evaluate the performance of CREAM, our proposed mechanism for exposing the additional capacity of ECC DRAM when applications do not require strong reliability. We examine the following configurations:

* an unmodified ECC DRAM;
* a CREAM configuration that uses the packed data layout (Section <ref>);
* a CREAM configuration that uses the packed data layout in conjunction with rank subsetting (Section <ref>);
* a CREAM configuration that uses the inter-bank wrap-around data layout in conjunction with rank subsetting (Section <ref>);
* our detection-only CREAM configuration with 8-bit parity (Section <ref>); and
* a mechanism based on Virtualized ECC <cit.> that provides error correction in non-ECC DRAM by storing SECDED information within some of the physical pages in DRAM, lowering the effective capacity of the DRAM by up to 11.1%.
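As referenced above, the weighted speedup metric reduces to a few lines of code (a minimal sketch; the function and variable names are ours):

```c
/* Weighted speedup of an n-application workload: the sum, over all
 * applications, of the IPC achieved when running together with the
 * other applications, divided by the IPC achieved when run alone. */
static double weighted_speedup(const double ipc_shared[],
                               const double ipc_alone[], int n)
{
    double ws = 0.0;
    for (int i = 0; i < n; i++)
        ws += ipc_shared[i] / ipc_alone[i];
    return ws;
}
```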
§.§ Capacity-Sensitive Workloads

We evaluate the data-intensive workloads described in Section <ref>. The memory-caching server is typically used as a caching layer that aims to reduce the query traffic to the back-end storage layer <cit.>. However, while increasing the memory capacity of such a server can increase its hit rate in the memory caching layer, and thus reduce the overall percentile latency, we do not evaluate this benefit.

Figure <ref> plots the speedup for each workload. We first look at the 8GB workload configuration, where no page faults occur in any of the systems that we evaluate. We use this to observe the overhead of CREAM for a data-intensive application. We find that while the packed data layout incurs a moderate overhead over the baseline, at 17.0%, the inter-bank wrap-around layout in fact achieves a slight performance improvement (of 0.8%), as its increased bank-level parallelism outweighs the additional latencies. With no effective overheads, we believe that the WebSearch workload used in our motivational studies (Section <ref>) will come close to the average performance improvement of 37.0% reported in those overhead-free studies.

In order to understand the aggregate impact of CREAM, combining the capacity benefits with all CREAM overheads, we study the 10GB workload configuration, which generates page faults under both the baseline and CREAM. This workload represents the usage scenario where page faults are already unavoidable, which can happen due to memory ballooning <cit.> or application behavior. As we see in Figure <ref>, all of the CREAM configurations show large benefits from the added capacity, even when factoring in all overheads. We observe that even for the packed data layout, which has the highest overhead among our CREAM solutions, the added memory capacity and the reduction in page faults easily overcome this overhead. The best CREAM configuration, inter-bank wrap-around, achieves a speedup of 23.0%. Our detection-only CREAM configuration also sees a reasonable speedup of 19.1%, though this is lower than the protection-free configurations due to its smaller increase in DRAM capacity.

We conclude that CREAM is effective at delivering significant performance increases for capacity-sensitive applications that do not need ECC protection.

§.§ Latency-Sensitive Workloads

We now evaluate CREAM on our multiprogrammed latency-sensitive workloads. Unlike the caching workload, many applications cannot be configured to take advantage of the increased memory capacity, but they can still benefit from the increased bank-level parallelism provided by CREAM. For these results, we assume that CREAM has removed all error protection from the DRAM, exposing an additional 12.5% of memory capacity. However, no capacity-related benefits appear in these results, as the workloads are not sensitive to memory capacity.

Figure <ref> shows the weighted speedup for the baseline and our three correction-free CREAM configurations when the whole DRAM module has no error correction, normalized to the baseline weighted speedup, on the y-axis. On the x-axis, each group of bars represents a different number of memory-intensive applications within the workload (see Section <ref>). We make four observations from these results: (1) the packed data layout (Solution 1) experiences an average performance degradation of 29.9%; (2) adding rank subsetting (Solution 2) does better, but still incurs an average performance degradation of 16.1%; (3) both Solutions 1 and 2 experience worse performance degradation as the workload memory intensity increases; and (4) inter-bank wrap-around (Solution 3) improves system performance by 2.4%, with greater improvements at higher memory intensities.
We now examine why we observe these performance trends.

Extra Memory Requests: Figure <ref>a shows the number of memory requests issued to the DRAM, normalized to the baseline, along the y-axis. The x-axis is the same as in Figure <ref>. We make three observations from these results: (1) Solution 1 effectively doubles the number of memory requests performed, on average, over the baseline, as a result of its additional read operations and its need for read-modify-write operations; (2) Solution 2 reduces the percentage of extra requests to an average of 77.2% across all workloads, which corresponds to the elimination of the read-modify-write operations that take place in Solution 1; and (3) Solution 3 eliminates all extra memory requests. This agrees with our expectation from Section <ref>, as Solution 3 rearranges all of the pages to span across eight DRAM chips.

In-DRAM Parallelism: Figure <ref>b plots the average number of concurrent memory requests, normalized to the baseline, along the y-axis. The x-axis is the same as in Figure <ref>. We find that this figure shows trends similar to those in Figure <ref>, indicating that in-DRAM parallelism is a major contributor to the performance variation across CREAM configurations. Solution 2 has reduced parallelism because each memory request to data in Chip 8 expands to eight commands, preventing other requests to the same bank from being serviced. Solution 1 reduces parallelism even more, as its read-modify-write operations also require multiple commands per request. In contrast, Solution 3 improves parallelism by 3.1% over the baseline, because it fully utilizes all of the independent units on the ECC DRAM to increase the effective amount of bank-level parallelism.

Row Buffer Locality: Figure <ref>a plots the row buffer hit rate normalized to the baseline, along the y-axis. The x-axis is again the same as in Figure <ref>. We make three observations from these results: (1) Solution 1 reduces the row buffer hit rate by 1.6%; without rank subsetting, the number of row buffer misses increases, but the eight commands issued for every request to Chip 8 counteract this by introducing more row buffer hits; (2) Solution 2 improves the row buffer hit rate significantly, as rank subsetting eliminates the increase in row buffer misses seen in Solution 1, but retains the increase in row buffer hits due to Chip 8 requests; and (3) Solution 3 increases the row buffer hit rate by 2.7%, due to its increased in-DRAM parallelism. Overall, we find that row buffer locality has little impact on performance.

Average Memory Latency: Figure <ref>b plots the average memory latency normalized to the baseline, along the y-axis. The x-axis remains the same as in Figure <ref>. We find that average memory latency is inversely correlated with performance, and thus is also a major contributor to the variation across CREAM configurations. Unsurprisingly, Solutions 1 and 2 experience high average latencies, as the additional commands per request can delay other pending memory requests. In contrast, the additional parallelism offered by Solution 3 reduces memory contention, translating into shorter request latencies.
§.§ Sensitivity Study: Correction-Free Size

So far, we have assumed that the entire physical memory address space of an ECC DRAM is transformed into correction-free memory. In this section, we study how the performance of CREAM changes as larger portions of the DRAM are set aside for strong error correction (i.e., SECDED). We compare the performance of our CREAM configurations to Virtualized ECC. CREAM incurs no performance penalty for SECDED, as detection and correction are already implemented within the memory controller. In contrast, Virtualized ECC requires modifications to the processor's Memory Management Unit (MMU) so that it can issue separate memory requests for the SECDED data, and it also utilizes space in the last-level cache to store recently-used SECDED data <cit.>.

We sweep over the percentage of DRAM reserved for SECDED correction. Figure <ref> plots the weighted speedup, normalized to the baseline, along the y-axis. The first six bars in each group show the performance of the Virtualized ECC configuration (when no error correction is required, Virtualized ECC behaves the same as the baseline). The remaining six bars show the performance of inter-bank wrap-around, the best of our CREAM solutions. We make three key observations from this data: (1) as the memory intensity of the workload increases, the performance of Virtualized ECC decreases, because it uses last-level cache space to store ECC data, increasing cache contention; (2) as the percentage of DRAM using SECDED increases, the performance of Virtualized ECC also drops, by as much as 25.1% at our highest memory intensity; and (3) across all proportions of SECDED-covered DRAM, CREAM maintains minimal performance degradation, with the largest performance drop being only 4.0%. The small performance drops for CREAM occur when there is a balance between the amount of SECDED-covered DRAM and correction-free DRAM (the worst performance occurs at 60% SECDED coverage), because a SECDED-covered cache line destroys the row buffer locality of up to two rank subsets that were being used by a correction-free cache line. We conclude that these impacts are minimal, and that even setting aside the performance improvements from CREAM's larger memory capacity, CREAM has a very low performance impact when switching between SECDED-covered and correction-free DRAM regions across the entire range of our sensitivity study.

§ RELATED WORK

To our knowledge, this paper is the first to (1) exploit the ECC storage within an ECC DRAM module as extra memory capacity for applications or memory regions that do not require high reliability, and (2) propose a hardware mechanism that rearranges the data layout in an ECC DRAM module to efficiently exploit this extra memory capacity.

We have already compared the performance of our work, CREAM, to a mechanism similar to Virtualized ECC <cit.> in Section <ref>. Virtualized ECC (VECC) uses software to map ECC bits onto non-ECC DRAM modules, providing flexibility in the reliability and capacity provisioned in the memory. We show that VECC can adversely impact performance in some cases, whereas CREAM degrades much more gracefully: the worst-case performance degradation of VECC over a baseline ECC DRAM module is 25.1%, while CREAM's is less than 4%. In addition, CREAM provides 12.5% extra data capacity in the DRAM module when ECC protection is not required, while Virtualized ECC reduces data capacity by 11.1% when ECC protection is used for all data.
Virtualized ECC requires hardware changes to the MMU, as well as OS support to allocate physical pages for ECC storage. CREAM requires hardware changes only to the memory controller and the bridge chip on the DRAM module, and does not require OS support (as everything is handled in hardware).

There has been a large body of work on providing flexible, efficient, and more powerful ECC protection in DRAM <cit.>, as well as flexible latencies or supply voltages in DRAM <cit.>. None of these works make use of the space reserved for ECC to gain higher capacity. Prior work has proposed in-DRAM ECC correction mechanisms <cit.> (as opposed to correction in the controller). CREAM can potentially be extended to such devices with in-DRAM ECC mechanisms, to exploit the extra capacity dedicated to ECC when the reliability guarantees provided by ECC are not required. Many prior works have proposed to change the data layout <cit.> or use rank subsetting <cit.> on an ECC or non-ECC DRAM module for various reasons. None of these works use either technique to gain data capacity from the space reserved on an ECC DRAM module for correction codes.

§ CONCLUSION

ECC DRAM, widely used in today's large-scale server systems, adds an extra DRAM chip to each DRAM module to store the error-correcting codes required for increased reliability. While some applications or memory regions require the error protection offered by ECC DRAM, others do not need error correction. Even though these other applications or memory regions may benefit from additional DRAM data capacity, the extra capacity within ECC DRAM is not available to them, as it is used exclusively for strong error protection codes. In this work, we propose Capacity- and Reliability-Adaptive Memory (CREAM), a mechanism that exposes the additional ECC DRAM capacity to those applications that do not require error correction. CREAM converts a part of the ECC DRAM space to provide either no correction or lightweight error detection, freeing up space previously used by error-correcting codes for use as additional data capacity within DRAM. We perform experiments with two large-memory workloads, and find that the additional data capacity that CREAM delivers improves their performance significantly. We also find that CREAM can deliver this additional data capacity without any significant performance overhead. We conclude that CREAM is a practical mechanism that enables the use of capacity otherwise dedicated to error correction in modern ECC DRAM modules for data storage, leading to significant performance improvements and a new capability to efficiently trade off between reliability and memory capacity.
A simple method for shifting local dq impedance models to a global reference frame for stability analysis

Atle Rygg, Marta Molinas, Eneko Unamuno, Chen Zhang and Xu Cai

Impedance-based stability analysis in the dq-domain is a widely applied method for power-electronic-dominated systems. An inconvenient property of this method is that impedance models are normally referred to their own local reference frame, and need to be recalculated when referred to a global reference frame in a given network. This letter presents a simple method for translating impedance sub-models within a complex network from their own reference frames to any given point in the network. What distinguishes this method is that, by using a simple rotation matrix, it only needs the impedance models in their own local reference frames in order to translate them to a global reference in the network. By way of this method, standard circuit analysis rules for series and parallel connection are applicable, as proven in the letter. The method is defined and validated for impedances in the dq and modified sequence domains, and it is shown that the dependency on the reference frame is marginal in the latter. An additional finding from the application of this method is that components or subsystems with a certain symmetry property, called Mirror Frequency Decoupling, are invariant to the choice of reference frame. The method is illustrated and validated by comparing analytical calculations with a frequency sweep in MATLAB Simulink.

Index terms: dq-domain, Impedance Modeling, Power Electronic Systems, Modified Sequence Domain, Stability Analysis.

§ INTRODUCTION

Small-signal stability analysis of power electronic systems is often conducted using impedance-based analysis <cit.>. Previous works have developed impedance models of power electronic converters using various techniques, e.g., harmonic linearization in the sequence domain <cit.> and dq-domain modeling <cit.>. More recent works have proposed modified sequence domain modeling <cit.> and phasor-based impedance modeling <cit.>.

The dq-domain, modified sequence domain, and phasor-based methods all share the same challenge when applied to systems with multiple units: the impedance matrices are referred to a certain reference frame or phase angle. In other words, they are referred to a certain local point in the network. Consequently, when performing system-level analysis, one must ensure that all submodels are referred to the same (global) reference frame. So far, this challenge has only been addressed by a few papers performing dq-domain stability analysis. In <cit.> and <cit.>, dq-domain stability analysis is applied to a system composed of multiple converters. The challenge of local vs. global reference frames is addressed by introducing a rotation matrix in a similar way as in the present letter, with the difference that their rotation matrix is integrated into the converter models through a case-specific method, and the same method is not applicable to an arbitrary network. The resulting expressions for the source and load subsystem impedance matrices are also very complex, and the complexity increases drastically for larger systems.
The present letter proposes a simple method in which the alignment is achieved by a rotation matrix based on load flow information. The method enables the use of impedance models in their own local reference frames without any knowledge of their internal structure. When all models are referred to the global reference frame, standard circuit analysis rules, including the series and parallel connection rules, can be applied to the impedance matrices. The letter also reveals that:

* Sub-blocks or subsystems that satisfy the definition of a Mirror Frequency Decoupled (MFD) system <cit.> are invariant to the choice of reference frame.
* In the dq-domain, all four matrix elements are affected by rotation, while in the modified sequence domain, only the angles of the off-diagonal elements are affected.

The proposed method makes dq- and modified sequence domain impedance analysis applicable to systems of any scale. The method has low complexity and is simple to use. The letter is organized with the method presented in Section <ref>, while it is applied to a case study system in Section <ref>. The method is validated by comparing the impedance matrix from analytic calculations with a frequency sweep from a MATLAB time-domain simulation. Important mathematical derivations as well as parameter values are included in the appendix.

§ METHOD DESCRIPTION

The method is explained based on the flowchart in Fig. <ref>; each step is explained in the following subsections.

§.§ Define global reference frame

The first step in the proposed framework is to select the node to which all impedance matrices shall be referred. Any node can be selected, and the choice will not affect the stability analysis. If the selected stability analysis is based on source and load impedance equivalents, a logical choice of global reference frame is the source/load interface point.

§.§ Run power flow calculation

Running a power flow calculation provides the steady-state operating point of the system. This is required for two purposes. First, analytical impedance models, e.g., the ones derived in <cit.>, depend on the operating point. This is due to non-linearities in the converter controller that require linearization around the operating point. Second, the method proposed in this letter requires the fundamental voltage angle at each node in the system, defined as θ_1, θ_2, ..., θ_n in Fig. <ref>. This is also highlighted in the case example in Fig. <ref>. The phase angle at the global reference frame node is defined as θ=0.

§.§ Obtain local impedance models of components

Power electronic converters (and other units) are represented by local terminal equivalents in impedance-based analysis. The terminal equivalent is defined by the impedance model in this letter, as the analysis is based on transfer matrices between current and voltage, similar to <cit.>. The impedance models can be obtained either from analytical models or from measurements. Manufacturers can provide these models from, e.g., factory tests. It is important to emphasize that the models of each component are normally referred to their local terminal point, indicated by superscript l in this letter. Referring the sub-models to a global reference frame is explained in Subsection <ref>.

Referring to the case example in Fig. <ref>, one must obtain the impedance models of each converter, 𝐙_c1^l...𝐙_c4^l, and also information on other network components, e.g., the impedances 𝐙_1...𝐙_4 and 𝐙_th. Bold 𝐙 indicates that these impedances are 2x2 matrices.
§.§ Does the component satisfy MFD?

A Mirror Frequency Decoupled (MFD) system was defined in <cit.>, and this definition is useful when discussing local vs. global reference frames. It is shown in Appendix <ref> that MFD systems do not depend on the reference frame, i.e., they are rotationally invariant. A list of typical power system components and control blocks categorized as MFD and MFC is presented in Table <ref>. Generally, power electronic converters are not MFD due to control blocks such as the PLL and DC-link voltage controllers. Passive components are always MFD.

§.§ Align models with global reference frame

If a component does not satisfy the MFD condition, its impedance model will depend on the reference frame. The needed alignment from the local to the global reference frame is given by the following relations, derived in Appendix <ref>:

𝐙_dq^g = 𝐓_dq 𝐙_dq^l 𝐓_dq^-1,    𝐙_pn^g = 𝐓_pn 𝐙_pn^l 𝐓_pn^-1

𝐓_dq = [ cos θ_i  sin θ_i ; -sin θ_i  cos θ_i ],    𝐓_pn = [ e^jθ_i  0 ; 0  e^-jθ_i ]

where subscript dq denotes a dq-domain model, while pn denotes a modified sequence domain model. Superscript g denotes the global reference frame, while l denotes the local reference frame. The angle θ_i is the angle between the local reference frame of component i and the global reference frame defined in Section <ref>. Of note, all four matrix elements are affected by rotation in the dq-domain, while only the angles of the off-diagonal elements are affected by rotation in the modified sequence domain. A special case is when there is no power flow in the system (the no-load case). Then, all voltage angles are equal, and no rotation is needed.

§.§ Perform analysis

Once impedance models are obtained for all network components in the system, and referred to the global reference frame, the stability analysis can be conducted by various methods. A common approach is to apply the Generalized Nyquist Criterion (GNC) to the source and load impedance equivalents. This was first applied to power electronic systems in <cit.>. See <cit.> and <cit.> for examples of GNC applied to power electronic systems. When applying GNC or other analysis methods, it is important that all submodels are aligned with the same reference frame. As an example, (<ref>) presents how the equivalent impedance of a four-converter subsystem is calculated by series and parallel connection.

§ VALIDATION BY SIMULATION

A network example is presented in Fig. <ref>. The system includes four grid-connected converters connected in a radial structure. Details on the converter model are given in Appendix <ref>. The analytical model provides the impedance matrices 𝐙_c1^l...𝐙_c4^l shown in Fig. <ref>. The load subsystem total impedance can be found by series and parallel connection as:

𝐙_load = (𝐙_1+𝐙_c1^g) || (𝐙_2+𝐙_c2^g) || (𝐙_5+(𝐙_3+𝐙_c3^g) || (𝐙_4+𝐙_c4^g))

where all converter impedances are referred to the global reference frame by applying the relations in (<ref>) to the local impedance models. This is indicated by superscript g. Note that the line impedances do not require alignment with the global reference frame, as they satisfy the MFD condition. The comparison between the analytical calculation from (<ref>) and a frequency sweep in MATLAB Simulink is presented in Fig. <ref> for the dq-domain and in Fig. <ref> for the modified sequence domain. The perturbation injection method is applied at the interface point in Fig. <ref>.
It is clear that the impedances resulting from the analytic calculation match exactly the ones resulting from the frequency sweep, validating the relations in (<ref>) and, thereby, the method for referring impedance matrices from a local to a global reference frame.

§ CONCLUSION

The letter has presented a simple method to ease the applicability of impedance-based analysis in the dq-domain and modified sequence domain at different points in a given complex network. By a simple rotation matrix, the different impedance models in the network can be moved from their local reference frames to a global reference frame within the network. The decoupled nature of the method avoids the need to integrate the rotation factor into the local impedance model, and thereby greatly simplifies system analysis referred to a global reference frame. A secondary result of this method is that, by way of the proposed rotation matrix, aggregate impedances can be easily calculated based on impedance sub-models referred to their own local terminals. The method then allows the use of standard series and parallel connection rules, which makes system analysis significantly easier. It is found that in the dq-domain all four impedance elements are affected by rotation, while in the modified sequence domain only the off-diagonal elements are affected. Furthermore, it is shown that a Mirror Frequency Decoupled (MFD) system is invariant to rotation, and its model can therefore be used directly in series and parallel connections.

§.§ dq-symmetric systems and mirror frequency decoupled systems

In <cit.>, a system was defined as Mirror Frequency Decoupled (MFD) if there is no coupling between s+jω_1 and s-jω_1, that is, Z_pn = Z_np = 0. It can be shown that this property is equivalent to a dq-symmetric system as defined in <cit.>. This condition is given by:

Z_dd = Z_qq,    Z_dq = -Z_qd

It was shown in <cit.> that in an MFD system the original sequence domain impedance defined in <cit.> gives stability results identical to those of the dq-domain analysis.

§.§ Derivation of rotation matrices

Rotating a dq-domain voltage (or current) vector by an angle θ is achieved by the following relation <cit.>:

[ V_d ; V_q ]_rot = [ cos θ  -sin θ ; sin θ  cos θ ] [ V_d ; V_q ] = 𝐓_dq [ V_d ; V_q ]

where the rotation matrix is defined as 𝐓_dq. As the same equation is valid for the current vector, the dq-domain impedance matrix is rotated as:

𝐙_dq,rot = 𝐓_dq 𝐙_dq 𝐓_dq^-1

By applying the impedance transform from <cit.>, the rotation matrix 𝐓_pn in the modified sequence domain is found as:

𝐓_pn = 𝐀_z 𝐓_dq 𝐀_z^-1 = 1/2 [ 1  j ; 1  -j ] [ cos θ  -sin θ ; sin θ  cos θ ] [ 1  1 ; -j  j ] = [ e^jθ  0 ; 0  e^-jθ ]

Expanding this rotation matrix yields:

𝐙_pn,rot = 𝐓_pn 𝐙_pn 𝐓_pn^-1 = [ Z_pp  Z_pn e^j2θ ; Z_np e^-j2θ  Z_nn ]

It is seen from (<ref>) that only the angles of the off-diagonal elements Z_pn and Z_np are affected by rotation in the modified sequence domain. As these elements are zero for an MFD system (Appendix <ref>), an MFD system is rotationally invariant in the modified sequence domain. As a consequence, the dq-domain impedance matrix is also invariant to rotation for an MFD system.

§.§ Case study data

The converter model used in the case study is presented in Fig. <ref>. The current controller is a standard PI-controller, H_c(s) = k_p + k_i/s, while the PLL is a standard synchronous reference frame PLL based on a PI-controller, H_PLL(s) = k_p,pll + k_i,pll/s. Numerical data are given in Table <ref>. The set-points for each current controller are given in Table <ref>.
The impedance model for each converter is based on the model derived in <cit.>, and repeated in (<ref>). Expressions for each transfer matrix in (<ref>) are omitted here but can be found in <cit.>. The corresponding modified sequence domain model is obtained by the transform derived in <cit.>.

𝐙_c,dq = (Z_out^-1 + G_id G_del ([-G_ci + G_dei] G_PLL^i + G_PLL^d) K)^-1 · (I + G_id G_del [G_ci - G_dei] K)

The line impedances are given as per unit values in Fig. <ref>, and the per unit base Z_base is given in Table <ref>. The X/R-ratio is 10 for each impedance.
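As a quick numerical cross-check of the appendix derivations (a standalone sketch with an arbitrary test angle, not part of the letter), one can verify that 𝐀_z 𝐓_dq 𝐀_z^-1 is indeed diag(e^jθ, e^-jθ) and that a dq-symmetric (MFD) impedance is unchanged by rotation:

```python
import numpy as np

theta = 0.37  # arbitrary test angle

t_dq = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])
a_z = np.array([[1, 1j], [1, -1j]])  # dq -> modified sequence domain

# T_pn = A_z T_dq A_z^-1 should equal diag(e^{j theta}, e^{-j theta})
t_pn = a_z @ t_dq @ np.linalg.inv(a_z)
assert np.allclose(t_pn, np.diag([np.exp(1j * theta), np.exp(-1j * theta)]))

# An MFD (dq-symmetric) impedance: Z_dd = Z_qq and Z_dq = -Z_qd
z_mfd = np.array([[2.0, -0.5], [0.5, 2.0]])
z_rot = t_dq @ z_mfd @ np.linalg.inv(t_dq)
assert np.allclose(z_rot, z_mfd)  # rotational invariance of MFD systems
print("T_pn is diagonal and the MFD impedance is rotation-invariant")
```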
http://arxiv.org/abs/1706.08313v1
{ "authors": [ "Atle Rygg", "Marta Molinas", "Eneko Unamuno", "Chen Zhang", "Xu Cai" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170626103951", "title": "A simple method for shifting local dq impedance models to a global reference frame for stability analysis" }
[email protected] Institute for Theoretical Physics, Regensburg University, D-93053 Regensburg, Germany [email protected] Institute for Theoretical Physics, Regensburg University, D-93053 Regensburg, Germany Institute for Theoretical and Experimental Physics (ITEP), 117218 Russia, Moscow, B. Cheremushkinskaya str. 25

We present a first-principles lattice study of the two-dimensional SU(N) × SU(N) Principal Chiral Model (PCM) on the cylinder ℝ× S^1 with variable compactification length L_0 of S^1 and with both periodic and Z_N-symmetric twisted boundary conditions. For both boundary conditions our numerical results can be interpreted as signatures of a weak crossover or phase transition between the regimes of small and large L_0. In particular, at small L_0 thermodynamic quantities exhibit nontrivial dependence on L_0, and the static correlation length exhibits a weak enhancement at some “critical” value of L_0. We also observe important differences between the two boundary conditions, which indicate that the transition scenario is more likely in the periodic case than in the twisted one. In particular, the enhancement of the correlation length for periodic boundary conditions becomes more pronounced at large N, and practically does not depend on N for twisted boundary conditions. Using the Gradient Flow we study the non-perturbative content of the theory and find that the peaks in the correlation length appear when the length L_0 becomes comparable with the typical size of unitons, unstable saddle points of the PCM. With twisted boundary conditions these saddle points become effectively stable and one-dimensional in the regime of small N L_0, whereas at large N L_0 they are very similar to the two-dimensional unitons with periodic boundary conditions. In the context of the adiabatic continuity conjecture for PCM with twisted boundary conditions, our results suggest that while the effect of the compactification is clearly different for different boundary conditions, one still cannot exclude the possibility of a weak crossover separating the strong-coupling regime at large N L_0 and the Dunne-Ünsal regime at small N L_0 with twisted boundary conditions.

Lattice study of continuity and finite-temperature transition in two-dimensional SU(N) x SU(N) Principal Chiral Model

P. V. Buividovich and S. N. Valgushev

June 27th, 2017

§ INTRODUCTION

The innovative idea of resurgent trans-series has recently allowed a better understanding of the structure of perturbative expansions for asymptotically free quantum field theories, such as four-dimensional gauge theories and two-dimensional sigma models. In particular, for two-dimensional sigma models resurgent trans-series provide a precise interpretation of the factorial non-Borel-summable infrared renormalon divergences of perturbative series <cit.> in terms of saddle points of the classical action, even if they are non-topological, unstable and/or complex-valued. However, at present this interpretation of infrared renormalon divergences can be explicitly worked out only for quantum field theories with compactified spatial direction ℝ^d-1× S^1, in which the fields satisfy certain twisted boundary conditions.
The compactification length L_0 should be sufficiently small, with N L_0 Λ≪ 2 π, where Λ is the dynamically generated energy scale and N is the rank of the symmetry group. In this limit the theory is in the weakly coupled regime while still exhibiting non-perturbative features such as a dynamically generated mass gap. This defines the so-called Ünsal-Dunne regime, which allows for explicit construction of trans-series.

The so-called continuity conjecture states that the Ünsal-Dunne regime at small L_0 is analytically connected to the strongly coupled regime at large L_0 <cit.>, in which boundary conditions become irrelevant and the physics is equivalent to the low-temperature phase with periodic boundary conditions. This conjecture is based on the observation that the physical properties of gauge theories and sigma models appear to be qualitatively very similar both in the Ünsal-Dunne regime and in the genuine strong-coupling regime at small temperatures or large compactification length.

The continuity conjecture is closely related to the Eguchi–Kawai (EK) reduction in lattice field theory <cit.>, where the full theory is suggested to be equivalent to a twisted single-site model in the large-N limit. It is known that the original EK reduction without twist <cit.> does not work due to spontaneous breaking of the center symmetry ℤ_N^d <cit.>. Considering the EK reduced model as a result of continuous dimensional reduction from large to very small lattices, it is evident that this symmetry breaking is a manifestation of the deconfinement phase transition happening when the torus is sufficiently small, L_0 Λ∼ 1 <cit.>. A possible solution is to introduce twisted boundary conditions which preserve the center symmetry and prevent it from spontaneous breaking, thus suppressing the deconfinement transition and allowing for an analytic (or volume-independent) connection between the regimes of small and large L_0. However, lattice simulations of the twisted EK reduced model indicate that spontaneous symmetry breaking can still occur and pose the question of the existence of the continuum limit <cit.>, although it seems that these difficulties can be overcome <cit.>. Possible ways of stabilizing the center symmetry in gauge theories are special deformations of the gauge action <cit.> or the introduction of adjoint fermions <cit.>, which effectively induce center-preserving holonomies along the compactified directions. These ideas were important to formulate the continuity conjecture in PCM.

In two-dimensional sigma models the prescription for the ℤ_N-preserving twist Ω∈ SU(N) reads as:

Tr Ω^n = N, n ≡ 0 (mod N); 0, otherwise.

This operator either projects excited states out or provides a phase shift which leads to mutual cancellations between distinct states in the partition function. Since a lot of excited states do not contribute at all to the twisted partition function <cit.>, one can hope that the deconfinement transition is eliminated <cit.>. However, to turn the continuity conjecture into a precise statement, one should demonstrate that no phase transition or crossover occurs as the compactification length L_0 changes from large values with N L_0 Λ≫ 2 π to small values with N L_0 Λ≪ 2 π. At present a rigorous analytic demonstration of this fact is still lacking due to the absence of reliable analytic methods for strongly coupled quantum field theories. Notable exceptions are the exactly solvable large-N ℂℙ^N-1 and O(N) non-linear sigma models, for which an explicit demonstration has been worked out <cit.>.
However, for the Principal Chiral Model (PCM), which is especially interesting due to its matrix-like planar limit very similar to that of QCD, the problem clearly calls for first-principles simulations. Unfortunately, not much is known about the thermodynamic properties of PCM in general, although this model is integrable and many exact results can be obtained using bootstrap techniques <cit.>. One of the reasons is that for PCM there is no obvious local order parameter which can be used to characterize the “deconfinement” phase transition, rendering analytic and lattice studies very difficult. Recently a thermodynamic Bethe ansatz has been proposed in order to investigate thermodynamic properties <cit.>, however without definite conclusions so far.

In this work we test the continuity conjecture for the two-dimensional SU(N) × SU(N) PCM using first-principles Monte-Carlo simulations. We study several characteristic quantities such as the static correlation length, mean energy and specific heat and demonstrate that they exhibit qualitatively different dependence on the length of the compact direction L_0 with periodic (Section <ref>) and twisted (Section <ref>) boundary conditions. In both cases we find some evidence for a transition/crossover, which however possesses very different features: while for periodic boundary conditions this might be a finite-temperature transition (probably similar to deconfinement in QCD), for twisted boundary conditions this is at most a crossover with respect to the combined length parameter ρ≡ N L_0.

Furthermore, in Section <ref> we use the Gradient Flow <cit.> to evolve the field configurations sampled by the Monte-Carlo process towards the saddle points of the classical action, and demonstrate that the resulting “almost classical” field configurations feature localized non-perturbative objects which resemble the uniton and fracton saddle points known for continuum PCM <cit.>. Twisted boundary conditions stabilize those saddle points and, as expected, lead to the emergence of effective topological sectors <cit.>. We also find that the geometric properties of non-perturbative objects strongly change in the region of the possible phase transition or crossover for both boundary conditions.

§ SIMULATION SETUP AND OBSERVABLES

The lattice action of the two-dimensional SU(N)× SU(N) PCM can be written as:

S[U] = -2β N ∑_{x,i} Re Tr [ U(x) U^†(x + î_i) ],

where β≡λ^-1 = 1/(g^2 N) is the inverse of the 't Hooft coupling λ and î_i is a unit lattice vector in direction i. We have used lattices of size L_0 × L_1 with boundary conditions (BC) defined as:

U(x_0 + L_0, x_1) = Ω_0 U(x_0, x_1) Ω_0^†, U(x_0, x_1 + L_1) = U(x_0, x_1),

where the matrix Ω_0 determines the type of boundary conditions: Ω_0 = I for periodic BC and Ω_0 = Ω for twisted BC. The twist matrix Ω has the following form:

Ω = e^{iπν/N} diag{ 1, e^{i 2π/N}, ⋯, e^{i 2π(N-1)/N} },

where ν = 0, 1 for N odd, even. It is easy to see that Ω satisfies the equation (<ref>).

We employed the standard Cabibbo-Marinari algorithm <cit.> in order to stochastically sample field configurations U(x) according to the Boltzmann weight exp(-S[U]) with the action (<ref>). One Monte-Carlo update of the field configuration was implemented by applying the heat bath algorithm to all SU(2) subgroups of all U(x) matrices. Each Monte-Carlo step was followed by an overrelaxation step <cit.> in order to decrease the auto-correlation time. Error analysis was carried out using jackknife and bootstrap techniques.
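As an illustration of the twist prescription (a minimal sketch; the function name twist_matrix is our own), the following snippet builds Ω for a given N, checks that it belongs to SU(N), and verifies the trace property above up to the overall SU(N) prefactor phase:

```python
import numpy as np

def twist_matrix(N):
    """Z_N 'clock' twist Omega in SU(N): the prefactor exp(i pi nu / N)
    (nu = 1 for even N, 0 for odd N) makes det(Omega) = 1."""
    nu = 1 if N % 2 == 0 else 0
    phases = np.exp(1j * (np.pi * nu / N + 2 * np.pi * np.arange(N) / N))
    return np.diag(phases)

N = 6
omega = twist_matrix(N)
assert np.isclose(np.linalg.det(omega), 1.0)  # Omega is in SU(N)

# |Tr Omega^n| = N if n = 0 mod N, and 0 otherwise
for n in range(1, 2 * N + 1):
    tr = np.trace(np.linalg.matrix_power(omega, n))
    if n % N == 0:
        assert np.isclose(abs(tr), N)   # = N up to the SU(N) prefactor phase
    else:
        assert np.isclose(abs(tr), 0.0)
print("twist matrix satisfies the Z_N trace property")
```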
In order to test our code, we have reproduced several data points from the previous PCM simulations of <cit.> with very high precision. We performed calculations for β = 0.332, N = 6, 9, 12, 18 and spatial lattice sizes L_1 = 108 and, for N = 18, L_1 = 200, which we found to be sufficiently large compared to the zero-temperature static correlation length, which takes values in the range ξ_0 = 10 … 12 (for different N) in our simulations. For each value of N and each type of boundary conditions we have simulated at multiple values of the compactification length L_0 in the interval 1 ≤ L_0 ≤ L_1, for both periodic and twisted boundary conditions. In order to study basic thermodynamic properties, we have computed the mean energy

E = 1 - (1/(4 N^2)) ∂F/∂β = 1 - (1/N) ⟨ Re Tr [ U(x) U^†(x + î_i) ] ⟩

and the specific heat

C = (1/N) dE/dg^2.

In order to compute the static correlation length we rely on the observation that in the weak-coupling regime, for values of the lattice momenta sufficiently small compared to the dynamically generated mass gap, the Fourier transform of the group-invariant correlation function G(x) can be approximated to good precision by the free scalar propagator <cit.>:

G̃(k) ≃ Z_G / ( M^2 + 4 sin^2(k_0/2) + 4 sin^2(k_1/2) ),

where Z_G is the wave-function renormalization constant. From this equation one obtains an expression for the static correlation length ξ in terms of the momentum-space correlator at the two lowest values of lattice momenta:

ξ^2 = 1/(4 sin^2(π/L)) ( G̃(0,0)/G̃(0,1) - 1 ).

Alternatively, the correlation length can be computed with the standard exponential fits of wall-wall correlators, and we have checked that the numerical values of ξ obtained in both ways agree with high precision.

Similarly to the two-dimensional lattice gauge theory <cit.>, lattice PCM undergoes a large-N phase transition from the strong- to the weak-coupling phase at β_c = 0.305 <cit.>, which in this case is a second-order transition at which the specific heat diverges and the distribution of eigenvalues λ = e^{iϕ} of the link products U(x) U^†(x + î_i) develops a gap. Continuum scaling of different observables such as the mass gap already sets in in the vicinity of the critical coupling β≳β_c <cit.>. For our simulations we have chosen the value β = 0.332, sufficiently deep in the weak-coupling phase. To check that with this value of the coupling we are sufficiently close to the continuum limit, we have observed the asymptotic scaling of the zero-temperature static correlation length ξ_0 (measured on lattices with L_0 = L_1 for N ≥ 9) in the so-called β_E scheme <cit.>:

ξ(E) = (0.991 e^{2-π/4} / (16π)) √E e^{π/E}.

While the constraints in CPU time available for our simulations have forced us to work at a single value of β, in future work it would also be desirable to check the continuum scaling of the correlation length and thermodynamic observables by performing simulations at several values of β.

In order to exclude a possible large-N phase transition to the physically irrelevant strong-coupling phase as the compactification length varies in the range L_0 = 1 … L_1, we have studied the angular distribution of eigenvalues of the link products U(x) U^†(x + î_i) and found that it remains gapped and almost unchanged for all directions i and boundary conditions at all values of L_0.
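The second-moment estimator for ξ above is easy to exercise on synthetic data: feeding it a correlator generated from the free-propagator form itself should return ξ ≈ 1/M. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def xi_second_moment(g00, g01, L):
    """Static correlation length from the two lowest lattice momenta:
    xi^2 = (G(0,0)/G(0,1) - 1) / (4 sin^2(pi/L))."""
    return np.sqrt((g00 / g01 - 1.0) / (4.0 * np.sin(np.pi / L) ** 2))

# Synthetic check: free lattice propagator with mass M on an L x L lattice,
# momenta k_i = 2 pi n_i / L so that sin^2(k_i/2) = sin^2(pi n_i / L)
L, M = 108, 0.1
def g_tilde(n0, n1):
    return 1.0 / (M**2 + 4*np.sin(np.pi*n0/L)**2 + 4*np.sin(np.pi*n1/L)**2)

xi = xi_second_moment(g_tilde(0, 0), g_tilde(0, 1), L)
print(xi, 1.0 / M)  # the estimator recovers xi = 1/M = 10 exactly
```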
This suggests that the finite-temperature transition which we discuss below has a different nature and seems to be a feature of the continuum theory.

In order to better visualize important properties of our physical observables we have normalized them with respect to the corresponding zero-temperature values at periodic boundary conditions and the same N:

O_0 ≡ O(L_0 = L_1, PBC),

where O is some physical observable, and depict the relative change rather than the value itself:

Δ O(L_0)/O_0 ≡ ( O(L_0) - O_0 ) / O_0.

This normalization is motivated by the fact that all observables admit non-trivial 1/N corrections which render a straightforward comparison at different N not very illustrative. Note also that the identical normalization allows us to conveniently compare the physics for both types of boundary conditions. We have also performed fits to the lowest-order 1/N expansion:

O(L_0, N) = Õ(L_0) + c_1 / N^2

in order to estimate infinite-N values Õ(L_0), which are also presented in our figures. The very high precision of the numerical data for the mean energy also allowed us to include terms of order 1/N^4 into the fitting function (<ref>) for this observable. In order to calculate the relative changes (<ref>) in the limit N →∞, we perform separate large-N extrapolations of the form (<ref>) both for O_0 ≡ O(L_0 = L_1, PBC) in the denominator and for the finite-temperature value O(L_0) in the numerator, which typically results in a smoother extrapolation. Furthermore, when plotting the data points as functions of the compactification length L_0, we express L_0 in units of the zero-temperature static correlation length ξ_0 calculated with the same N as the data. This rescaling should allow comparison of our current data with prospective simulations at other values of the coupling as well as with calculations for continuum PCM.

§ PERIODIC BOUNDARY CONDITIONS AND THE FINITE-TEMPERATURE “DECONFINEMENT” TRANSITION

To study a possible transition between the low-temperature and high-temperature regimes, which correspond to large and small compactification lengths L_0 with periodic boundary conditions, on Fig. <ref>(a) we first illustrate the dependence of the static correlation length ξ on L_0. At all values of N the correlation length stays almost constant for large values of L_0, then exhibits statistically significant growth in the range L_0/ξ_0 = 3 … 5, and finally decreases for smaller L_0, seemingly reaching some finite value at L_0 → 0. The position of the maximum of the correlation length slowly shifts to smaller values when N is increased. For the data points extrapolated to infinite N the peak lies approximately at L_0^c/ξ_0 ≈ 3.5. An important point to stress is that, in contrast to e.g. the ℂℙ^N-1 sigma model, for the principal chiral model the transition temperature remains finite in the large-N limit, as also indicated by the recent Diagrammatic Monte-Carlo simulations directly in the large-N limit <cit.>. This property is also expected for large-N gauge theories <cit.>.

At the same time, the maximal value of the peak shows a tendency to grow with N and its width seems to decrease. Extrapolations to the large-N limit using fits of the form (<ref>) at fixed values of L_0 suggest that the maximal value of ξ remains finite in the large-N limit. Furthermore, even upon the large-N extrapolation the enhancement of the correlation length appears to be very weakly pronounced: the relative change in ξ is of order of 5%. Simulations at larger volumes (L_1 = 200) also reveal a rather small (∼ 2 %) enhancement of ξ with volume, see the left plot on Fig. <ref>.
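The lowest-order 1/N extrapolation described above amounts to a linear least-squares fit in the variable 1/N^2; a minimal sketch (the observable values below are made up for illustration, not our measurements):

```python
import numpy as np

# Illustrative data: an observable measured at fixed L_0 for several N
ns = np.array([6, 9, 12, 18], dtype=float)
obs = np.array([0.455, 0.462, 0.465, 0.468])  # made-up values

# Least-squares fit to O(N) = O_inf + c1 / N^2
a = np.vstack([np.ones_like(ns), 1.0 / ns**2]).T
(o_inf, c1), *_ = np.linalg.lstsq(a, obs, rcond=None)
print(f"O(N -> inf) = {o_inf:.4f}, c1 = {c1:.3f}")
```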
The emergence of such a peak structure and the clearly distinct behaviour of the correlation length at small and large L_0 are suggestive of a finite-temperature phase transition or crossover. This transition also manifests itself in the distinct behaviour of thermodynamic observables at low and high temperatures. In particular, both the mean energy E(L_0) and the specific heat C take almost constant values when L_0 > L_0^c and then decrease in the region L_0 < L_0^c, see Fig. <ref> for illustration.

However, not much is known about the nature of this transition. The observed scaling with N and L_1 suggests that the finite-temperature transition is rather weak, but cannot completely distinguish between a weak phase transition and a crossover. A more detailed combined study of the finite-volume and finite-N scaling is required to make a definite conclusion about the order of this finite-temperature transition, which we leave for future work. A recent Diagrammatic Monte-Carlo study <cit.> in the N →∞ limit also indicated a relatively weak enhancement of the correlation length at the same critical value of L_0, but did not completely exclude the possibility of a divergent correlation length at the transition point.

On general grounds, in analogy with other asymptotically free theories one might expect a “deconfinement” phase transition associated with effective liberation of SU(N) degrees of freedom at sufficiently high temperature. For gauge theories, the deconfinement transition is typically associated with the breaking of the global Z_N^d center symmetry, with the Polyakov loop being the local order parameter. In contrast, for the principal chiral model even an approximate local order parameter which would allow one to distinguish the “confinement” and the “deconfinement” phases is not known. In principle, any kind of phase transition should result in a non-analytic behavior of the free energy ℱ∼ -ln𝒵, which for the deconfinement transition in PCM is expected to behave as <cit.>

lim_{N →∞} ℱ/N^2 ∼ 1, L_0 Λ≪ 1; 0, L_0 Λ≫ 1.

While a direct calculation of the free energy is nontrivial in Monte-Carlo simulations, first-order transitions typically result in a characteristic double-peak structure of the action probability distribution with unequal peak heights <cit.>. Our numerical data for the mean energy does not exhibit any double-peak structure, which disfavours the first-order phase transition scenario (although not excluding it completely, as one might need very high statistics to distinguish the two peaks).

In the context of large-N volume independence our results suggest that the correlation length, mean energy and specific heat do not depend on the lattice size as long as it is much larger than the typical correlation length, in close analogy with large-N gauge theories <cit.>. The deviations from volume independence only become significant in the vicinity of the transition point or crossover.

§ TWISTED BOUNDARY CONDITIONS AND THE TRANSITION TO ÜNSAL-DUNNE REGIME

We start the discussion of the principal chiral model with twisted boundary conditions (<ref>) by presenting our results for the static correlation length (<ref>) on Fig. <ref>(b). If we plot ξ as a function of L_0, we again clearly see two distinct regions separated by a peak of ξ: when L_0 is large the correlation length ξ(L_0) coincides with the zero-temperature value ξ_0 (<ref>). At intermediate values of L_0, ξ exhibits statistically significant growth, and in the region of small L_0 it decreases again, finally reaching some finite value.
Note that the maximal relative change of ξ is of the same order as in the case of periodic BC: approximately 5%. However, apart from the existence of the peak, the dependence of the correlation length on L_0 and N seems to be very different from the case of periodic BC. In particular, the peak height does not depend on N within statistical errors, and its position shifts to smaller L_0 as N is increased. The fits of the form (<ref>) have rather poor quality for data points with fixed L_0.

Let us now recall that due to the properties (<ref>) of the twist matrix the group-invariant correlation functions are periodic on the cylinder ℝ× S^1 with effective size of S^1

ρ≡ N L_0.

In other words, the twist effectively increases the volume accessible to the system by a factor of N and lowers the “temperature”, which is a pre-requisite for the twisted Eguchi-Kawai reduction <cit.>. This property suggests that one should compare the data for twisted and periodic boundary conditions by identifying the compactification lengths as N L_0^TBC = L_0^PBC.

Plotting the static correlation length as a function of ρ on the right plot on the top of Fig. <ref>, we observe a rather clear collapse of data points with different N towards a single curve, which features a peak at ρ_c/ξ_0 ≈ 9. At ρ > ρ_c we find good agreement with the data for periodic BC at zero temperature. Infinite-N extrapolations using the fits (<ref>) yield only very minor corrections to this picture. Increasing the spatial lattice size to L_1 = 200 at N = 18, we do not observe a significant enhancement of the correlation length beyond statistical errors, see the right plot on Fig. <ref>. We note here that the statistical errors in the correlation length appear to be larger for the twisted case, although we have almost two times more data points in this case. Unfortunately, the large statistical errors do not allow us to make a definite conclusion on whether the enhancement of the correlation length at L_1 = 200 is larger for periodic or for twisted boundary conditions. We can only say that for the twisted case the change in the correlation length cannot be much larger than for the periodic case.

As one can see from Fig. <ref>, with twisted boundary conditions the mean energy and the specific heat practically do not depend on the compactification length down to the values of L_0 which roughly correspond to the position of the peak on Fig. <ref>(b). In contrast to the case of periodic boundary conditions, at small L_0 the mean energy E(L_0) increases. The data for the specific heat C(L_0) has quite large statistical errors which probably do not allow us to see non-trivial behavior on lattices with L_0 > 1. Nevertheless, at the smallest compactification length L_0 = 1 we again observe an increase of C compared to zero temperature.

If we fix L_0 and fit the N dependence of Δ E/E_0 using (<ref>), it extrapolates to zero in the large-N limit for all values of L_0 which we consider (L_0 ≥ 2). This observation supports the volume independence property at ρ > ρ_c. On Fig. <ref> we also plot the mean energy as a function of ρ. In agreement with the volume independence property, for ρ > ρ_c the values of Δ E/E_0 are consistent with zero within statistical errors. However, in contrast to the correlation length, for smaller values of ρ the data points for the mean energy at different N do not collapse.
This suggests that for small values of ρ long-distance quantities such as the correlation length and local quantities such as the mean energy might exhibit different scaling with N.

To summarize, with twisted boundary conditions we also observe some signatures of a nontrivial transition between the regimes of small and large compactification lengths which is controlled by the combined parameter ρ≡ N L_0. Considered as a function of the compactification length L_0, this transition shifts to smaller and smaller L_0 as N is increased (see the left plot in the bottom of Fig. <ref>), eventually approaching the zero-radius limit at N →∞ and thus effectively disappearing. A similar behavior was found for the scale of dynamical symmetry breaking in gauge theories with unbroken center symmetry <cit.>. At ρ > ρ_c physical observables practically do not depend on the lattice size, as could be expected for twisted Eguchi-Kawai reduction. The most important differences with the finite-temperature transition considered in the previous Section <ref> are, first, the independence of the height of the peak in the correlation length on N, and, second, the growth of the mean energy at small ρ, along with its nontrivial scaling with N. We will discuss these differences in more detail in the concluding Section <ref>.

§ NON-PERTURBATIVE SADDLE POINTS

Non-perturbative saddle points of the action in the path integral are one of the cornerstones of the physical applications of resurgence theory. In particular, factorial divergences in perturbative series which characterize small field fluctuations around non-perturbative saddles cancel similar divergences in the perturbative expansion around the trivial vacuum saddle, thus allowing one to complete the so-called resurgent triangle. To ensure that resurgent trans-series of the twisted compactified PCM in the Ünsal-Dunne regime can be analytically continued to the low-temperature strongly coupled regime, it is also important to understand how the saddle points which dominate the path integral and enter the resurgent triangle change in the process of compactification from L_0 →∞ to L_0 ≪ 2 π (Λ N)^-1.

To study the features of the dominant saddle points in the path integral, we randomly select field configurations generated in Monte-Carlo simulations, and evolve them along the path of steepest descent towards one of the saddle points in their vicinity using the Gradient Flow equations <cit.>:

∂ U(x,τ)/∂τ = -(i/(β N)) ∇^a_x S[U(τ)] T_a U(x,τ), U(x, τ = 0) ≡ U(x),

where τ is the flow time and ∇^a_x is the SU(N) Lie derivative with respect to U(x):

∇^a_x f[U(τ)] = d/ds f[ e^{i s T_a δ_{x,y}} U(y,τ) ] |_{s → 0}.

Here the SU(N) group generators T^a are Hermitian and traceless matrices normalized as Tr(T_a T_b) = δ_{ab}. The advantage of using the Gradient Flow instead of other smoothing procedures such as smearing or cooling is that the Gradient Flow is continuous and reversible with respect to the flow time τ; therefore it can be considered as a well-defined change of variables in the partition function which preserves all physical information encoded in the initial field configuration and at the same time ensures that in terms of the flow-evolved variables U(x,τ) the partition function is dominated by smooth configurations <cit.>.

We have numerically solved equations (<ref>) using the Runge–Kutta discretization scheme described in <cit.> with the time step dτ = 0.1 and initial conditions U(x, τ=0) selected randomly from field configurations generated by the Monte-Carlo process.
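For the lattice action above with periodic boundary conditions one can work out (a derivation sketch under the stated normalization Tr(T_a T_b) = δ_{ab}) that the flow equation reduces to ∂_τ U(x) = -2i [H(x) - Tr H(x)/N] U(x) with H(x) = (M(x) - M^†(x))/(2i) and M(x) = U(x) V^†(x), where V(x) is the sum of the four nearest-neighbour matrices; note that the 1/(βN) prefactor cancels against the action. Below is a minimal first-order Euler sketch of one flow step (the simulations described above use a Runge–Kutta scheme instead; helper names are our own):

```python
import numpy as np
from scipy.linalg import expm

def flow_step(u, dtau):
    """One Euler step of the PCM gradient flow, periodic BC.
    u: complex array of shape (L0, L1, N, N) of SU(N) matrices."""
    v = (np.roll(u, -1, 0) + np.roll(u, 1, 0) +
         np.roll(u, -1, 1) + np.roll(u, 1, 1))      # sum of neighbours V(x)
    m = u @ np.conj(np.swapaxes(v, -1, -2))         # M(x) = U(x) V^dag(x)
    h = (m - np.conj(np.swapaxes(m, -1, -2))) / 2j  # H = (M - M^dag)/(2i), Hermitian
    n = u.shape[-1]
    tr = np.trace(h, axis1=-2, axis2=-1)[..., None, None]
    x = h - tr * np.eye(n) / n                      # traceless flow generator
    out = np.empty_like(u)
    for idx in np.ndindex(*u.shape[:2]):
        out[idx] = expm(-2j * dtau * x[idx]) @ u[idx]  # U <- e^{-2i dtau X} U
    return out

def random_sun(rng, n):
    """Haar-like random SU(N) matrix via QR of a complex Gaussian."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))       # fix column phases
    return q / np.linalg.det(q) ** (1.0 / n)        # project U(N) -> SU(N)

rng = np.random.default_rng(0)
L0, L1, N = 8, 8, 3
u = np.array([[random_sun(rng, N) for _ in range(L1)] for _ in range(L0)])
for _ in range(10):
    u = flow_step(u, 0.1)  # the action decreases monotonically along the flow
```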
We have continued the Gradient Flow up to the final flow time τ_f = 1.5 × 10^3. In order to characterize the features of smoothed configurations we have considered the total action S given by (<ref>), as well as the local action density

S(x, τ) = β N ∑_{± i} ( N - Re Tr [ U(x, τ) U^†(x ± î_i, τ) ] ),

normalized such that it is zero for the vacuum configuration with U(x) = I.

Since on a finite lattice the continuum saddle points such as unitons and fractons <cit.> are only approximate solutions to the saddle point equations, during the Gradient Flow evolution they appear as meta-stable states which are eventually destroyed. Nevertheless, within characteristic plateaus and at not very large flow times the smoothed fields are expected to properly reflect the basic properties of the continuum saddle points (such as e.g. Z_N-valued holonomies or phases, or topology in gauge theories). For instance, point-like objects should appear in the profile of the action density as pronounced lumps on a smooth background. For very large flow times and in a finite volume this correspondence between smoothed fields and continuum saddle points is lost, since the Gradient Flow can be considered as a diffusion process which strongly entangles all degrees of freedom and spreads them uniformly over the lattice.

The fracton and uniton saddles which appear in the path integral of the continuum two-dimensional PCM do not have an intrinsic topological structure due to the fact that π_2[SU(N)] = 0, in contrast to QCD and the non-linear ℂℙ^N-1 model. The absence of a topological charge makes lattice studies of these non-perturbative objects more difficult and leaves the local action density (<ref>) as the only scalar field which can characterize them in a simple and universal way.

In order to characterize the localization of the action density for saddle point solutions, we have used the inverse participation ratio (IPR):

IPR(τ) = V ⟨ ∑_x S̃^2(x, τ) / ( ∑_x S̃(x, τ) )^2 ⟩, S̃(x, τ) = S(x, τ) - min_x S(x, τ),

where S̃(x, τ) is the action density with the constant background subtracted, V = L_0 L_1 is the lattice volume, and averaging over smoothed field configurations at the same flow time is implied. By construction, this quantity takes the maximal value IPR = V when the action density is localized on a single lattice site and reaches the minimal value IPR = 1 when it is everywhere constant. A very useful property of the IPR is that it scales as 1/n if there are n similar localized objects in the action density. In general, it gives the inverse fraction of the volume occupied by the support of S(x, τ); thus it can serve as a measure of action density localization.

To present our results for periodic boundary conditions, on Fig. <ref> we first plot a typical dependence of the total action S[U(τ)] of smoothed field configurations U(x, τ) on the Gradient Flow time τ, where several different lines represent independent Gradient Flows with different initial conditions U(x, τ=0) ≡ U(x). We observe that the total action starts from large values at the initial moment of time and then rapidly decreases down to zero approximately at τ≈ 1.5 × 10^3. For some initial conditions, the decay of the action with the flow time becomes somewhat slower in the range τ = (0.4 … 0.8) × 10^3, so that sometimes a kind of “plateau” is formed.

Typical profiles of the action density of the smoothed fields taken at the characteristic “plateau” time τ = 0.5 × 10^3 are presented on Fig. <ref>(a) and Fig. <ref>(c) for large and small compactification lengths L_0, respectively.
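The defining properties of the IPR — IPR = V for a single-site lump and IPR = V/n for n identical lumps — can be checked directly on synthetic action-density profiles (a minimal sketch; note that an exactly constant profile is degenerate after background subtraction):

```python
import numpy as np

def ipr(action_density):
    """IPR of a 2D action-density profile with the constant background
    subtracted: IPR = V * sum(s^2) / (sum s)^2."""
    s = action_density - action_density.min()
    return s.size * np.sum(s**2) / np.sum(s)**2

spike = np.zeros((16, 16)); spike[3, 5] = 1.0
assert np.isclose(ipr(spike), spike.size)      # single-site lump: IPR = V

two = np.zeros((16, 16)); two[2, 2], two[9, 12] = 1.0, 1.0
assert np.isclose(ipr(two), two.size / 2)      # n equal lumps: IPR = V / n
```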
For large compactification length L_0 the action density indicates the presence of point-like objects which manifest themselves as pronounced action lumps with a size smaller than the length of the compact direction, whereas at compactification lengths smaller than the critical length, as defined by the enhancement of the static correlation length, the saddle points become effectively flat along the compact direction.

Presumably, the particle-like objects which we observe in large volumes can be associated with unitons, the well-known unstable saddle points of the PCM. Unitons are harmonic maps S^2 → SU(N) <cit.>, where S^2 is obtained from ℝ^2 by including the point at infinity. Uniton solutions have an action which is quantized in units of

S_u = 8π/g^2 = 8 πβ N,

even in the absence of any well-defined topological charge. Within characteristic plateaus during the Gradient Flow evolution, the action is clearly proportional to N (see Fig. <ref> for illustration), and agrees with 8 πβ N (8 πβ = 8.34 for our β = 0.332) within approximately 30% uncertainty. In fact, one cannot expect much better agreement due to the non-zero contribution of ultraviolet fluctuations and the renormalization of the coupling β along the flow time. Here we do not consider the effect of this renormalization, since numerical extraction of the renormalized β would be technically quite complicated and is beyond the scope of this work. The unstable nature of the uniton saddles qualitatively agrees with the relatively short “life-time” of the non-perturbative objects which our Gradient Flow study reveals in the path integral of the PCM.

We further illustrate the dependence of the IPR (<ref>) on N and L_0 with periodic boundary conditions on Fig. <ref>(a). The Gradient Flow time is fixed to τ = 0.7 × 10^3. In the low-temperature region the IPR features a rather wide peak, and takes larger values as compared to the high-temperature phase. The presence of the peak suggests that non-perturbative objects become more localized at these values of L_0. The location of the maximum strongly depends on N and the flow time: we find that for larger flow times the peak moves towards larger L_0, with the shift being stronger for smaller N, which suggests that it is not directly related to the possible “deconfinement” transition discussed in Section <ref>.

Let us now turn to the Gradient Flow analysis of field configurations with twisted boundary conditions. We find that in a large volume the picture is similar to the one for periodic boundary conditions. For the Ünsal-Dunne regime at small L_0 we present the total action of smoothed field configurations as a function of the Gradient Flow time on Fig. <ref>, where we observe an important difference: there appear a number of well-separated and very stable plateaus in the dependence of the action on the flow time, with very few transitions between them. With rather good precision the action on these plateaus appears to be linearly proportional to the plateau number, S_p(n) = n S_p(n = 1), which hints at the emergence of effectively stable non-perturbative saddle points with quantized action. Since for Monte-Carlo configurations the number of non-perturbative objects, and hence plateaus, which we observe is typically random, on Fig.
<ref> we also plot some extrapolated plateau values for larger n.

The emergence of this new type of non-perturbative objects for twisted boundary conditions could be expected, since the twist introduces a non-trivial potential on the SU(N) manifold, effectively reducing it to the maximal torus U(1)^N-1 at energies smaller than 1/(L_0 N). This potential has isolated minima with associated tunneling events between them, which should appear as stable saddle points in the Euclidean semi-classical description, where stability is ensured by winding on the maximal torus <cit.>. These winding numbers are responsible for the emergent topological structure and the stability of non-perturbative saddles in the Ünsal-Dunne regime.

If we follow the twisted Eguchi-Kawai reduction prescription and identify the compactification lengths with periodic and twisted boundary conditions as L_0^PBC = N L_0^TBC ≡ ρ, then from Fig. <ref> we find that the lowest plateau action in the twisted case is approximately N times smaller than the uniton plateau action. Taking into account that the uniton action is proportional to N, one can conclude that the lowest plateau action is independent of N, with

S_p^TBC(N L_0) = S_p^PBC(L_0) / N

at fixed L_0. This suggests the identification of these plateaus in the action with fracton saddle points, which are expected to carry the action

S_f = S_u / N = 8 πβ

and non-trivial Kaluza-Klein (KK) momentum ξ_KK = -2 π k / L_0 <cit.>. Counterparts of these stable fracton saddle points are well known in the twisted ℂℙ^N-1 model <cit.>: while in the large-volume limit the details of the boundary conditions should be irrelevant for instantons, at compactification lengths smaller than the size of an instanton it splits up into N fracton constituents which carry fractional topological charge proportional to 1/N. Solutions of the ℂℙ^N-1 equations of motion can be lifted to solutions of the PCM equations of motion; therefore, applying this procedure to a ℂℙ^N-1 instanton in the compactified Ünsal-Dunne regime yields PCM unitons fractionalized into N effectively stable constituents <cit.>.

Typical action density profiles of the non-perturbative objects revealed by the Gradient Flow are given on Fig. <ref>(b) and Fig. <ref>(d) for large and small compactification lengths, respectively. As expected, we find that in a large volume the non-perturbative objects are very similar to those with periodic boundary conditions. In contrast, saddles in the Ünsal-Dunne regime with twisted boundary conditions are characterized by a much larger action than in the case of periodic boundary conditions with the same L_0, and are quite strongly localized.

However, we could not clearly observe the predicted fractionalization of unitons into N fractons in the Ünsal-Dunne regime. Rather, the maximal number of peaks which we have found in smoothed configurations has never exceeded ∼ 3, regardless of N. This is not a contradiction as such, however, since the number of peaks in the action density of smoothed configurations does not necessarily coincide with the number of fractional constituents of non-perturbative saddle points. With an unimproved action, these constituents typically attract each other and eventually merge during the smoothing process <cit.>, sometimes before they become visible in the background of ultraviolet fluctuations.

The dependence of the IPR (<ref>) of the non-perturbative saddle points with twisted boundary conditions is illustrated on Fig. <ref>(b).
The IPR exhibits a rather sharp peak at intermediate values of ρ = N L_0, with the peak height being approximately independent of N. In contrast to the periodic case, this peak moves approximately to the position of the peak in the static correlation length and at the same time becomes smaller and narrower as the flow time becomes larger. This coincidence of the peaks in the IPR and the static correlation length might indicate some nontrivial rearrangement of non-perturbative objects in the process of transition to the Dunne-Ünsal regime. We note, however, that even for the simplest model of an ideal gas of extended non-perturbative objects the IPR might exhibit non-monotonic behavior, which is just a reflection of the competition of two scales: the lattice size and the characteristic size of the non-perturbative objects. Thus, while the sharp peak in the IPR on Fig. <ref>(b) might be an indication of some nontrivial transition in the structure of non-perturbative saddles, this indication should not be considered conclusive.

§ CONCLUSIONS

In this paper we have studied possible signatures of a crossover or a phase transition between the regimes of small and large compactification lengths L_0 for the two-dimensional SU(N)× SU(N) principal chiral model (PCM), both with periodic and with twisted boundary conditions. By analogy with other asymptotically free field theories one expects some kind of “deconfinement” transition for periodic boundary conditions <cit.>. According to the adiabatic continuity conjecture, the twist is expected to eliminate this transition <cit.>, so that the regimes of small and large compactification lengths can be analytically related.

In the absence of well-defined local order parameters such as e.g. the Polyakov loop, we have considered universal physical observables which can characterize phase transitions regardless of the symmetries of the system: the mean energy, specific heat and static correlation length. We have found that for both types of boundary conditions these quantities behave in a way which is compatible with the signatures of a rather weak crossover or a phase transition: the mean energy and specific heat exhibit monotonic growth/decrease with the compactification length once it is sufficiently small, and the static correlation length is enhanced near some “critical” compactification length.

An important difference between the two boundary conditions is that for periodic boundary conditions the peak in the correlation length becomes somewhat higher and narrower as the SU(N) rank N is increased. It also becomes slightly higher for larger lattice volumes. Since the large-N limit can also be considered as a thermodynamic limit within the range of validity of the Eguchi-Kawai reduction, this behavior suggests that PCM with periodic boundary conditions might indeed feature a finite-temperature phase transition, at least in the large-N limit.

In contrast, for twisted boundary conditions the shape of the peak in the static correlation length is independent of N, once the data is considered as a function of the combined length parameter ρ = N L_0. The dependence of the peak height on the spatial lattice size also cannot be distinguished within statistical errors. This behaviour is not typical for a phase transition, but can still be compatible with a weak crossover.
If true, the crossover scenario would be a challenge for the continuity conjecture, since the phases separated by the crossover typically cannot be analytically related to each other (a classical example is the Berezinskii–Kosterlitz–Thouless transition).

By using the Gradient Flow, we have also studied the structure of the non-perturbative saddle points which dominate the path integral of the PCM with both boundary conditions. We have found localized non-perturbative objects with the properties expected for unitons, the unstable saddle points of the continuum PCM. In particular, these objects have a quantized action which scales linearly with N. As expected, they also become effectively stable for twisted boundary conditions, thus exhibiting the phenomenon of emergent topology <cit.>. We also find that for twisted boundary conditions the geometric properties of non-perturbative saddles change precisely at the position of the possible crossover to the Dunne-Ünsal regime, which is yet another argument that this crossover might be non-trivial.

This work was supported by the S. Kowalevskaja award from the Alexander von Humboldt Foundation. We thank F. Bruckmann, A. Cherman, A. Dromard, G. Dunne, P. Orland, T. Sulejmanpasic, M. Unsal and A. Zhitnitsky for useful discussions. We are also indebted to A. Cherman, G. Dunne, T. Sulejmanpasic and M. Unsal for helpful comments on this manuscript.

References

[1] G. V. Dunne and M. Unsal, Phys. Rev. D 87, 025015 (2013), arXiv:1210.3646.
[2] G. V. Dunne and M. Unsal, JHEP 1211, 170 (2012), arXiv:1210.2423.
[3] A. Cherman, D. Dorigoni, G. V. Dunne, and M. Unsal, Phys. Rev. Lett. 112, 021601 (2014), arXiv:1308.0127.
[4] A. Cherman, D. Dorigoni, and M. Unsal, JHEP 1510, 056 (2015), arXiv:1403.1277.
[5] G. V. Dunne and M. Unsal, JHEP 1509, 199 (2015), arXiv:1505.07803.
[6] T. Eguchi and H. Kawai, Phys. Rev. Lett. 48, 1063 (1982).
[7] A. Gonzalez-Arroyo and M. Okawa, Phys. Lett. B 120, 174 (1983).
[8] A. Gonzalez-Arroyo and M. Okawa, JHEP 07, 043 (2010), arXiv:1005.1981.
[9] G. Bhanot, U. M. Heller, and H. Neuberger, Phys. Lett. B 113, 47 (1982).
[10] R. Narayanan and H. Neuberger, Phys. Rev. Lett. 91, 081601 (2003), arXiv:hep-lat/0303023.
[11] J. Kiskis, R. Narayanan, and H. Neuberger, Phys. Lett. B 574, 65 (2003), arXiv:hep-lat/0308033.
[12] T. Azeyanagi, M. Hanada, T. Hirata, and T. Ishikawa, JHEP 0801, 025 (2008), arXiv:0711.1925.
[13] M. Unsal and L. G. Yaffe, Phys. Rev. D 78, 065035 (2008), arXiv:0803.0344.
[14] P. Kovtun, M. Unsal, and L. G. Yaffe, JHEP 0706, 019 (2007), arXiv:hep-th/0702021.
[15] T. Sulejmanpasic, Phys. Rev. Lett. 118, 011601 (2017), arXiv:1610.04009.
[16] P. Orland, Phys. Rev. D 84, 105005 (2011), arXiv:1108.0058.
[17] P. Orland, Phys. Rev. D 86, 045023 (2012), arXiv:1205.1763.
[18] A. C. Cubero and P. Orland, Phys. Rev. D 88, 025044 (2013), arXiv:1306.1930.
[19] P. Orland, Phys. Rev. D 90, 125038 (2014), arXiv:1410.2627.
[20] E. Katzav and P. Orland, Phys. Rev. D 94, 065041 (2016), arXiv:1608.00262.
[21] A. C. Cubero, Phys. Rev. D 91, 105025 (2015), arXiv:1503.06139.
[22] M. Lüscher, JHEP 1008, 071 (2010), arXiv:1006.4518.
[23] N. Cabibbo and E. Marinari, Phys. Lett. B 119, 387 (1982).
[24] S. L. Adler, Phys. Rev. D 23, 2901 (1981).
[25] P. Rossi and E. Vicari, Phys. Rev. D 49, 6072 (1994), arXiv:hep-lat/9401029.
[26] D. J. Gross and E. Witten, Phys. Rev. D 21, 446 (1980).
[27] P. Rossi and E. Vicari, Phys. Rev. D 49, 1621 (1994), arXiv:hep-lat/9307014.
[28] M. Campostrini, P. Rossi, and E. Vicari, Phys. Rev. D 52, 395 (1995), arXiv:hep-lat/9412102.
[29] P. V. Buividovich and A. Davody, “Diagrammatic Monte-Carlo study of the convergent weak-coupling expansion for the large-N U(N)xU(N) principal chiral model”, submitted to Phys. Rev. D (2017), arXiv:1705.03368.
[30] B. Lucini, M. Teper, and U. Wenger, JHEP 0502, 033 (2005), arXiv:hep-lat/0502003.
[31] G. Bhanot, T. Lippert, K. Schilling, and P. Ueberholz, Nucl. Phys. B 378, 633 (1992).
[32] M. Unsal, Phys. Rev. Lett. 102, 182002 (2009), arXiv:0807.0466.
[33] K. Uhlenbeck, J. Differential Geom. 30, 1 (1989).
[34] W. Brendel, F. Bruckmann, L. Janssen, A. Wipf, and C. Wozar, Phys. Lett. B 676, 116 (2009), arXiv:0902.2328.
[35] E. Ilgenfritz, M. Müller-Preussker, and D. Peschka, Phys. Rev. D 71, 116003 (2005), arXiv:hep-lat/0503020.
http://arxiv.org/abs/1706.08954v1
{ "authors": [ "P. V. Buividovich", "S. N. Valgushev" ], "categories": [ "hep-lat", "hep-th" ], "primary_category": "hep-lat", "published": "20170627173452", "title": "Lattice study of continuity and finite-temperature transition in two-dimensional SU(N) x SU(N) Principal Chiral Model" }
We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward v_a and can choose an amount x_a to pass on to the principal (keeping v_a - x_a for itself). All non-pulled arms get reward 0. Each strategic arm tries to maximize its own utility over the course of T rounds. Our goal is to design an algorithm for the principal incentivizing these arms to pass on as much of their private rewards as possible. When private rewards are stochastically drawn each round (v_a^t ← D_a), we show that:

* Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: For all algorithms that guarantee low regret in an adversarial setting, there exist distributions D_1,…,D_k and an approximate Nash equilibrium for the arms where the principal receives reward o(T).

* Still, there exists an algorithm for the principal that induces a game among the arms where each arm has a dominant strategy. When each arm plays its dominant strategy, the principal sees expected reward μ'T - o(T), where μ' is the second-largest of the means 𝔼[D_a]. This algorithm maintains its guarantee if the arms are non-strategic (x_a = v_a), and also if there is a mix of strategic and non-strategic arms.

§ INTRODUCTION

Classically, algorithms for problems in machine learning assume that their inputs are drawn either stochastically from some fixed distribution or chosen adversarially. In many contexts, these assumptions do a fine job of characterizing the possible behavior of problem inputs. Increasingly, however, these algorithms are being applied to contexts (ad auctions, search engine optimization, credit scoring, etc.) where the quantities being learned are controlled by rational agents with external incentives. To this end, it is important to understand how these algorithms behave in strategic settings.

The multi-armed bandit problem is a fundamental decision problem in machine learning that models the trade-off between exploration and exploitation, and is used extensively as a building block in other machine learning algorithms (e.g. reinforcement learning). A learner (whom we refer to as the principal) is a sequential decision maker who at each time step t must decide which of k arms to `pull'. Pulling this arm bestows a reward (either adversarially or stochastically generated) on the principal, and the principal would like to maximize his overall reward. Known algorithms for this problem guarantee that the principal can do approximately as well as the best individual arm.

In this paper, we consider a strategic model for the multi-armed bandit problem where each arm is an individual strategic agent and each round one arm is pulled by an agent we refer to as the principal. Each round, the pulled arm receives a private reward v ∈ [0,1] and then decides what amount x of this reward gets passed on to the principal (upon which the principal receives utility x and the arm receives utility v-x). Each arm therefore has a natural tradeoff between keeping most of its reward for itself and passing on the reward so as to be chosen more frequently.
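To fix ideas, here is a minimal simulation sketch of this per-round interaction (the greedy principal and the uniform reward distributions below are purely illustrative assumptions, not strategies analyzed in this paper): each round the principal picks an arm, the pulled arm privately draws v, passes on some x ≤ v, and keeps v - x.

```python
import random

def play(T, choose, strategies, dists):
    """Strategic bandit: each round the principal picks arm a, the arm draws
    a private v ~ dists[a], passes on x = strategies[a](v), and keeps v - x.
    The principal observes only the pair (a, x)."""
    hist, principal = [], 0.0
    arms = [0.0] * len(dists)
    for _ in range(T):
        a = choose(hist)
        v = dists[a]()
        x = min(strategies[a](v), v)
        principal += x
        arms[a] += v - x
        hist.append((a, x))
    return principal, arms

# Two illustrative arms: arm 0 passes everything on, arm 1 keeps everything.
dists = [lambda: random.uniform(0.6, 1.0), lambda: random.uniform(0.7, 0.9)]
strategies = [lambda v: v, lambda v: 0.0]

def greedy(hist):
    """Naive principal: alternate at first, then pick the best average x."""
    if len(hist) < 10:
        return len(hist) % 2
    tot, cnt = [0.0, 0.0], [0, 0]
    for a, x in hist:
        tot[a] += x; cnt[a] += 1
    return max(range(2), key=lambda i: tot[i] / max(cnt[i], 1))

print(play(10_000, greedy, strategies, dists))
```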
Our goal is to design mechanisms for the principal which simultaneously learn which arms are valuable while also incentivizing these arms to pass on most of their rewards. This model captures a variety of dynamic agency problems, where at each time step the principal must choose to employ one of K agents to perform actions on the principal's behalf, where the agent's cost of performing that action is unknown to the principal (for example, hiring one of K contractors to perform some work, or hiring one of K investors with external information to manage some money). In this sense, this model can be thought of as a multi-agent generalization of the principal-agent problem in contract theory (see Section <ref> for references). The model also captures, for instance, the interaction between consumers (as the principal) and many sellers deciding how steep a discount to offer the consumers - higher prices now lead to immediate revenue, but offering better discounts than your competitors will lead to future sales. In all domains, our model aims to capture settings where the principal has little domain-specific or market-specific knowledge, and can really only process the reward they get for pulling an arm and not any external factors that contributed to that reward. §.§ Our results §.§.§ Low-regret algorithms are far from strategyproof Many algorithms for the multi-armed bandit problem are designed to work in worst-case settings, where an adversary can adaptively decide the value of each arm pull. Here, algorithms such as EXP3 (<cit.>) guarantee that the principal receives almost as much as if he had only pulled the best arm. Formally, such algorithms guarantee that the principal experiences at most O(√(T)) regret over T rounds compared to any algorithm that only plays a single arm (when the adversary is oblivious).Given these worst-case guarantees, one might naively expect low-regret algorithms such as EXP3 to also perform well in our strategic variant. It is important to note, however, that single arm strategies perform dismally in this strategic setting; if the principal only ever selects one arm, the arm has no incentive to pass along any surplus to the principal. In fact, we show that the objectives of minimizing adversarial regret and performing well in this strategic variant are fundamentally at odds. Let M be a low-regret algorithm for the classic multi-armed bandit problem with adversarially chosen values. Then there exists an instance of the strategic multi-armed bandit problem and an o(T)-Nash equilibrium for the arms where a principal running M receives at most o(T) revenue.Here we assume the game is played under a tacit observational model, meaning that arms can only observe which arms get pulled by the principal, but not how much value they give to the principal. In the explicit observational model, where arms can see both which arms get pulled and how much value they pass on, even stronger results hold. Let M be a low-regret algorithm for the classic multi-armed bandit problem with adversarially chosen values. 
Then there exists an instance of the strategic multi-armed bandit problem in the explicit observational model along with an o(T)-Nash equilibrium for the arms where a principal running M receives zero revenue.

While not immediately apparent from the above claims, these instances where low-regret algorithms fail are far from pathological; in particular, there is a problematic equilibrium for any instance where arm i receives a fixed reward v_i each round it is pulled, as long as the gap between the largest and second-largest v_i is not too large (roughly 1/#arms). The driving cause behind both results is possible collusion between the arms (similar to collusion that occurs in the setting of repeated auctions, see <cit.>). For example, consider a simple instance of this problem with two strategic arms, where arm 1 always gets private reward 1 if pulled and arm 2 always gets private reward 0.8. In this example, we also assume the principal is using the algorithm EXP3. By always reporting some value slightly larger than 0.8, arm 1 can incentivize the principal to almost always pull it in the long run. This gains arm 1 roughly 0.2 utility per round (and arm 2 nothing). On the other hand, if arm 1 and arm 2 never pass along any surplus to the principal, they will likely be played equally often, gaining arm 1 roughly 0.5 utility per round and arm 2 roughly 0.4 utility per round.

To show such a market-sharing strategy works for general low-regret algorithms, much more work needs to be done. The arms must be able to enforce an even split of the principal's pulls (as soon as the principal starts lopsidedly pulling one arm more often than the others, the remaining arms can defect and start reporting their full value whenever pulled). As long as the principal guarantees good performance in the non-strategic adversarial case (achieving o(T) regret), we show that the arms can (at o(T) cost to themselves) cooperate so that they are all played equally often.

§.§.§ Mechanisms for strategic arms with stochastic values

We next show that, in certain settings, it is in fact possible for the principal to extract positive value from the arms each round. We consider a setting where each arm i's reward when pulled is drawn independently from some distribution D_i with mean μ_i (known to arm i but not to the principal). In this case the principal can extract the value of the second-best arm. In the statement below, we are using the term "truthful mechanism" quite loosely as shorthand for "strategy that induces a game among the arms where each arm has a dominant strategy."

Let μ' be the second largest mean amongst the set of μ_i's. Then there exists a truthful mechanism for the principal that guarantees revenue at least μ'T - o(T) when arms use their dominant strategies.

The mechanism in Theorem <ref> is a slight modification of the second-price auction strategy adapted to the multi-armed bandit setting. The principal begins by asking each arm i for its mean μ_i, where we incentivize arms to answer truthfully by compensating arms according to a proper scoring rule. For the remainder of the rounds, the principal then asks the arm with the highest mean to give him the second-largest mean worth of value per round. If this arm fails to comply in any round, the principal avoids picking this arm for the remainder of the rounds. (A more detailed description of the mechanism can be seen in Mechanism <ref> in Section <ref>.)
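Since Mechanism <ref> itself is only referenced here, the following Python sketch is our reconstruction of its structure from the description above: elicit a mean report from each arm once, extract the second-highest report from the best arm until it refuses, and then reward truthful reports with "free" bonus pulls scaled by a logarithmic scoring rule. The arms[i].pull(demand) interface and the exact accounting of rounds are simplifications of ours, not the paper's formal definition.

```python
import math

def second_price_style_mechanism(arms, T, u):
    """Sketch (assumes K >= 2 and T large enough that R below is positive, which
    holds when u = o(T/K)). arm.pull(demand) is a hypothetical interface
    returning the amount the arm passes on this round."""
    K = len(arms)
    # Step 1: pull each arm once; interpret its report as its claimed mean.
    reports = [arm.pull(demand=None) for arm in arms]
    i_star = max(range(K), key=lambda i: reports[i])
    w2 = sorted(reports)[-2]  # second-highest reported mean
    # Number of bonus pulls per arm, set by a logarithmic scoring rule.
    bonus = [max(int(u + math.log(w)), 0) if w > 0 else 0 for w in reports]
    revenue = sum(reports)
    # Step 2: demand w2 from the top arm each round; stop trusting it on refusal.
    R = T - K - sum(bonus) - 1
    for _ in range(R):
        paid = arms[i_star].pull(demand=w2)
        revenue += paid
        if paid < w2:
            break
    # Step 3: bonus rounds, in which the principal expects nothing back.
    for i, b in enumerate(bonus):
        for _ in range(b):
            arms[i].pull(demand=0.0)
    return revenue
```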
In addition, we show that the performance of this mechanism is as good as possible in this setting; no mechanism can do better than the second-best arm in the worst case (Lemma <ref>).

We further show how to adapt this mechanism to the setting where some arms are strategic and some arms are non-strategic (and our mechanism does not know which arms are which).

Let μ_s be the second largest mean amongst the means of the strategic arms, and let μ_n be the largest mean amongst the means of the non-strategic arms. Then there exists a truthful mechanism for the principal that guarantees (with probability 1-o(1/T)) revenue at least max(μ_s, μ_n)T - o(T) when arms use their dominant strategies.

A detailed description of the modified mechanism can be found in Mechanism <ref> in Section <ref>.

§.§ Related work

The study of classical multi-armed bandit problems was initiated by <cit.>, and has since grown into an active area of study. The most relevant results for our paper concern the existence of low-regret bandit algorithms in the adversarial setting, such as the EXP3 algorithm (<cit.>), which achieves regret Õ(√(KT)). Other important results in the classical setting include the upper confidence bound (UCB) algorithm for stochastic bandits (<cit.>) and the work of <cit.> for Markovian bandits. For further details about multi-armed bandit problems, see the survey <cit.>.

One question that arises in the strategic setting (and other adaptive settings for multi-armed bandits) is what the correct notion of regret is; standard notions of regret guarantee little, since the best overall arm may still have a small total reward. <cit.> considered the multi-armed bandit problem with an adaptive adversary and introduced the quantity of "policy regret", which takes the adversary's adaptiveness into account. They showed that any multi-armed bandit algorithm will get Ω(T) policy regret. This indicates that it is not enough to treat strategic behaviors as an instance of adaptively adversarial behavior; good mechanisms for the strategic multi-armed bandits problem must explicitly take advantage of the rational self-interest of the arms.

Our model bears some similarities to the principal-agent problem of contract theory, where a principal employs a more informed agent to make decisions on behalf of the principal, but where the agent may have incentives misaligned from the principal's interests when it gets private savings (for example <cit.>). For more details on the principal-agent problem, see the book <cit.>. Our model can be thought of as a sort of multi-armed version of the principal-agent problem, where the principal has many agents to select from (the arms) and can try to use competition between the agents to align their interests with the principal's.

Our negative results are closely related to results on collusion in repeated auctions. Existing theoretical work <cit.> has shown that collusive schemes exist in repeated auctions in many different settings, e.g., with/without side payments, with/without communication, with finite/infinite type spaces. In some settings, efficient collusion can be achieved, i.e., bidders can collude to allocate the good to the bidder who values it the most and leave the seller with asymptotically zero revenue. Even without side payments and communication, <cit.> showed that tacit collusion exists and can achieve asymptotic efficiency with a large cartel. Our truthful mechanism implicitly uses a proper scoring rule <cit.>.
In general, scoring rules are used to assess the accuracy of a probabilistic prediction. In our mechanisms, we use a logarithmic scoring rule to incentivize arms to truthfully report their average rewards.

Our setting is similar to settings considered in a variety of work on dynamic mechanism design, often inspired by online advertising. <cit.> considers the problem where a buyer wants to buy a stream of goods with an unknown value from two sellers, and examines Markov perfect equilibria in this model. <cit.> study truthful pay-per-click auctions where the auctioneer wishes to design a truthful mechanism that maximizes the social welfare. <cit.> consider the scenario where the principal cannot directly choose which arm to pull, and instead must incentivize a stream of strategic players to prevent them from acting myopically. <cit.> consider a setting where a seller repeatedly sells to a buyer with unknown value distribution, but the buyer is more heavily discounted than the seller. <cit.> develops a general method for finding optimal mechanisms in settings with dynamic private information. <cit.> develops an ex ante efficient mechanism for the Cost-Per-Action charging scheme in online advertising.

§.§ Open Problems and Future Directions

We are far from understanding the complete picture of multi-armed bandit problems in strategic settings. Many questions remain, both in our model and related models. One limitation of our negative results is that they only show there exists some 'bad' approximate Nash equilibrium for the arms, i.e., one where any low-regret principal receives little revenue. This, however, says nothing about the space of all approximate Nash equilibria. Does there exist a low-regret mechanism for the principal along with an approximate Nash equilibrium for the arms where the principal extracts significant utility? An affirmative answer to this question would raise hope for the possibility of a mechanism that can perform well in both the adversarial and strategic setting, whereas a negative answer would strengthen our claim that these two settings are fundamentally at odds.

One limitation of our positive results is that all of the learning takes place at the beginning of the protocol, and is deferred to the arms themselves. As a result, our mechanism fails in cases where the arms' distributions can change over time. Is it possible to design good mechanisms for such settings? Ideally, any good mechanism should learn the arms' values continually throughout the T rounds, but this seems to open up the possibility of collusion between the arms.

Throughout this paper, whenever we consider strategic bandits we assume their rewards are stochastically generated. Can we say anything about strategic bandits with adversarially generated rewards? The issue here seems to be defining what a strategic equilibrium is in this case: arms need some underlying priors to reason about their future expected utility. One possibility is to consider what happens when the arms all play no-regret strategies with respect to some broad class of strategies.

Finally, there are other quantities one may wish to optimize instead of the utility of the principal. For example, is it possible to design an efficient principal, who almost always picks the best arm (even if the arm passes along little to the principal)?
Theorem <ref> implies the answer is no if the principal also has to be efficient in the adversarial case, but are there other models where we can answer this question affirmatively?

§ PRELIMINARIES

§.§ Classic Multi-Armed Bandits

We begin by reviewing the definition of the classic multi-armed bandits problem and associated quantities. In the classic multi-armed bandit problem a learner (the principal) chooses one of K choices (arms) per round, over T rounds. On round t, the principal receives some reward v_i,t ∈ [0, 1] for pulling arm i. The values v_i,t are either drawn independently from some distribution corresponding to arm i (in the case of stochastic bandits) or adaptively chosen by an adversary (in the case of adversarial bandits). Unless otherwise specified, we will assume we are in the adversarial setting.

Let I_t denote the arm pulled by the principal at round t. The revenue of an algorithm M is the random variable

Rev(M) = ∑_t=1^T v_I_t,t

and the regret of M is the random variable

Reg(M) = max_i ∑_t=1^T v_i,t - Rev(M).

Mechanism M is a δ-low regret algorithm for the multi-armed bandit problem if 𝔼[Reg(M)] ≤ δ. Here the expectation is taken over the randomness of M and the adversary. Mechanism M is a (ρ, δ)-low regret algorithm for the multi-armed bandit problem if, with probability 1 - ρ, Reg(M) ≤ δ. There exist O(√(KT log K))-low regret algorithms and (ρ, O(√(KT log(K/ρ))))-low regret algorithms for the multi-armed bandit problem; see Section 3.2 of <cit.> for details.

§.§ Strategic Multi-Armed Bandits

The strategic multi-armed bandits problem builds upon the classic multi-armed bandits problem with the notable difference that now arms are strategic agents with the ability to withhold some payment from the principal. Instead of the principal directly receiving a reward v_i,t when choosing arm i, now arm i receives this reward and passes along some amount w_i,t to the principal, gaining the remainder v_i,t - w_i,t as utility. For simplicity, in the strategic setting, we will assume the rewards v_i,t are generated stochastically; that is, each round, v_i,t is drawn independently from a distribution D_i (where the distributions D_i are known to all arms but not to the principal). While it is possible to pose this problem in the adversarial setting (or other more general settings), this comes at the cost of there being no clear notion of strategic equilibrium for the arms.

This strategic variant comes with two additional modeling assumptions. The first is the informational model of this game: what information an arm observes when some other arm is pulled. We define two possible observational models:

* Explicit: After each round t, every arm sees the arm played I_t along with the quantity w_I_t,t reported to the principal.
* Tacit: After each round t, every arm only sees the arm played I_t.

In both cases, only arm i knows the size of the original reward v_i,t; in particular, the principal also only sees the value w_i,t and learns nothing about the amount withheld by the arm. Collusion between arms is generally easier in the explicit observational model than in the tacit observational model.

The second modeling assumption is whether to allow arms to go into debt while paying the principal. In the restricted payment model, we impose that w_i,t ≤ v_i,t; an arm cannot pass along more than it receives in a given round. In the unrestricted payment model, we let w_i,t be any value in [0,1].
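As a concrete illustration of these definitions, the sketch below gives a direct transcription of Rev and Reg, together with a textbook form of the EXP3 update (one algorithm family achieving the low-regret guarantees cited above). The renormalization step and parameter choices are our own simplifications rather than the tuned versions from the references.

```python
import math
import random

def revenue(pulls, values):
    """Rev(M): total value of the arms actually pulled.
    pulls[t] is I_t; values[t][i] is v_{i,t}."""
    return sum(values[t][i] for t, i in enumerate(pulls))

def regret(pulls, values):
    """Reg(M): the best single arm's total value minus Rev(M)."""
    T, K = len(values), len(values[0])
    best_single_arm = max(sum(values[t][i] for t in range(T)) for i in range(K))
    return best_single_arm - revenue(pulls, values)

def exp3(values, gamma=0.05):
    """Textbook EXP3: exponential weights with uniform exploration; only the
    pulled arm's value is observed each round. Returns the sequence of pulls."""
    T, K = len(values), len(values[0])
    weights, pulls = [1.0] * K, []
    for t in range(T):
        m = max(weights)
        weights = [w / m for w in weights]  # renormalize to avoid overflow
        s = sum(weights)
        probs = [(1 - gamma) * w / s + gamma / K for w in weights]
        i = random.choices(range(K), weights=probs)[0]
        pulls.append(i)
        x = values[t][i]
        weights[i] *= math.exp(gamma * (x / probs[i]) / K)  # importance-weighted update
    return pulls

# The two-arm example from the introduction: fixed passed-on values 1 and 0.8.
vals = [[1.0, 0.8] for _ in range(10000)]
pulls = exp3(vals)
print(revenue(pulls, vals), regret(pulls, vals))
```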
We prove our negative results in the restricted payment model and our positive results in the unrestricted payment model, but our proofs for our negative results work in both models (in particular, it is easier to collude and prove negative results in the unrestricted payment model).

Finally, we proceed to define the set of strategic equilibria for the arms. We assume the mechanism M of the principal is fixed ahead of time and known to the K arms. If each arm i is using a (possibly adaptive) strategy S_i, then the expected utility of arm i is defined as

u_i(M, S_1,…,S_K) = 𝔼[ ∑_t=1^T (v_i,t - w_i,t) · 1_{I_t = i} ].

An ε-Nash equilibrium for the arms is then defined as follows. Strategies (S_1,...,S_K) form an ε-Nash equilibrium for the strategic multi-armed bandit problem if for all i ∈ [K] and any deviating strategy S_i',

u_i(M, S_1, …, S_i, …, S_K) ≥ u_i(M, S_1,…,S'_i,…, S_K) - ε.

The goal of the principal is to choose a mechanism M which guarantees large revenue in any ε-Nash equilibrium for the arms. In Section <ref>, we will construct mechanisms for the strategic multi-armed bandit problem which are truthful for the arms. We define the related terminology below. When the principal uses mechanism M, we say S_i is a dominant strategy for arm i if for any deviating strategy S_i' and any strategies for the other arms S_1,...,S_i-1,S_i+1,...,S_K,

u_i(M, S_1, …, S_i, …, S_K) ≥ u_i(M, S_1,…,S'_i,…, S_K).

We say that a mechanism M for the principal is truthful if every arm has a dominant strategy.

§ NEGATIVE RESULTS

In this section, we show that algorithms that achieve low regret in the multi-armed bandits problem with adversarial values perform poorly in the strategic multi-armed bandits problem. Throughout this section, we will assume we are working in the restricted payment model (i.e., arms can only pass along a value w_i,t that is at most v_i,t), but all proofs also work in the unrestricted payment model (and in fact are much easier there).

§.§ Explicit Observational Model

We begin by showing that in the explicit observational model, there is an approximate equilibrium for the arms that results in the principal receiving no revenue. Since arms can view other arms' reported values, it is easy to collude in the explicit model; simply defect and pass along the full amount as soon as you observe another arm passing along a positive amount.

Let mechanism M be a δ-low regret algorithm for the multi-armed bandit problem. Then in the strategic multi-armed bandit problem under the explicit observational model, there exist distributions D_i and a (δ+1)-Nash equilibrium for the arms where a principal using mechanism M receives zero revenue.

Consider the two-arm setting where D_1 and D_2 are both deterministic distributions supported entirely on {1}, so that v_i,t = 1 for all i = 1,2 and t ∈ [T]. Consider the following strategy S^* for arm i:

* Set w_i,t = 0 if in rounds 1,...,t-1 the other arm has always reported 0 when pulled.
* Set w_i,t = 1 otherwise.

We will show that (S^*,S^*) is a (δ+1)-Nash equilibrium. It suffices to show that arm 1 can get at most δ + 1 more utility by deviating. Consider any deviating strategy S' for arm 1. By convexity, we can assume S' is deterministic (there is some best deterministic deviating strategy). Since mechanism M might be randomized, let R be the randomness used by M and define M_R to be the deterministic mechanism when M uses randomness R. Now, consider the case when arm 1 plays strategy S', arm 2 plays strategy S^* and the principal is using mechanism M_R.
* If arm 1 never reports any value larger than 0 when pulled, then S' behaves exactly the same as S^*. Therefore, u_1(M_R,S',S^*) = u_1(M_R,S^*,S^*).
* If arm 1 ever reports some value larger than 0 when pulled, let τ_R be the first time it does so. We know that S' behaves the same as S^* before τ_R. Therefore,

u_1(M_R,S',S^*) ≤ u_1(M_R,S^*,S^*) + ∑_t=τ_R^T (v_1,t - w_1,t) · 1_{I_t = 1} ≤ u_1(M_R,S^*,S^*) + 1 + ∑_t=τ_R+1^T (max(w_1,t, w_2,t) - w_1,t) · 1_{I_t = 1}.

So in general, we have

u_1(M_R,S',S^*) ≤ u_1(M_R,S^*,S^*) + 1 + ∑_t=τ_R+1^T (max(w_1,t, w_2,t) - w_1,t) · 1_{I_t = 1}.

Therefore

u_1(M,S',S^*) = 𝔼_R[u_1(M_R,S',S^*)] ≤ 𝔼_R[u_1(M_R,S^*,S^*)] + 1 + 𝔼_R[∑_t=τ_R+1^T (max(w_1,t, w_2,t) - w_1,t) · 1_{I_t = 1}] = u_1(M,S^*,S^*) + 1 + 𝔼_R[∑_t=τ_R+1^T (max(w_1,t, w_2,t) - w_1,t) · 1_{I_t = 1}].

Notice that this expectation is at most the regret of M in the classic multi-armed bandit setting when the adversary sets rewards equal to the values w_1,t and w_2,t passed on by the arms when they play (S', S^*). Therefore, by our low-regret guarantee on M, we have that

𝔼_R[∑_t=τ_R+1^T (max(w_1,t, w_2,t) - w_1,t) · 1_{I_t = 1}] ≤ δ.

Thus u_1(M,S',S^*) ≤ u_1(M,S^*,S^*) + 1 + δ, and this is a (1+δ)-approximate Nash equilibrium. Finally, it is easy to check that the principal receives zero revenue when both arms play according to this equilibrium strategy.

§.§ Tacit Observational Model

We next show that even in the tacit observational model, where the arms don't see the amounts passed on by other arms, it is still possible for the arms to collude and leave the principal with o(T) revenue. The underlying idea here is that the arms work to maintain an equal market share, where each of the K arms is played approximately a 1/K fraction of the time. To ensure this happens, arms collude so that arms that aren't as likely to be pulled pass along a tiny amount ϵ to the principal, whereas arms that have been pulled a lot or are more likely to be pulled pass along 0; this ends up forcing any low-regret algorithm for the principal to choose all the arms equally often. Interestingly, unlike the collusion strategy in the explicit observational model, this collusion strategy is mechanism dependent, as arms need to estimate the probability they will be pulled in the next round. We begin by proving this result for the case of two arms, where the proof is slightly simpler.

Let mechanism M be a (ρ, δ)-low regret algorithm for the multi-armed bandit problem with two arms, where ρ ≤ T^-2 and δ ≥ √(T log T). Then in the strategic multi-armed bandit problem under the tacit observational model, there exist distributions D_1, D_2 and an O(√(Tδ))-Nash equilibrium where a principal using mechanism M gets at most O(√(Tδ)) revenue.

Let D_1 and D_2 be distributions with means μ_1 and μ_2 respectively, such that |μ_1-μ_2| ≤ max(μ_1,μ_2)/2. Additionally, assume both D_1 and D_2 are supported on [√(δ/T), 1]. We now describe the equilibrium strategy S^* (the description below is for arm 1; S^* for arm 2 is symmetric):

* Set parameters B = 6√(Tδ) and θ = √(δ/T).
* Define c_1,t to be the number of times arm 1 is pulled in rounds 1,...,t. Similarly define c_2,t to be the number of times arm 2 is pulled in rounds 1,...,t.
* For t = 1,…,T:
 (a) If there exists a t' ≤ t-1 such that c_1,t' < c_2,t' - B, set w_1,t = v_1,t.
 (b) If the condition in (a) is not true, let p_1,t be the probability that the principal will pick arm 1 in this round conditioned on the history (assuming arm 2 is also playing S^*), and let p_2,t = 1-p_1,t.
Then:
 * If c_1,t-1 < c_2,t-1 and p_1,t < p_2,t, set w_1,t = θ.
 * Otherwise, set w_1,t = 0.

We will now show that (S^*,S^*) is an O(√(Tδ))-Nash equilibrium. To do this, for any deviating strategy S', we will both lower bound u_1(M, S^*, S^*) and upper bound u_1(M, S', S^*), hence bounding the net utility of deviation. We begin by proving that u_1(M, S^*, S^*) ≥ μ_1 T/2 - O(√(Tδ)). We need the following lemma.

If both arms are using strategy S^*, then with probability (1-4/T), |c_1,t - c_2,t| ≤ B for all t ∈ [T].

Assume that both arms are playing the strategy S^* with the modification that they never defect (i.e. condition (a) in the above strategy is removed). This does not change the probability that |c_1,t - c_2,t| ≤ B for all t ∈ [T]. Define R_1,t = ∑_s=1^t w_1,s - ∑_s=1^t w_I_s,s to be the regret the principal experiences for not playing only arm 1. Define R_2,t similarly. We will begin by showing that with high probability, these regrets are bounded both above and below. In particular, we will show that with probability at least 1-2/T, R_i,t lies in [-2θ√(T log T) - δ, δ] for all t ∈ [T] and i ∈ {1, 2}.

To do this, note that there are two cases where the regrets R_1,t and R_2,t can possibly change. The first is when p_1,t > p_2,t and c_1,t > c_2,t. In this case, the arms offer (w_1,t, w_2,t) = (0, θ). With probability p_1,t the principal chooses arm 1 and the regrets update to (R_1,t+1, R_2,t+1) = (R_1,t, R_2,t + θ), and with probability p_2,t the principal chooses arm 2 and the regrets update to (R_1,t+1, R_2,t+1) = (R_1,t - θ, R_2,t). It follows that 𝔼[R_1,t+1 + R_2,t+1 | R_1,t + R_2,t] = R_1,t + R_2,t + (p_1,t - p_2,t)θ ≥ R_1,t + R_2,t. In the second case, p_1,t < p_2,t and c_1,t < c_2,t, and a similar calculation shows again that 𝔼[R_1,t+1 + R_2,t+1 | R_1,t + R_2,t] = R_1,t + R_2,t + (p_2,t - p_1,t)θ ≥ R_1,t + R_2,t. It follows that R_1,t + R_2,t forms a submartingale.

From the above analysis, it is also clear that |(R_1,t+1 + R_2,t+1) - (R_1,t + R_2,t)| ≤ θ. It follows from Azuma's inequality that, for any fixed t ∈ [T],

Pr[R_1,t + R_2,t ≤ -2θ√(T log T)] ≤ 1/T^2.

Applying the union bound, with probability at least 1-1/T, R_1,t + R_2,t ≥ -2θ√(T log T) for all t ∈ [T]. Furthermore, since the principal is using a (T^-2, δ)-low-regret algorithm, it is also true that with probability at least 1-T^-2 (for any fixed t) both R_1,t and R_2,t are at most δ. Applying the union bound again, it is true that R_1,t ≤ δ and R_2,t ≤ δ for all t with probability at least 1-1/T. Finally, combining this with the earlier inequality (and applying the union bound once more), with probability at least 1 - 2/T, R_i,t ∈ [-2θ√(T log T) - δ, δ], as desired. For the remainder of the proof, condition on this being true.

We next proceed to show that (for a fixed t) c_1,t - c_2,t ≤ B with high probability. Define the random variable τ to be the largest value s ≤ t such that c_1,s - c_2,s = 0; note that if c_1,t - c_2,t ≥ 0, then c_1,s - c_2,s ≥ 0 for all s in the range [τ, t]. Additionally let Δ_s denote the ±1 random variable given by the difference (c_1,s - c_2,s) - (c_1,s-1 - c_2,s-1). We can then write

c_1,t - c_2,t ≤ ∑_s=τ+1^t Δ_s ≤ ∑_s=τ+1^t Δ_s · 1_{p_1,s > p_2,s} + ∑_s=τ+1^t Δ_s · 1_{p_1,s ≤ p_2,s}.

Here the first summand corresponds to times s where one of the arms offers θ (and hence the regrets change), and the second summand corresponds to times where both arms offer 0. Note that since c_1,s ≥ c_2,s in this interval, the regret R_2,s increases by θ whenever Δ_s = 1 (i.e., arm 1 is chosen), and furthermore no choice of arm can decrease R_2,s in this interval.
Since we know that R_2,s lies in the interval [-2θ√(T log T) - δ, δ] for all s, this bounds the first sum by

∑_s=τ+1^t Δ_s · 1_{p_1,s > p_2,s} ≤ (2δ + 2θ√(T log T))/θ = 2δ/θ + 2√(T log T).

On the other hand, when p_1,s ≤ p_2,s, then 𝔼[Δ_s] = p_1,s - p_2,s ≤ 0. By Hoeffding's inequality, it then follows that with probability at least 1 - 1/T^2,

∑_s=τ+1^t Δ_s · 1_{p_1,s ≤ p_2,s} ≤ 2√(T log T).

Altogether, this shows that with probability at least 1 - 1/T^2,

c_1,t - c_2,t ≤ 2δ/θ + 4√(T log T) ≤ 6√(Tδ) = B.

The above inequality therefore holds for all t with probability at least 1 - 1/T. Likewise, we can show that c_2,t - c_1,t ≤ B also holds for all t with probability at least 1 - 1/T. Since we are conditioned on the regrets R_i,t being bounded (which holds with probability at least 1 - 2/T), it follows that |c_1,t - c_2,t| ≤ B for all t with probability at least 1 - 4/T.

By Lemma <ref>, we know that with probability 1-4/T, |c_1,t - c_2,t| ≤ B throughout the mechanism. In this case, arm 1 never uses step (a), and c_1,T ≥ (T-B)/2. Therefore

u_1(M, S^*, S^*) ≥ (1-4/T) · (μ_1-θ) · (T-B)/2 ≥ (μ_1 T/2)(1 - 4/T - θ/μ_1 - B/T) = μ_1 T/2 - 2μ_1 - θT/2 - Bμ_1/2 ≥ μ_1 T/2 - O(√(Tδ)).

Now we will show that u_1(M,S',S^*) ≤ μ_1 T/2 + O(√(Tδ)). Without loss of generality, we can assume S' is deterministic. Let M_R be the deterministic mechanism when M's randomness is fixed to some outcome R. Consider the situation when arm 1 is using strategy S', arm 2 is using strategy S^* and the principal is using mechanism M_R. There are two cases:

* c_1,t - c_2,t ≤ B is true for all t ∈ [T]. In this case, we have u_1(M_R,S',S^*) ≤ c_1,T · μ_1 ≤ μ_1(T+B)/2.
* There exists some t such that c_1,t - c_2,t > B: Let τ_R+1 be the smallest t such that c_1,t - c_2,t > B. We know that c_1,τ_R - c_2,τ_R ≤ B. Therefore we have

u_1(M_R,S',S^*) = ∑_t=1^T (μ_1 - w_1,t) · 1_{I_t = 1} = ∑_t=1^T (μ_1 - w_2,t) · 1_{I_t = 1} + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ c_1,τ_R μ_1 + μ_1 + (T-τ_R-1) max(μ_1-μ_2, 0) + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ μ_1(τ_R+B)/2 + μ_1 + (T-τ_R-1)(μ_1/2) + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ μ_1 T/2 + μ_1(B+1)/2 + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1}.

In general, we thus have that

u_1(M_R,S',S^*) ≤ μ_1 T/2 + μ_1(B+1)/2 + max(0, ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1}).

Therefore

u_1(M,S',S^*) = 𝔼_R[u_1(M_R,S',S^*)] ≤ μ_1 T/2 + μ_1(B+1)/2 + 𝔼_R[max(0, ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1})].

Notice that ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} is the regret of not playing arm 2 (i.e., R_2 in the proof of Lemma <ref>). Since the mechanism M is (ρ, δ)-low regret, with probability 1-ρ this sum is at most δ (and in the worst case, it is bounded above by Tμ_2). We therefore have that:

u_1(M,S',S^*) ≤ μ_1 T/2 + μ_1(B+1)/2 + δ + ρTμ_2 ≤ μ_1 T/2 + O(√(Tδ)).

From this and our earlier lower bound on u_1(M, S^*, S^*), it follows that u_1(M, S',S^*) - u_1(M, S^*, S^*) ≤ O(√(Tδ)), thus establishing that (S^*, S^*) is an O(√(Tδ))-Nash equilibrium for the arms.

Finally, to bound the revenue of the principal, note that if the arms both play according to S^* and |c_1,t - c_2,t| ≤ B for all t (so they do not defect), the principal gets a maximum of Tθ = O(√(Tδ)) revenue overall. Since (by Lemma <ref>) this happens with probability at least 1 - 4/T (and the total revenue of the principal is bounded above by T), it follows that the total expected revenue of the principal is at most O(√(Tδ)).

We now extend this proof to the K-arm case, where K can be as large as T^1/3/log(T).

Let mechanism M be a (ρ, δ)-low regret algorithm for the multi-armed bandit problem with K arms, where K ≤ T^1/3/log(T), ρ ≤ T^-2, and δ ≥ √(T log T).
Then in the strategic multi-armed bandit problem under the tacit observational model, there exist distributions D_i and an O(√(KTδ))-Nash equilibrium for the arms where the principal gets at most O(√(KTδ)) revenue.

As in the previous proof, let μ_i denote the mean of the ith arm's distribution D_i. Without loss of generality, further assume that μ_1 ≥ μ_2 ≥ … ≥ μ_K. We will show that as long as μ_1 - μ_2 ≤ μ_1/K, there exists some O(√(KTδ))-Nash equilibrium for the arms where the principal gets at most O(√(KTδ)) revenue.

We begin by describing the equilibrium strategy S^* for the arms. Let c_i,t denote the number of times arm i has been pulled up to time t. As before, set B = 7√(KTδ) and set θ = √(Kδ/T). The equilibrium strategy for arm i at time t is as follows:

* If at any time s ≤ t in the past, there exists an arm j with c_j,s - c_i,s ≥ B, defect and offer your full value w_i,t = μ_i.
* Compute the probability p_i,t that the principal will pull arm i conditioned on the history so far.
* Offer w_i,t = θ(1-p_i,t).

The remainder of the proof proceeds similarly to the proof of Theorem <ref>. The full proof can be found in Appendix <ref>.

While the theorems above merely claim that a bad set of distributions for the arms exists, note that the proofs above show it is possible to collude in a wide range of instances: in particular, any set of distributions which satisfy μ_1 - μ_2 ≤ μ_1/K. A natural question is whether we can extend the above results to show that it is possible to collude in any set of distributions. One issue with the collusion strategies in the above proofs is that if μ_1 - μ_2 > μ_1/K, then arm 1 will have an incentive to defect in any collusive strategy that plays all the arms evenly (arm 1 can report a bit over μ_2 per round, and make μ_1 - μ_2 every round instead of μ_1 every K rounds). One solution to this is to design a collusive strategy that plays some arms more than others in equilibrium (for example, playing arm 1 90% of the time). We show how to modify our result for two arms to achieve an arbitrary market partition and thus work over a broad set of distributions.

Let mechanism M be a (ρ, δ)-low regret algorithm for the multi-armed bandit problem with two arms, where ρ ≤ T^-2 and δ ≥ √(T log T). Then, in the strategic multi-armed bandit problem under the tacit observational model, for any distributions D_1, D_2 of values for the arms (supported on [√(δ/T), 1]), there exists an O(√(Tδ))-Nash equilibrium for the arms where a principal using mechanism M gets at most O(√(Tδ)) revenue.

See Appendix <ref>. Unfortunately, it is not as easy to modify the proof of Theorem <ref> to prove the same result for K arms. It is an interesting open question whether there exist collusive strategies for K arms that can achieve an arbitrary partition of the market.

§ POSITIVE RESULTS

In this section we will show that, in contrast to the previous results on collusion, there exists a mechanism for the principal that can obtain Θ(T) revenue from the arms. This mechanism essentially incentivizes each arm to report the mean of its distribution and then runs a second-price auction, asking the arm with the highest mean for the second-highest mean each round. By slightly modifying this mechanism, we can obtain a mechanism that works for a combination of strategic and non-strategic arms. Throughout this section we will assume we are working in the tacit observational model and the unrestricted payment model.
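Complementing the mechanism sketch given earlier, the following is the arm-side behavior that the next subsection shows to be dominant; the demand-based interface is again our own illustrative simplification rather than notation from the paper.

```python
def dominant_arm_report(mu, demand):
    """Amount a strategic arm with mean mu passes on in one round under its
    dominant strategy (unrestricted payment model): report the true mean when
    first asked, comply with a per-round demand w' exactly when w' <= mu, and
    otherwise report 0. Bonus rounds correspond to demand = 0."""
    if demand is None:      # elicitation round: report the true mean
        return mu
    if demand <= mu:        # complying is profitable in expectation
        return demand
    return 0.0              # defect: the demand exceeds the expected reward
```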
§.§ All Strategic Arms with Stochastic Values

We begin by considering the case when all arms are strategic. Define μ_i as the mean of distribution D_i for i=1,…,K and u = -log(min_i:μ_i≠0 μ_i) + 1. We assume throughout that u = o(T/K). We will first show that the dominant strategy of each arm in this mechanism includes truthfully reporting its mean at the beginning, and then compute the principal's revenue under this dominant strategy.

The following strategy is the dominant strategy for arm i in Mechanism <ref>:

* (line 1 of Mechanism <ref>) Report the mean value μ_i of D_i the first time arm i is played.
* (lines 3,4 of Mechanism <ref>) If i = i^*, for the R rounds in which the principal expects to see reported value w', report the value w'. For the bonus round, report 0. If i ≠ i^*, report 0.
* (line 5 of Mechanism <ref>) For all other rounds, report 0.

Note that the mechanism is naturally divided into three parts (in the same way the strategy above is divided into three parts): (1) the start, where each arm is played once and reports its mean, (2) the middle, where the principal plays the best arm and extracts the second-best arm's value (and plays each other arm once), and (3) the end, where the principal plays each arm some number of times, effectively paying them off for responding truthfully in step (1). To show the above strategy is dominant, we will proceed by backwards induction, showing that each part of the strategy is the best conditioned on an arbitrary history.

We start with step (3). It is easy to check that these rounds don't affect how many times the arm is played. It follows that it is strictly dominant to just report 0 (and receive your full value for the turn). Note that the reward the arm receives in expectation from this step is (u + log(w_i))μ_i; we will use this later.

For step (2), assume that i = i^*; otherwise, arm i is played only once, and the dominant strategy is to report 0 and receive expected reward μ_i. Depending on what happened in step (1), there are two cases; either w' ≤ μ_i, or w' > μ_i. We will show that if w' ≤ μ_i, the arm should play w' for the next R rounds (not defecting) and report 0 for the bonus round. If w' > μ_i, the arm should play 0 (defecting immediately). Note that we can recast step (2) as follows: arm i starts by receiving a reward from its distribution D_i. For the next R turns, it can pay w' for the privilege of drawing a new reward from its distribution (ending the game immediately if it refuses to pay). If w' ≤ μ_i, then paying for a reward w' is positive in expectation, whereas if w' > μ_i, then paying for a reward is negative in expectation. It follows that the dominant strategy is to continue to report w' if w' ≤ μ_i (receiving a total expected reward of R(μ_i - w') + μ_i) and to immediately defect and report 0 if w' > μ_i (receiving a total expected reward of μ_i).

Finally, we analyze step (1). We will show that, regardless of the values reported by the other players, it is a dominant strategy for arm i to report its true mean μ_i. If arm i reports w_i, and i ≠ i^*, then arm i will receive in expectation reward

G = (μ_i - w_i) + μ_i + max(u + log(w_i), 0)μ_i.

If u + log(w_i) > 0, then this is maximized when w_i = μ_i and G = (u + log(μ_i) + 1)μ_i (note that by our construction of u, u + log(μ_i) ≥ 1). On the other hand, if u + log(w_i) ≤ 0, then this is maximized when w_i = 0 and G = 2μ_i. Since u + log(μ_i) + 1 ≥ 2, the overall maximum occurs at w_i = μ_i.
Similarly, when arm i reports w_i and i = i^*, then arm i receives in expectation reward

G' = (μ_i - w_i) + max(0, R(μ_i - w')) + μ_i + max(u + log(w_i), 0)μ_i,

which is similarly maximized at w_i = μ_i. Finally, it follows that if μ_i ≤ w', G = G', so it is dominant to report w_i = μ_i. On the other hand, if μ_i > w', then reporting w_i = μ_i will ensure i = i^* and so once again it is dominant to report w_i = μ_i.

Under Mechanism <ref>, the principal will receive revenue at least μ'T - o(T) when arms use their dominant strategies, where μ' is the second largest mean in the set of means μ_i.

For any constant α > 0, no truthful mechanism can guarantee (αμ + (1-α)μ')T revenue in the worst case. Here μ is the largest value among μ_1,...,μ_K, and μ' is the second largest value among μ_1,...,μ_K.

Suppose there exists a truthful mechanism A that guarantees (αμ + (1-α)μ')T revenue for all distributions. We will show this results in a contradiction. We now consider L > exp(1/α) inputs. The i-th input has μ = b_i = 1/2 + i/(2L) and μ' = 1/2. Among these inputs, one arm (call it arm k^*) is always the arm with the largest mean and another arm is always the arm with the second largest mean. Other arms have the same input distribution in all the inputs. Suppose all the arms are using their dominant strategies. For the i-th input, let x_i T be the expected number of pulls by A on the arm k^* and p_i T be the expected amount arm k^* gives to the principal. Because the mechanism is truthful, in the i-th distribution, arm k^* prefers its dominant strategy to the dominant strategy it uses in some j-th distribution (i ≠ j). In other words, we have for i ≠ j,

b_i x_i - p_i ≥ b_i x_j - p_j.

We also have, for all i,

b_i x_i - p_i ≥ 0.

By using these inequalities, we get for all i,

p_i ≤ b_i x_i - ∑_j=1^i-1 x_j (b_j+1 - b_j).

On the other hand, A's revenue in the i-th distribution is at most (p_i + (1-x_i)μ')T. Therefore we have, for all i,

p_i + (1-x_i)μ' ≥ α·b_i + (1-α)μ'.

So we get

(1-x_i)μ' + b_i x_i - ∑_j=1^i-1 x_j (b_j+1 - b_j) ≥ α·b_i + (1-α)μ'.

It can be simplified as

x_i ≥ α + ∑_j=1^i-1 x_j (b_j+1 - b_j)/(b_i - μ') = α + (1/i)·∑_j=1^i-1 x_j.

By induction we get, for all i, x_i ≥ α∑_j=1^i 1/j > α ln(i). Therefore we have x_L > α ln(L) ≥ 1, a contradiction, since x_L is an expected fraction of the T pulls and hence at most 1.

The above algorithm relies on the assumption that arms know their own means μ_i. However, if the arms don't initially know their means, we can instead insert a phase at the beginning that lasts T^2/3 rounds where we pull each arm T^2/3/K times and expect no reward to be passed on. This allows the arms to estimate their rewards, and the following phases can be appropriately adjusted to maintain a solution in o(T)-dominant strategies, losing an additional O(T^2/3) in revenue for the principal, but maintaining the revenue guarantee of μ'T - o(T). It is an interesting question whether a more clever stochastic bandit algorithm can be embedded without destroying dominant strategies, and also whether a solution exists in exact dominant strategies for this model.

§.§ Strategic and Non-strategic Arms with Stochastic Values

We now consider the case when some arms are strategic and other arms are non-strategic. Importantly, the principal does not know which arms are strategic and which are non-strategic. We define μ_i as the mean of distribution D_i for i=1,...,K. Set B = T^2/3, M = 8T^-1/3 ln(KT) and u = -log(min_i:μ_i≠0 μ_i) + 1 + M.
We assume u = o(T/BK).

The following strategy is the dominant strategy for arm i in Mechanism <ref>:

* (line 1 of Mechanism <ref>) For the first B rounds, report a total sum of (μ_i + M)B.
* (lines 3,4 of Mechanism <ref>) If i = i^*, for the R rounds in which the principal expects to see reported value w', report the value w'-M. For the B bonus rounds, report 0. If i ≠ i^*, report 0.
* (line 5 of Mechanism <ref>) For all other rounds, report 0.

As in the proof of Lemma <ref>, the mechanism is divided into three parts: (1) the start, where each arm is played B times and reports its mean, (2) the middle, where the principal plays the best arm and extracts the second-best arm's value (and plays each other arm B times), and (3) the end, where the principal plays each arm some number of times, effectively paying them off for responding truthfully in step (1). To show the above strategy is dominant, we will proceed by backwards induction, showing that each part of the strategy is the best conditioned on an arbitrary history.

For step (3), as in the proof of Lemma <ref>, it is strictly dominant for the arm to report 0. The reward the arm receives in expectation from this step is (u + log(w̅_i - M))μ_i B.

For step (2), assume that i = i^*; otherwise, arm i is played B times, and the dominant strategy is to report 0 and receive expected reward μ_i B. Depending on what happened in step (1), there are two cases; either w' - M ≤ μ_i, or w' - M > μ_i. As in the proof of Lemma <ref>, we know that if w' - M ≤ μ_i, the arm should play w'-M for the next R rounds (not defecting) and report 0 for the B bonus rounds. If w' - M > μ_i, the arm should play 0 (defecting immediately).

For step (1), as in the proof of Lemma <ref>, the expected reward of arm i is either

G = (μ_i - w̅_i)B + Bμ_i + max(u + log(w̅_i - M), 0)Bμ_i

or

G' = max(0, R(μ_i - w' + M)) + (μ_i - w̅_i)B + Bμ_i + max(u + log(w̅_i - M), 0)Bμ_i.

Using the same argument as in the proof of Lemma <ref>, we know arm i's dominant strategy is to make w̅_i = μ_i + M.

If all the strategic arms use their dominant strategies in Lemma <ref>, then the principal will get at least max(μ_s, μ_n)T - o(T) with probability 1-o(1/T). Here μ_s is the second largest mean of the strategic arms and μ_n is the largest mean of the non-strategic arms.

We prove that with high probability the non-strategic arms' reported values don't deviate too much from their means. For each non-strategic arm i, by the Chernoff bound,

Pr[|w̅_i - μ_i| ≥ M/2] ≤ 2exp(-(M/2)^2 B/2) ≤ 1/(KT)^8.

By the union bound, with probability 1-o(1/T), all non-strategic arms i satisfy |w̅_i - μ_i| ≤ M/2. From now on, we will assume we are in the case when |w̅_i - μ_i| < M/2 for all i such that arm i is a non-strategic arm. There are two cases:

* Case 1: arm i^* is a strategic arm. Then it's easy to see that w' ≥ μ_s + M and w' ≥ μ_n - M/2, and also μ_i^* = w̅_i^* - M ≥ w' - M. So from the third step of Mechanism <ref> alone, the principal will get at least

(w' - M)R = max(μ_s, μ_n - 3M/2)R ≥ max(μ_s, μ_n)R - 3MR/2 = max(μ_s, μ_n)T - max(μ_s, μ_n)(u+3)BK - 3MR/2 = max(μ_s, μ_n)T - o(T).

* Case 2: arm i^* is a non-strategic arm. We know that μ_i^* ≥ w̅_i^* - M/2 ≥ (w'-M) + M/2. So by using the Chernoff bound and the union bound again, we know that arm i^* will be stopped in the third step with probability only o(1/T). We also know that μ_i^* ≥ w̅_i^* - M/2 ≥ μ_s + M - M/2 and μ_i^* ≥ w̅_i^* - M/2 ≥ μ_n - M/2 - M/2. Using the same argument as Case 1, we know that from the third step alone, the principal will get at least max(μ_s, μ_n)T - o(T).

Acknowledgements M.B.
supported in part by an NSF CAREER award (CCF-1149888), NSF CCF-1215990, NSF CCF-1525342, NSF CCF-1412958, a Packard Fellowship in Science and Engineering, and the Simons Collaboration on Algorithms and Geometry. Research completed in part while S.W. was a Research Fellow at the Simons Institute for the Theory of Computing.

§ OMITTED PROOFS

Let D_1 and D_2 be distributions with means μ_1 and μ_2 respectively, and both distributions supported on [√(δ/T), 1]. We now describe the equilibrium strategy S^* (the description below is for arm 1; S^* for arm 2 is symmetric):

* Set parameters B = 6√(Tδ)/μ_2 and θ = √(δ/T).
* Define c_1,t to be the number of times arm 1 is pulled in rounds 1,...,t. Similarly define c_2,t to be the number of times arm 2 is pulled in rounds 1,...,t.
* For t = 1,...,T:
 (a) If there exists a t' ≤ t-1 such that c_1,t'/μ_1 < c_2,t'/μ_2 - B, set w_1,t = v_1,t.
 (b) If the condition in (a) is not true, let p_1,t be the probability that the principal will pick arm 1 in this round conditioned on the history (assuming arm 2 is also playing S^*), and let p_2,t = 1-p_1,t. Then:
  * If c_1,t-1/μ_1 < c_2,t-1/μ_2 and p_1,t/μ_1 < p_2,t/μ_2, set w_1,t = θ.
  * Otherwise, set w_1,t = 0.

We will now show that (S^*,S^*) is an O(√(Tδ))-Nash equilibrium. To do this, for any deviating strategy S', we will both lower bound u_1(M, S^*, S^*) and upper bound u_1(M, S', S^*), hence bounding the net utility of deviation. We begin by proving that u_1(M, S^*, S^*) ≥ μ_1^2 T/(μ_1+μ_2) - O(√(Tδ)). We need the following lemma.

If both arms are using strategy S^*, then with probability (1-4/T), |c_1,t/μ_1 - c_2,t/μ_2| ≤ B for all t ∈ [T].

Assume that both arms are playing the strategy S^* with the modification that they never defect (i.e. condition (a) in the above strategy is removed). This does not change the probability that |c_1,t/μ_1 - c_2,t/μ_2| ≤ B for all t ∈ [T]. Define R_1,t = ∑_s=1^t w_1,s - ∑_s=1^t w_I_s,s to be the regret the principal experiences for not playing only arm 1. Define R_2,t similarly. We will begin by showing that with high probability, these regrets are bounded both above and below. In particular, we will show that with probability at least 1-2/T, R_i,t lies in [-μ_1/μ_2(2θ√(T log T)+δ), δ] for all t ∈ [T] and i ∈ {1, 2}.

To do this, note that there are two cases where the regrets R_1,t and R_2,t can possibly change. The first is when p_1,t/μ_1 > p_2,t/μ_2 and c_1,t/μ_1 > c_2,t/μ_2. In this case, the arms offer (w_1,t, w_2,t) = (0, θ). With probability p_1,t the principal chooses arm 1 and the regrets update to (R_1,t+1, R_2,t+1) = (R_1,t, R_2,t + θ), and with probability p_2,t the principal chooses arm 2 and the regrets update to (R_1,t+1, R_2,t+1) = (R_1,t - θ, R_2,t). It follows that 𝔼[R_1,t+1/μ_2 + R_2,t+1/μ_1 | R_1,t/μ_2 + R_2,t/μ_1] = R_1,t/μ_2 + R_2,t/μ_1 + (p_1,t/μ_1 - p_2,t/μ_2)θ ≥ R_1,t/μ_2 + R_2,t/μ_1. In the second case, p_1,t/μ_1 < p_2,t/μ_2 and c_1,t/μ_1 < c_2,t/μ_2, and a similar calculation shows again that 𝔼[R_1,t+1/μ_2 + R_2,t+1/μ_1 | R_1,t/μ_2 + R_2,t/μ_1] = R_1,t/μ_2 + R_2,t/μ_1 + (p_2,t/μ_2 - p_1,t/μ_1)θ ≥ R_1,t/μ_2 + R_2,t/μ_1. It follows that R_1,t/μ_2 + R_2,t/μ_1 forms a submartingale.

From the above analysis, it is also clear that |(R_1,t+1/μ_2 + R_2,t+1/μ_1) - (R_1,t/μ_2 + R_2,t/μ_1)| ≤ θ/μ_2. It follows from Azuma's inequality that, for any fixed t ∈ [T],

Pr[R_1,t/μ_2 + R_2,t/μ_1 ≤ -(2θ/μ_2)√(T log T)] ≤ 1/T^2.

Applying the union bound, with probability at least 1-1/T, R_1,t/μ_2 + R_2,t/μ_1 ≥ -(2θ/μ_2)√(T log T) for all t ∈ [T].
Furthermore, since the principal is using a (T^-2, δ)-low-regret algorithm, it is also true that with probability at least 1-T^-2 (for any fixed t) both R_1,t and R_2,t are at most δ. Applying the union bound again, it is true that R_1,t ≤ δ and R_2,t ≤ δ for all t with probability at least 1-1/T. Finally, combining this with the earlier inequality (and applying the union bound once more), with probability at least 1 - 2/T, R_i,t ∈ [-μ_1/μ_2(2θ√(T log T)+δ), δ], as desired. For the remainder of the proof, condition on this being true.

We next proceed to show that (for a fixed t) c_1,t/μ_1 - c_2,t/μ_2 ≤ B with high probability. Define the random variable τ-1 to be the largest value s ≤ t such that c_1,s/μ_1 - c_2,s/μ_2 ≤ 0; note that if c_1,t/μ_1 - c_2,t/μ_2 ≥ 0, then c_1,s/μ_1 - c_2,s/μ_2 ≥ 0 for all s in the range [τ, t]. Additionally let Δ_s denote the random variable given by the difference (c_1,s/μ_1 - c_2,s/μ_2) - (c_1,s-1/μ_1 - c_2,s-1/μ_2). We can then write

c_1,t/μ_1 - c_2,t/μ_2 ≤ ∑_s=τ+1^t Δ_s ≤ ∑_s=τ+1^t Δ_s · 1_{p_1,s/μ_1 > p_2,s/μ_2} + ∑_s=τ+1^t Δ_s · 1_{p_1,s/μ_1 ≤ p_2,s/μ_2}.

Here the first summand corresponds to times s where one of the arms offers θ (and hence the regrets change), and the second summand corresponds to times where both arms offer 0. Note that since c_1,s/μ_1 ≥ c_2,s/μ_2 in this interval, the regret R_2,s increases by θ whenever Δ_s = 1/μ_1 (i.e., arm 1 is chosen), and furthermore no choice of arm can decrease R_2,s in this interval. Since we know that R_2,s lies in the interval [-μ_1/μ_2(2θ√(T log T)+δ), δ] for all s, this bounds the first sum by

∑_s=τ+1^t Δ_s · 1_{p_1,s/μ_1 > p_2,s/μ_2} ≤ (δ + μ_1/μ_2(2θ√(T log T)+δ))/θ · (1/μ_1) ≤ (1/μ_2)(2δ/θ + 2√(T log T)).

On the other hand, when p_1,s/μ_1 ≤ p_2,s/μ_2, then 𝔼[Δ_s] = p_1,s/μ_1 - p_2,s/μ_2 ≤ 0. By Hoeffding's inequality, it then follows that with probability at least 1 - 1/T^2,

∑_s=τ+1^t Δ_s · 1_{p_1,s/μ_1 ≤ p_2,s/μ_2} ≤ (2/μ_2)√(T log T).

Altogether, this shows that with probability at least 1 - 1/T^2,

c_1,t/μ_1 - c_2,t/μ_2 ≤ (1/μ_2)(2δ/θ + 4√(T log T)) ≤ 6√(Tδ)/μ_2 = B.

The above inequality therefore holds for all t with probability at least 1 - 1/T. Likewise, we can show that c_2,t/μ_2 - c_1,t/μ_1 ≤ B also holds for all t with probability at least 1 - 1/T. Since we are conditioned on the regrets R_i,t being bounded (which holds with probability at least 1 - 2/T), it follows that |c_1,t/μ_1 - c_2,t/μ_2| ≤ B for all t with probability at least 1 - 4/T.

By Lemma <ref>, we know that with probability 1-4/T, |c_1,t/μ_1 - c_2,t/μ_2| ≤ B throughout the mechanism. In this case, arm 1 never uses step (a), and c_1,T ≥ μ_1/(μ_1+μ_2)·T - μ_1μ_2/(μ_1+μ_2)·B. Therefore

u_1(M, S^*, S^*) ≥ (1-4/T) · (μ_1-θ) · (μ_1/(μ_1+μ_2)·T - μ_1μ_2/(μ_1+μ_2)·B) ≥ μ_1^2 T/(μ_1+μ_2) - O(√(Tδ)).

Now we will show that u_1(M,S',S^*) ≤ μ_1^2 T/(μ_1+μ_2) + O(√(Tδ)). Without loss of generality, we can assume S' is deterministic. Let M_R be the deterministic mechanism when M's randomness is fixed to some outcome R. Consider the situation when arm 1 is using strategy S', arm 2 is using strategy S^* and the principal is using mechanism M_R. There are two cases:

* c_1,t/μ_1 - c_2,t/μ_2 ≤ B is true for all t ∈ [T]. In this case, we have u_1(M_R,S',S^*) ≤ c_1,T · μ_1 ≤ μ_1/(μ_1+μ_2)·T + μ_1μ_2/(μ_1+μ_2)·B.
* There exists some t such that c_1,t/μ_1 - c_2,t/μ_2 > B: Let τ_R+1 be the smallest t such that c_1,t/μ_1 - c_2,t/μ_2 > B. We know that c_1,τ_R/μ_1 - c_2,τ_R/μ_2 ≤ B.
Therefore we have

u_1(M_R,S',S^*) = ∑_t=1^T (μ_1 - w_1,t) · 1_{I_t = 1} = ∑_t=1^T (μ_1 - w_2,t) · 1_{I_t = 1} + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ c_1,τ_R μ_1 + μ_1 + (T-τ_R-1) max(μ_1-μ_2, 0) + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ μ_1(μ_1/(μ_1+μ_2)·τ_R + μ_1μ_2/(μ_1+μ_2)·B) + μ_1 + (T-τ_R-1)·μ_1^2/(μ_1+μ_2) + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ μ_1^2 T/(μ_1+μ_2) + μ_1μ_2 B/(μ_1+μ_2) + μ_1 + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1}.

In general, we thus have that

u_1(M_R,S',S^*) ≤ μ_1^2 T/(μ_1+μ_2) + μ_1μ_2 B/(μ_1+μ_2) + μ_1 + max(0, ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1}).

Therefore

u_1(M,S',S^*) = 𝔼_R[u_1(M_R,S',S^*)] ≤ μ_1^2 T/(μ_1+μ_2) + μ_1μ_2 B/(μ_1+μ_2) + μ_1 + 𝔼_R[max(0, ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1})].

Notice that ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} is the regret of not playing arm 2 (i.e., R_2 in the proof of Lemma <ref>). Since the mechanism M is (ρ, δ)-low regret, with probability 1-ρ this sum is at most δ (and in the worst case, it is bounded above by Tμ_2). We therefore have that:

u_1(M,S',S^*) ≤ μ_1^2 T/(μ_1+μ_2) + μ_1μ_2 B/(μ_1+μ_2) + μ_1 + δ + ρTμ_2 ≤ μ_1^2 T/(μ_1+μ_2) + O(√(Tδ)).

From this and our earlier lower bound on u_1(M, S^*, S^*), it follows that u_1(M, S',S^*) - u_1(M, S^*, S^*) ≤ O(√(Tδ)), thus establishing that (S^*, S^*) is an O(√(Tδ))-Nash equilibrium for the arms.

Finally, to bound the revenue of the principal, note that if the arms both play according to S^* and |c_1,t/μ_1 - c_2,t/μ_2| ≤ B for all t (so they do not defect), the principal gets a maximum of Tθ = O(√(Tδ)) revenue overall. Since (by Lemma <ref>) this happens with probability at least 1 - 4/T (and the total revenue of the principal is bounded above by T), it follows that the total expected revenue of the principal is at most O(√(Tδ)).

As in the proof of Theorem <ref>, let μ_i denote the mean value of the ith arm's distribution D_i (supported on [√(Kδ/T), 1]). Without loss of generality, further assume that μ_1 ≥ μ_2 ≥ … ≥ μ_K. We will show that as long as μ_1 - μ_2 ≤ μ_1/K, there exists some O(√(KTδ))-Nash equilibrium for the arms where the principal gets at most O(√(KTδ)) revenue.

We begin by describing the equilibrium strategy S^* for the arms. Let c_i,t denote the number of times arm i has been pulled up to time t. As before, set B = 7√(KTδ) and set θ = √(Kδ/T). The equilibrium strategy for arm i at time t is as follows:

* If at any time s ≤ t in the past, there exists an arm j with c_j,s - c_i,s ≥ B, defect and offer your full value w_i,t = μ_i.
* Compute the probability p_i,t, the probability that the principal will pull arm i conditioned on the history so far.
* Offer w_i,t = θ(1-p_i,t).

We begin, as before, by showing that if all parties follow this strategy, then with high probability no one will ever defect.

If all arms are using strategy S^*, then with probability (1-3/T), |c_i,t - c_j,t| ≤ B for all t ∈ [T], i,j ∈ [K].

As before, assume that all arms are playing the strategy S^* with the modification that they never defect. This does not change the probability that |c_i,t - c_j,t| ≤ B for all t ∈ [T], i,j ∈ [K]. Define R_i,t = ∑_s=1^t w_i,s - ∑_s=1^t w_I_s,s to be the regret the principal experiences for not playing only arm i up until time t. We begin by showing that with probability at least 1-2/T, R_i,t lies in [-2Kθ√(T log T) - (K-1)δ, δ] for all t ∈ [T] and i ∈ [K].

To do this, first note that since the principal is using a (T^-2, δ)-low-regret algorithm, with probability at least 1-T^-2 the regrets R_i,t are all upper bounded by δ at any fixed time t.
Via the union bound, it follows that R_i,t ≤ δ for all i and t with probability at least 1-1/T.

To lower bound R_i,t, we will first show that ∑_i=1^K R_i,t is a submartingale in t. Note that, with probability p_j,t, R_i,t+1 will equal R_i,t + θ((1-p_j,t)-(1-p_i,t)). We then have

𝔼[∑_i=1^K R_i,t+1 | ∑_i=1^K R_i,t] = ∑_i=1^K R_i,t + ∑_i=1^K p_i,t ∑_j=1^K θ((1-p_j,t)-(1-p_i,t)) = ∑_i=1^K R_i,t + ∑_i=1^K p_i,t ∑_j=1^K θ(p_i,t - p_j,t) = ∑_i=1^K R_i,t + θ∑_i=1^K p_i,t(Kp_i,t - 1) = ∑_i=1^K R_i,t + θ(K∑_i=1^K p_i,t^2 - ∑_i=1^K p_i,t) ≥ ∑_i=1^K R_i,t,

where the last inequality follows by Cauchy-Schwarz. It follows that ∑_i=1^K R_i,t forms a submartingale.

Moreover, note that (since |p_i - p_j| ≤ 1) |R_i,t+1 - R_i,t| ≤ θ. It follows that |∑_i=1^K R_i,t+1 - ∑_i=1^K R_i,t| ≤ Kθ and therefore by Azuma's inequality that, for any fixed t ∈ [T],

Pr[∑_i=1^K R_i,t ≤ -2Kθ√(T log T)] ≤ 1/T^2.

With probability 1-1/T, this holds for all t ∈ [T]. Since (with probability 1-1/T) R_i,t ≤ δ, this implies that with probability 1-2/T, R_i,t ∈ [-2Kθ√(T log T) - (K-1)δ, δ].

We next proceed to bound the probability that c_i,t - c_j,t > B for any i, j, and t. Define S^(i,j)_t = c_i,t - c_j,t + (1/θ)(R_i,t - R_j,t). We claim that S^(i,j)_t is a martingale. To see this, we first claim that R_i,t+1 - R_j,t+1 = R_i,t - R_j,t - θ(p_i,t - p_j,t). Note that, if arm k is pulled, then R_i,t+1 = R_i,t + θ((1-p_i,t) - (1-p_k,t)) = R_i,t + θ(p_k,t - p_i,t) and similarly, R_j,t+1 = R_j,t + θ(p_k,t - p_j,t). It follows that R_i,t+1 - R_j,t+1 = R_i,t - R_j,t - θ(p_i,t - p_j,t).

Secondly, note that (for any arm k) 𝔼[c_k,t+1 - c_k,t | p_t] = p_k,t, and thus 𝔼[c_i,t+1 - c_j,t+1 - (c_i,t - c_j,t) | p_t] = p_i,t - p_j,t. It follows that

𝔼[S^(i,j)_t+1 - S^(i,j)_t | p_t] = 𝔼[(c_i,t+1 - c_j,t+1) - (c_i,t - c_j,t) | p_t] + (1/θ)𝔼[(R_i,t+1 - R_j,t+1) - (R_i,t - R_j,t) | p_t] = (p_i,t - p_j,t) - (p_i,t - p_j,t) = 0,

and thus 𝔼[S^(i,j)_t+1 | S^(i,j)_t] = S^(i,j)_t, so S^(i,j)_t is a martingale. Finally, note that |S^(i,j)_t+1 - S^(i,j)_t| ≤ 2, so by Azuma's inequality

Pr[S^(i,j)_t ≥ 4√(T log(TK))] ≤ (TK)^-2.

Taking the union bound, we find that with probability at least 1 - 1/T, S^(i,j)_t ≤ 4√(T log(TK)) for all i, j, and t. Finally, since with probability at least 1-2/T each R_i,t lies in [-2Kθ√(T log T) - (K-1)δ, δ], with probability at least 1-3/T we have that (for all i, j, and t)

c_i,t - c_j,t = S^(i,j)_t - (1/θ)(R_i,t - R_j,t) ≤ 4√(T log(TK)) + (1/θ)|R_i,t - R_j,t| ≤ 4√(T log(TK)) + 2K√(T log T) + Kδ/θ ≤ 7Kδ/θ = 7√(KTδ) = B.

By Lemma <ref>, we know that with probability 1-3/T, |c_i,t - c_j,t| ≤ B for all t ∈ [T], i,j ∈ [K]. In this case, arm 1 never defects, and c_1,T ≥ T/K - B. Therefore

u_1(M, S^*, S^*) ≥ (1-3/T) · (μ_1-θ) · (T/K - B) ≥ (μ_1 T/K)(1 - 3/T - θ/μ_1 - BK/T) = μ_1 T/K - 3μ_1/K - θT/K - Bμ_1 ≥ μ_1 T/K - O(√(KTδ)).

Now we are going to show that u_1(M,S',S^*) ≤ μ_1 T/K + O(√(KTδ)). Without loss of generality, we can assume S' is deterministic. Let M_R be the deterministic mechanism when M's randomness is fixed to some outcome R. Consider the situation when arm 1 is using strategy S', the other arms are using strategy S^* and the principal is using mechanism M_R. There are two cases:

* c_i,t - c_j,t ≤ B is true for all t ∈ [T] and i,j ∈ [K]. In this case, we have u_1(M_R,S',S^*) ≤ c_1,T · μ_1 ≤ μ_1(T+(K-1)B)/K.
* There exists some t ∈ [T] and i,j ∈ [K] such that c_i,t - c_j,t > B: Let τ_R+1 be the smallest t such that c_i,t - c_j,t > B for some i,j ∈ [K]. We know that c_1,τ_R - c_i,τ_R ≤ B for all i ∈ [K].
Therefore we have

u_1(M_R,S',S^*) = ∑_t=1^T (μ_1 - w_1,t) · 1_{I_t = 1} = ∑_t=1^T (μ_1 - w_2,t) · 1_{I_t = 1} + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ c_1,τ_R μ_1 + μ_1 + (T-τ_R-1) max(μ_1-μ_2, 0) + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ μ_1(τ_R+B)/K + μ_1 + (T-τ_R-1)(μ_1/K) + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} ≤ μ_1 T/K + μ_1(B+1)(K-1)/K + ∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1}.

In M_R, we also have

∑_t=1^T (w_2,t - w_1,t) · 1_{I_t = 1} = ∑_t=1^T (w_2,t - w_I_t,t) - ∑_t=1^T (w_2,t - w_I_t,t) · 1_{I_t ≠ 1} ≤ ∑_t=1^T (w_2,t - w_I_t,t) + ∑_t=1^τ_R w_I_t,t · 1_{I_t ≠ 1} - ∑_t=τ_R+1^T (μ_2 - μ_I_t) · 1_{I_t ≠ 1} ≤ ∑_t=1^T (w_2,t - w_I_t,t) + T(θ + B/T) + 0.

In general, we thus have that

u_1(M_R,S',S^*) ≤ μ_1 T/K + μ_1(B+1)(K-1)/K + max(0, ∑_t=1^T (w_2,t - w_I_t,t) + Tθ + B).

Therefore

u_1(M,S',S^*) = 𝔼_R[u_1(M_R,S',S^*)] ≤ μ_1 T/K + μ_1(B+1)(K-1)/K + 𝔼_R[max(0, ∑_t=1^T (w_2,t - w_I_t,t) + Tθ + B)].

Notice that ∑_t=1^T (w_2,t - w_I_t,t) is the regret of not playing arm 2. Since the mechanism M is (ρ, δ)-low regret, with probability 1-ρ this sum is at most δ (and in the worst case, it is bounded above by Tμ_2). We therefore have that:

u_1(M,S',S^*) ≤ μ_1 T/K + μ_1(B+1)(K-1)/K + δ + ρTμ_2 + Tθ + B ≤ μ_1 T/K + O(√(KTδ)).

From this and our earlier lower bound on u_1(M, S^*, S^*), it follows that u_1(M, S',S^*) - u_1(M, S^*, S^*) ≤ O(√(KTδ)), thus establishing that (S^*, S^*) is an O(√(KTδ))-Nash equilibrium for the arms.

Finally, to bound the revenue of the principal, note that if the arms all play according to S^* and |c_i,t - c_j,t| ≤ B for all t ∈ [T], i,j ∈ [K] (so they do not defect), the principal gets a maximum of Tθ = O(√(KTδ)) revenue overall. Since (by Lemma <ref>) this happens with probability at least 1 - 3/T (and the total revenue of the principal is bounded above by T), it follows that the total expected revenue of the principal is at most O(√(KTδ)).
http://arxiv.org/abs/1706.09060v1
{ "authors": [ "Mark Braverman", "Jieming Mao", "Jon Schneider", "S. Matthew Weinberg" ], "categories": [ "cs.GT", "stat.ML" ], "primary_category": "cs.GT", "published": "20170627215800", "title": "Multi-armed Bandit Problems with Strategic Arms" }
Max Planck Institute for Extraterrestrial Physics, Gießenbachstraße 1, 85748 Garching bei München, Germany European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748, Garching bei München, Germany Astrophysics Research Institute, Liverpool John Moores University, Liverpool, L3 5RF, UK Department of Astronomy, University of Florida, Gainesville, FL, 32611, USA Department of Physics, University of Florida, Gainesville, FL, 32611, USA INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze, Italy School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS, UK

Infrared dark clouds represent the earliest stages of high-mass star formation. Detailed observations of their physical conditions on all physical scales are required to improve our understanding of their role in fueling star formation. We investigate the large-scale structure of the IRDC G035.39–00.33, probing the dense gas with the classical ammonia thermometer. This allows us to put reliable constraints on the temperature of the extended, pc-scale dense gas reservoir and to probe the magnitude of its non-thermal motions. Available far-infrared observations can be used in tandem with the observed ammonia emission to estimate the total gas mass contained in G035.39–00.33.

We identify the main velocity component as a prominent filament, manifested as an ammonia emission intensity ridge spanning more than 6 pc, consistent with previous studies of the northern part of the cloud. A number of additional line-of-sight components are identified, and a large-scale, linear velocity gradient of ∼0.2 km s^-1 pc^-1 is found along the ridge of the IRDC. In contrast to the dust temperature map, an ammonia-derived kinetic temperature map, presented for the entirety of the cloud, reveals local temperature enhancements towards the massive protostellar cores. We show that, without properly accounting for the line-of-sight contamination, the dust temperature is 2-3 K larger than the gas temperature measured with NH_3. While both the large-scale kinematics and temperature structure are consistent with those of starless dark filaments, the kinetic gas temperature profile on smaller scales is suggestive of tracing the heating mechanism coincident with the locations of massive protostellar cores.

Temperature structure and kinematics of the IRDC G035.39–00.33 Vlas Sokolov1 Ke Wang2 Jaime E. Pineda1 Paola Caselli1 Jonathan D. Henshaw3 Jonathan C. Tan4,5 Francesco Fontani6 Izaskun Jiménez-Serra7 Wanggi Lim4 December 30, 2023

§ INTRODUCTION

Massive (M_⋆ > 8 M_⊙) stars dominate their environments through powerful stellar winds, ionizing radiation, and their decisive role in driving the turbulence and enriching the chemical complexity of the interstellar medium. Despite the importance massive stars play in their host galaxies, understanding the earliest phases of their formation is still an ongoing effort <cit.>.

The majority of massive stars are not formed in isolation. A large fraction of all stars are born within Giant Molecular Clouds (GMCs) <cit.>, massive (> 10^4 M_⊙), often filamentary, molecular structures that span across dozens of parsecs and are thought to be responsible for the bulk of Galactic star formation.
GMCs have been found to be highly sub-structured, and with the advance of mid- and far-infrared imaging instruments, the ubiquity of filamentary structure in star-forming molecular clouds became apparent <cit.>. As the filaments assemble their mass, the densest filaments in star-forming clouds are thought to become gravitationally unstable, fragmenting further into protostellar cores.

Often tracing the highest density regions of GMCs <cit.>, Infrared Dark Clouds (IRDCs) have been used as testing grounds for the earliest stages of massive star and cluster formation theories since the late 1990s. First identified as dark features obscuring the bright Galactic background <cit.>, IRDCs soon became recognized as the most promising candidates for harboring the long-sought initial conditions of high-mass star-forming regions <cit.>. Subsequent far-infrared, submillimetre, and radio band observations revealed a high degree of fragmentation in these clouds, with dense and massive cores exhibiting a variety of star formation stages: from prestellar, dark, cold, and quiescent cores to active, infrared-bright and chemically rich substructures with embedded sources driving outflows and H II regions <cit.>.

G035.39–00.33 (hereafter G035.39) is a cold <cit.>, massive <cit.> IRDC located 2.9 kpc away in the W48 molecular complex <cit.>. Its highly filamentary structure appears as an extinction feature up to 70 μm, and the cloud harbors a number of dense cores <cit.>. Previous single-dish radio and far-infrared studies of G035.39 describing the large, pc-scale gas reservoir suggest that the bulk of the cloud material in the IRDC represents the typical chemical properties of cold and dense gas, namely high CO depletion <cit.> and high values of deuteration <cit.>. Furthermore, the dust temperature maps in <cit.>, derived from photometric maps, show a monotonic decrease in dust temperatures from the edges of the IRDC to its innermost regions, where most of the massive protostellar cores are, with no apparent heating signatures of the embedded protostars in G035.39.

<cit.> observed widespread, pc-scale SiO emission as a mixture of broad and narrow components, finding it consistent with being a remnant of a large-scale shock, possibly associated with the IRDC formation process. Follow-up observations revealed the northern part of the cloud to possess complex, multicomponent kinematics, with the velocity components interacting dynamically with the massive cores <cit.>. Alternatively, the origin of the SiO emission across the cloud could be attributed to outflow activities of undetected embedded protostars <cit.>. Indeed, <cit.> find 70 μm sources in 13 massive dense cores (20-50 M_⊙, MDCs) in G035.39, which indicates that these cores are potentially forming high-mass stars (white diamonds on Fig. <ref>). While the sources are likely to be responsible for the observed broad component emission in G035.39, attributing its narrow component to embedded protostars would require the existence of an undetected population of low-mass protostars across the cloud.

By comparing the C^18O line emission with the mass surface density map obtained from extinction mapping, <cit.> concluded that the denser part of the cloud is consistent with being in virial equilibrium. Follow-up observations of G035.39 with the Plateau de Bure Interferometer (PdBI) have resolved the kinematics of the cloud into distinct sub-virial, velocity-coherent structures that hint at their dynamical interaction with an embedded protocluster <cit.>.
The cores in the continuum substructure appear to be intertwined in a network of independent filamentary structures <cit.> and are likely to collapse without additional support from magnetic fields <cit.>. Despite the wealth of observations collected for this IRDC, its gas temperature structure has never been mapped before. This paper intends to establish a coherent picture of the physical conditions of dense gas across the whole extent of the IRDC using observations of two ammonia inversion lines obtained with a high spectral resolution. Figure <ref> shows the infrared extinction morphology of G035.39 and marks the portion of the cloud studied by <cit.>. The overall field of view of the figure shows the extent of observations that will be presented in this study. For comparison to the previous body of work, we will refer to the region north of δ(J2000) = +2°08′45″, approximately corresponding to the extent of previous IRAM 30m and PdBI studies, as G035.39-N.

We discuss the observations conducted and the available data used in <ref>. Dust temperature, gas column density, cloud kinematics, and ammonia abundance are presented in <ref>. We compare the gas and dust temperatures, and discuss the stability of the cloud, in <ref>. We summarize our findings in <ref>.

§ OBSERVATIONS

§.§ GBT observations

Robert C. Byrd Green Bank Telescope (GBT) observations of G035.39 were carried out on 2010 Feb 27th and 28th (project GBT/10A-067, PI: Ke Wang). G035.39 was covered in Nyquist-sampled on-the-fly (OTF) maps scanning along RA and Dec. The K-band receiver was equipped with two beams (B3 and B4) separated by 178.8 arcsec in azimuth. We used the frequency switching mode, with a ±2.5 MHz switch for the two signal states, in a 12.5 MHz band with a channel width of 3.05 kHz in dual polarization mode. The band covered the NH_3 (1,1) and (2,2) inversion lines, the CCS (2-1) rotational transition, and NH_2D (4_14-4_04), the latter being undetected. The weather conditions at the GBT site were stable during the observing run, with the pointing accuracy resulting from winds alone estimated to be below 3″. The system temperatures were at 38-45 K during the first day and between 58-78 K during the second day.

To estimate the absolute flux of the observed emission, we performed nodding observations of the quasar 3C48. For each beam, polarization, and spectral window state, the data were then reduced in GBTIDL v2.10.1 to yield main beam temperatures for each state. Using a spectral flux density model from <cit.>, we derived the predicted flux for 3C48. When compared to the reduced nodding observations, the model is consistent within 10% for the ammonia spectral windows. The off-source beam B4, however, was found to be mismatched by 50-90% in integrated intensity when compared to the on-source beam, and was consequently removed from the following discussion. As the area mapped by B4 was mainly off the bulk of G035.39, the decision to drop the beam from the subsequent discussion does not affect the results of this study.

The OTF observations of G035.39 were calibrated via the GBT pipeline[<http://gbt-pipeline.readthedocs.io/en/latest/>] <cit.> into main beam temperature units and imaged with the GBT Ammonia Survey (GAS; Friesen, Pineda et al.) pipeline[<http://gas.readthedocs.io/en/latest/>], modified to address spectral setup differences. The full width at half maximum of the GBT beam at the (1,1) line frequency (32″) was used to set the pixel size for all spectral line cubes, at three pixels per beam. The final spectral resolution of the data is 38.6 m s^-1.
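As a quick consistency check on the quoted numbers, the velocity resolution follows from the channel width via Δv = c Δν/ν_0. A minimal sketch (with the NH_3 (1,1) rest frequency as the only external input) reproduces the figure above:

```python
# Velocity resolution corresponding to a 3.05 kHz channel at the
# NH3 (1,1) rest frequency (23.6944955 GHz).
c = 299_792.458            # speed of light, km/s
nu0 = 23.6944955e9         # NH3 (1,1) rest frequency, Hz
dnu = 3.05e3               # channel width, Hz

dv = c * dnu / nu0         # radio convention: dv = c * dnu / nu0
print(f"channel width = {dv * 1e3:.1f} m/s")   # -> 38.6 m/s
```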
§.§ Herschel public data

As mentioned in <ref>, the entire extent of G035.39 has been studied before with the Herschel Space Observatory <cit.> by <cit.>. To quantify the difference in gas- and dust-derived properties, we use the available G035.39 data from the Herschel infrared Galactic Plane Survey (Hi-GAL) <cit.>. The data products used in this study, photometric maps from the PACS <cit.> and SPIRE <cit.> cameras, were downloaded from the image server of the first public Hi-GAL data release <cit.>, observation IDs 1342219631 and 1342219630. The DR1 data underwent processing by the ROMAGAL pipeline <cit.>, and in particular, had its absolute levels of emission calibrated. This allows us to directly put constraints on the properties of the dust emission.

§ RESULTS

§.§ Overview of the data

The integrated intensity maps for the ammonia and CCS transitions observed with the GBT are presented in Fig. <ref>. Not accounting for the bright satellite lines, the majority of the molecular emission detected from the cloud is situated between 43 and 46 km s^-1. A prominent secondary component is present in the southern region of the cloud and, furthermore, strong line asymmetries indicate the presence of additional line-of-sight components (see Fig. <ref> for example spectra). Given the low signal-to-noise ratio of the CCS (2-1) line, its kinematics cannot be easily constrained, and the transition is not discussed throughout this work. We note that the southern peak of the integrated intensity of the CCS coincides with a peak of the mid-infrared extinction map (Fig. <ref>c), and has no associated 24 and 70 μm point sources. As the carbon-chain molecules are known to trace regions of early-stage chemistry <cit.>, the CCS peak may be indicative of dense gas in an earlier evolutionary stage than the rest of the cores in the IRDC. More dedicated studies will be carried out toward this region in the future.

As the large-scale structure of the cloud's main velocity component manifests itself as a prominent, continuous filament, we can probe how its physical properties vary along its length. We set the line segment defining the filament from the ammonia intensity profile. The maximal value of the (1,1) integrated intensity along right ascension is calculated for each value of declination. The longest continuous segment is then taken from the obtained set of coordinate points, with a continuity condition that the neighboring points must be in the same GBT beam. This approach, simple but adequate for our task of probing pc-scale structure, recovers the intensity ridge of G035.39 without the need to resort to more sophisticated ridge detection methods. The resulting filament profile is marked on Fig. <ref>.

In the far-infrared photometric maps, G035.39 manifests itself as a typical infrared dark cloud, characterized by its emission at longer wavelengths. It appears as an extinction feature in the 70 μm band of PACS, and as an emission feature from 160 μm onwards. The central panel on Fig. <ref> displays a color-composite image of the IRDC, with the 350, 250, and 160 μm emission used for the red, green, and blue channels, respectively. As both ammonia and dust continuum trace the dense gas that constitutes the bulk of the cloud, the far-infrared morphology does not show any significant deviations from the NH_3-defined filament ridge. A clearly visible substructure in the PACS 160 μm and SPIRE 250 μm maps can be attributed to those bands having angular resolutions (12″ and 18″, respectively) different from our observations (32″). The properties derived from the Herschel maps were then regridded onto the grid given by our GBT observations.
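The ridge-finding procedure used to define the filament profile is straightforward to express in code. The sketch below is a minimal re-implementation of that logic, not the authors' pipeline: for every declination row it takes the right-ascension pixel with the maximum NH_3 (1,1) integrated intensity, then keeps the longest run of rows whose consecutive peaks lie within one beam of each other (the array names and the beam size in pixels are illustrative).

```python
import numpy as np

def find_ridge(mom0, beam_pix=3):
    """Trace the intensity ridge of a filament in an integrated-intensity map.

    mom0     : 2D array (dec, ra) of NH3 (1,1) integrated intensity
    beam_pix : max allowed RA jump (in pixels) between neighboring dec rows
    """
    ra_peak = np.nanargmax(mom0, axis=1)        # brightest RA pixel per dec row

    # Split the peak track into continuous segments: neighboring points
    # must fall within the same beam.
    segments, current = [], [0]
    for row in range(1, mom0.shape[0]):
        if abs(int(ra_peak[row]) - int(ra_peak[row - 1])) <= beam_pix:
            current.append(row)
        else:
            segments.append(current)
            current = [row]
    segments.append(current)

    ridge_rows = max(segments, key=len)          # longest continuous segment
    return [(r, ra_peak[r]) for r in ridge_rows]

# Example with a synthetic tilted filament:
dec, ra = np.mgrid[0:60, 0:40]
mom0 = np.exp(-((ra - (10 + 0.3 * dec)) ** 2) / 8.0)
print(find_ridge(mom0)[:5])
```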
§.§ Ammonia line fitting

The ammonia molecule has been proven to be an invaluable tool in probing the physical conditions of moderately dense molecular regions <cit.>, and the ratio of its collisionally populated metastable (J,K) = (1,1) and (2,2) inversion states can be used to derive the rotational gas temperature <cit.>. This temperature is commonly used as a proxy value for the kinetic temperature of the medium, derived through balancing the rates of radiative and collisional transitions <cit.>.

In the Rayleigh–Jeans approximation, an observed spectral profile of the two inversion lines along with their hyperfine components can be described as a sum over their hyperfines, assuming uniform excitation conditions for all the hyperfine components of the lines and a unity beam filling factor <cit.>:

T_MB(v) = ∑_i {1 - exp[-r_i τ_0 exp(-(v - v_i)^2 / 2σ_v^2)]} (T_ex - T_bg),

where T_ex describes the population ratios of the inversion transition parity levels, the background radiation temperature T_bg is taken to be that of the cosmic microwave background (2.7312 K), r_i are the normalized relative intensities of the hyperfine components, v_i are the velocity centroids of the hyperfine components, σ_v is the velocity dispersion, and τ_0 the main group opacity of the corresponding inversion line. The latter can be used to derive the column density of the (1,1) rotational state N_(1,1) <cit.>:

N(1,1) = [8π k ν_0^2 / (h c^3)] (1/A_1,1) √(2π) σ_ν (T_ex - T_bg) τ_0,

and can then be scaled to a total ammonia column density via a partition function, assuming that only the metastable population levels are populated <cit.>.

The spectral profile modelling and line fitting are done following the method presented in Friesen, Pineda et al., where the (1,1) and (2,2) spectra are simultaneously modelled using Eq. <ref> within the pyspeckit <cit.> Python package. The spectral profiles of both inversion lines are fitted considering the following free parameters: the kinetic temperature T_kin, the excitation temperature T_ex, the ammonia column density N(NH_3), the velocity offset with respect to the line rest frequency v_lsr, and the velocity dispersion σ_v.

As mentioned in <ref>, multiple line-of-sight components are present in the ammonia spectra. Their varying relative strengths and a priori unknown locations make conventional line-fitting techniques difficult to apply. In particular, the iterative nonlinear least-squares algorithm <cit.> used in pyspeckit is prone to failures to reach global convergence in the presence of multiple local minima. To ensure convergence of the algorithm on the global minimum, we perform parameter space gridding to select initial guesses for the fitting routine. As the parameters that vary the most among our spectra are the line centroids and their peak amplitudes, we search for the optimal starting conditions by varying the line centroid and peak amplitude.

By inspecting the (1,1) spectra for line peaks and line asymmetries, we have set the velocity intervals at which the line centroids could reside. The six intervals selected are centered at 42.9, 44.6, 45.5, 47.0, 55.2, and 72.2 km s^-1. The velocity components that were found to overlap spatially were each split into 10 values, corresponding to velocity steps of ≤ 0.1 km s^-1. Additionally, we consider a range of 10 amplitude values set up to cover the amplitude range of our (1,1) data. The resulting velocity and line brightness ranges for component pairs that were found to overlap were then permuted together to yield over 50k modeled spectra. Each modeled spectral profile was subsequently checked against all the pixels in our data, and the models with the lowest total squared residuals were used as a starting point for the nonlinear least-squares routine. The source code for the initial guess selection method described above is freely available online[<https://github.com/vlas-sokolov/multicube>].
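To make the gridding step concrete, here is a toy version of the forward model of Eq. <ref> together with a brute-force search over trial centroids and amplitudes. It is a sketch only: the hyperfine offsets and relative intensities below are placeholders rather than the real NH_3 tables, and the actual fitting is done with the dedicated package.

```python
import numpy as np

# Placeholder hyperfine structure (offsets in km/s, normalized weights);
# illustrative only -- not the real NH3 (1,1) tables.
V_HF = np.array([0.0, -7.5, 7.5, -19.5, 19.5])
R_HF = np.array([0.50, 0.14, 0.14, 0.11, 0.11])

def nh3_profile(v, tau0, v0, sigma, tex, tbg=2.7312):
    """T_MB(v) = sum_i {1 - exp[-r_i tau0 e^{-(v-v_i)^2/2 sigma^2}]} (Tex - Tbg)."""
    tmb = np.zeros_like(v)
    for vi, ri in zip(V_HF, R_HF):
        tau_v = ri * tau0 * np.exp(-((v - (v0 + vi)) ** 2) / (2 * sigma**2))
        tmb += (1 - np.exp(-tau_v)) * (tex - tbg)
    return tmb

def best_initial_guess(v, spectrum, v0_grid, tex_grid, tau0=1.0, sigma=0.3):
    """Pick the (v0, Tex) pair whose model has the lowest squared residuals."""
    best, best_rss = None, np.inf
    for v0 in v0_grid:
        for tex in tex_grid:
            rss = np.sum((spectrum - nh3_profile(v, tau0, v0, sigma, tex)) ** 2)
            if rss < best_rss:
                best, best_rss = (v0, tex), rss
    return best

v = np.arange(20.0, 80.0, 0.0386)
obs = nh3_profile(v, 1.5, 45.3, 0.5, 6.0) \
      + np.random.default_rng(3).normal(0, 0.1, v.size)
print(best_initial_guess(v, obs, np.arange(42.9, 47.1, 0.1),
                         np.linspace(3.5, 8.0, 10)))
```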
For every pixel of the spectral cube obtained in <ref> we perform a nonlinear least-squares fit with one and two velocity components. To decide on the number of components present in a spectrum, one needs to rule out overfitting. As a direct minimization of the square of the residuals would always prefer a more complex model, we limit the multiple-component fits, requiring the best-fit solution to have a signal-to-noise ratio of at least three in all components and for those components to have a peak separation larger than the line widths of the two components. Should any of these criteria not be met, a simpler model is preferred: one velocity component for a failed double-peak fit, or the spectrum is masked altogether if the single-peak fit fails as well.

§.§ Parsec-scale kinematics of the IRDC

The structure of G035.39 can be seen as a combination of a main velocity component and additional velocity components along the line of sight. We refer to the coherent structure at 44-47 km s^-1 as the main velocity component. For spectra with two line components identified, components that are brighter than their counterparts are considered to belong to the main group. In G035.39, this choice results in a spatially coherent velocity field traced by the main component.

Figure <ref> illustrates the kinematics of the main velocity component, overlaid with the additional velocity components identified. Of these additional components, the ones found in G035.39-N (the 44 and 46 km s^-1 components) coincide with the network of filaments from <cit.>. Two more velocity components are identified in both ammonia transitions from the spectral cube inspection, but, lying at 55 and 72 km s^-1, they are unlikely to be related to the main body of the IRDC. Additionally, a strong (up to peak T_MB = 1.5 K) velocity component, well-separated from the main cloud component by ∼2 km s^-1, is present at the location of the active star formation in the southern part of the IRDC. This is consistent with the location and velocity of Filament 1 from <cit.> at its southernmost point.

The velocity of the main ammonia component gradually changes from red- to blue-shifted in the southward direction. Figure <ref> illustrates this change, showing the velocity centroid and velocity dispersion profiles along the IRDC intensity ridge. The non-thermal velocity dispersions of ammonia <cit.> have a large dynamic range, sometimes going as high as 1.2 km s^-1 at the edges of the map, but generally staying within the interquartile range between 0.38 and 0.52 km s^-1 (Fig. <ref>b). These high values, implying non-thermal motions dominating the line width, are above those found in low-mass cores and are in the upper range of typical non-thermal components of massive cores <cit.>. For the H_2 sound speed derived from the fitted ammonia kinetic temperatures and assuming a mean mass per particle of 2.33 u, the average Mach number across the IRDC is ℳ = 2.14, consistent with previous studies of G035.39 that find gas motions in the cloud to be supersonic <cit.>.
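The Mach number quoted above follows from a one-line conversion: the isothermal sound speed of the bulk gas is c_s = √(k_B T_kin / (μ m_H)) with μ = 2.33, and ℳ = σ_nt/c_s. A minimal sketch, with illustrative input values close to the ones measured here:

```python
import numpy as np

k_B = 1.380649e-23      # J/K
m_H = 1.6735575e-27     # kg

def sound_speed(t_kin, mu=2.33):
    """Isothermal sound speed of the mean particle, in km/s."""
    return np.sqrt(k_B * t_kin / (mu * m_H)) / 1e3

# Illustrative numbers: T_kin ~ 13 K and a typical non-thermal
# dispersion of ~0.45 km/s give a mildly supersonic Mach number.
t_kin, sigma_nt = 13.0, 0.45
c_s = sound_speed(t_kin)
print(f"c_s = {c_s:.3f} km/s, Mach = {sigma_nt / c_s:.2f}")
```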
The gradual change of the line centroid towards the southern portion of the IRDC can be quantified in terms of a velocity gradient. Previous studies of G035.39 have discussed the global gas motions on various scales. <cit.> attribute the radial velocity irregularity towards the northern part of G035.39-N found by <cit.> to the unresolved substructure, and find the global gradients in the identified filaments to be smaller than 0.7 km s^-1 pc^-1. <cit.> find global, north-south velocity gradients of ∼0.4-0.8 km s^-1 pc^-1 along three CO filaments in G035.39. As our GBT data cover the full extent of the cloud, we are able to constrain the global velocity gradient along the whole IRDC. A least-squares fit to the ridge velocity profile, weighted by the uncertainties in the centroid velocities, results in a line-of-sight velocity gradient of ∇v ∼ 0.2 km s^-1 pc^-1 along the filament's 6 pc length (Fig. <ref>a). This value is in good agreement with the average velocity gradients reported for larger scale filament and GMC structures <cit.>.

While the global velocity gradient following the ridge of G035.39 is well described on scales larger than one parsec, smaller scale, oscillatory-like deviations from the fitted linear relation are present. Additionally, hints of a localized third component, towards the southern 42.6 km s^-1 feature, are manifested as a broader velocity dispersion at the edge of G035.39 (see Fig. <ref> for an example fit with irregular residuals). As it is mostly blended with the main component, constraining its properties is difficult due to the limited angular resolution of the GBT data. Ammonia emission is often highly substructured in star-forming regions, and can manifest itself as filamentary emission down to ∼5000 AU scales <cit.>. The exact nature of the velocity substructure in the southern part of G035.39 can be seen as either stemming from the coherent gas motions around the dense cores <cit.>, or as a picture reminiscent of the nearby low-mass star-forming Taurus complex, where previously unresolved velocity-coherent filaments are found to be bundled together in a larger structure <cit.>. The velocity irregularities along the cloud may result from the sub-pc substructure emission (similar to the one already resolved in the G035.39-N region by <cit.>) being smoothed by the GBT beam. A higher angular resolution analysis of the gas kinematics along the entire IRDC will be addressed in a future study.

§.§ Gas temperature

While the profile of the (1,1) line can be constrained accurately in the pixels with significant emission, the (2,2) line is considerably weaker in the low-temperature regime. Because of this, we impose a more stringent constraint on the physical parameters that depend primarily on the (2,2) inversion transition, namely the kinetic temperature and the total ammonia column density. For these, we only select spectra that have a > 3σ_I detection in integrated intensity. We take the expression for the integrated intensity uncertainty, σ_I = √(N) σ_T_MB Δv_c, from <cit.>, where N is the number of channels with nonzero emission (above an arbitrary threshold of 10^-5 K) in a modeled spectrum, σ_T_MB is the corresponding T_MB uncertainty, and Δv_c is the channel width of the GBT spectra.
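The masking criterion amounts to evaluating σ_I per pixel and cutting at 3σ_I; a minimal sketch of that bookkeeping (the input line and noise values are illustrative):

```python
import numpy as np

def integrated_intensity_snr(model_tmb, sigma_tmb, dv_channel, threshold=1e-5):
    """S/N of the integrated intensity of a modeled spectrum.

    sigma_I = sqrt(N) * sigma_TMB * dv_channel, with N the number of
    channels where the model exceeds a tiny brightness threshold (K).
    """
    n_chan = np.count_nonzero(model_tmb > threshold)
    i_int = np.sum(model_tmb) * dv_channel
    sigma_i = np.sqrt(n_chan) * sigma_tmb * dv_channel
    return i_int / sigma_i

# Example: a weak Gaussian (2,2) line, 0.1 K rms, 38.6 m/s channels
v = np.arange(40.0, 50.0, 0.0386)                  # km/s
model = 0.25 * np.exp(-(v - 45.0) ** 2 / (2 * 0.5 ** 2))
print("S/N =", round(integrated_intensity_snr(model, 0.1, 0.0386), 1))
```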
The gas kinetic temperature of the main component of G035.39 is presented in Figure <ref>. The derived values vary across the body of the dark cloud from about 11 K to 15 K. The cold gas temperature range is typical of other IRDCs <cit.> as well as of relatively nearby low-mass starless cores <cit.>. Due to the more stringent masking of the temperatures derived from our multi-component fit, only a handful of spectra have their kinetic temperature constrained for the weaker secondary components. As most of the derived values of the gas kinetic temperature belong to the main component, the discussion on the temperature of the weaker components is thus effectively restricted to those few detections. In the star-forming part of G035.39-N, towards the northern group of the star-forming cores, we successfully measure the gas temperature for two components along the line of sight to be between 12.9 and 13.3 K.

The region coincident with a bright infrared source south-east of the main filament (55 km s^-1 cut-out on Fig. <ref>) is consistently hotter than the bulk of the IRDC. The derived temperatures for the gas associated with the source range from 17.5 to 20.9 K, suggesting that the gas is internally heated. <cit.> identify the source as an infrared-bright protostellar MDC, and derive its dust temperature to be 26 ± 6 K. As the ammonia emission for this component peaks at 55 km s^-1, it appears to be physically unrelated to G035.39. Similarly, for a spectral line detected at 72 km s^-1 (Fig. <ref>), which appears as a starless infrared extinction feature east of the filament, we report a kinetic temperature of 16.5 ± 1.3 K. Although defined as part of the IRDC extent in <cit.> based on the common column density contour, we suggest that the feature does not form a coherent structure with G035.39.

§.§ Dust temperature

The intensity of an optically thin source of temperature T is given by the radiative transfer equation <cit.>, which can be approximated by

I_ν = B_ν(T) × (1 - e^-τ_ν) ≈ B_ν(T) τ_ν,

where B_ν(T) is the Planck blackbody function. In the expression above, the frequency-dependent opacity τ_ν can be shown to be related to the gas column density N_H_2 in the following way:

τ_ν = ∫ κ_ν ρ ds = N_H_2 μ_H_2 m_H κ_ν,

where μ_H_2 = 2.8 is the molecular weight per hydrogen molecule <cit.>, κ_ν is the dust opacity, and m_H is the mass of a hydrogen atom. For consistency with <cit.>, we have adopted a dust opacity law of κ_ν ∝ ν^β, assuming a gas-to-dust mass ratio of 100 from <cit.>. As the optically thin approximation in Eq. <ref> does not necessarily hold for the PACS bands at high column densities, we discuss its limitations for the PACS data. While the 160 μm PACS band is expected to be within the optically thin approximation well into the high-density regime (τ_160 = 0.16 for N_H_2 = 10^23 cm^-2), the 70 μm maps may trace extinction of dust rather than its emission (G035.39 appears as an extinction feature at 70 μm), rendering the optically thin dust emission assumption invalid <cit.>.

Hi-GAL PACS and SPIRE maps containing G035.39 were jointly convolved to a common resolution of 34.5″, corresponding to the beam size of the 500 μm SPIRE band. The maps were then regridded to an 11.5″ pixel size to form spectral cubes. The resulting cube was fit pixel-by-pixel with the single-temperature gray-body model described above, assuming a fixed value of β = 2 <cit.>. Fixing the value of β reduces the number of free parameters for an otherwise degenerate model <cit.>. While implicit assumptions on the spectral opacity index are made in such a way, it enables an estimation of the resulting parameter uncertainties by minimizing the errors stemming from the model degeneracy.
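In code, the pixel-by-pixel fit reduces to a two-parameter (N_H_2, T_dust) gray-body model. The sketch below is a minimal stand-in for that step, not the actual pipeline: the opacity normalization κ_0 and its reference frequency are illustrative placeholders, and β is fixed to 2 as in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B, m_H = 6.62607e-34, 2.99792458e8, 1.380649e-23, 1.6735575e-27
mu_H2 = 2.8
# Illustrative opacity normalization (cm^2 per g of gas, gas-to-dust = 100
# included); placeholder values, not the exact law adopted in the paper.
kappa0, nu0, beta = 0.1, 1e12, 2.0

def graybody(nu, n_h2, t_dust):
    """Optically thin gray body: I_nu = B_nu(T) * tau_nu, in SI units."""
    b_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * t_dust))
    kappa = kappa0 * (nu / nu0) ** beta * 0.1       # cm^2/g -> m^2/kg
    tau = (n_h2 * 1e4) * mu_H2 * m_H * kappa        # N_H2 given in cm^-2
    return b_nu * tau

# Fit one pixel: fluxes at 160, 250, 350, 500 um (synthetic data here).
nu = c / (np.array([160.0, 250.0, 350.0, 500.0]) * 1e-6)
i_obs = graybody(nu, 2e22, 16.0) * (1 + 0.05 * np.random.default_rng(4).normal(size=4))
popt, _ = curve_fit(graybody, nu, i_obs, p0=(1e22, 20.0))
print("N_H2 = %.2e cm^-2, T_dust = %.1f K" % tuple(popt))
```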
In the analysis above, we have considered a 30% uncertainty on the flux levels for consistency with <cit.>. Column density and effective dust temperature maps, derived from the 160, 250, 350, and 500 μm Herschel bands, are presented in Figure <ref>. The column density and dust temperature values show an anti-correlation, typical for externally irradiated clouds <cit.> and IRDCs <cit.>. The overall morphology of the density and temperature structure in the derived parameter maps is similar to that of <cit.>.

§.§ Molecular abundances

The column densities from Herschel and those of ammonia correlate weakly (r = 0.64, Fig. <ref>), and show no evidence for ammonia depletion, consistent with previous work toward nearby starless cores <cit.>. By taking the ratio of the two we derive the ammonia abundance in G035.39,

X(NH_3) = N(NH_3)/N(H_2).

The mean value of the total abundance is ∼2.1 × 10^-8, corresponding to a para-NH_3 abundance of ∼1.0 × 10^-8 assuming an ortho-to-para ratio of one. The range of abundances is comparable to that found towards the centers of low-mass cores (<cit.>; Friesen, Pineda et al.), and is in good agreement with the values measured towards the low-mass star-forming cluster Serpens South <cit.>. When compared to other IRDCs, our GBT data on G035.39 point to lower abundances: <cit.> derive abundances of a few times 10^-7. However, those authors observe their IRDC sample with much smaller VLA beam sizes, so allowing for a small beam filling factor in our observations may mitigate the discrepancy. Indeed, for an angular resolution of 40″ and a median sample distance of 2.9 kpc, <cit.> report X(NH_3) to be between 7 × 10^-9 and 10^-7, consistent with our findings.

§ DISCUSSION

§.§ A comparison of dust and gas temperatures

Current understanding of the heating and cooling processes within dense molecular clouds requires close coupling between gas and dust at number densities above ∼10^5 cm^-3 <cit.>. However, no correlation of dust and gas temperatures is found in G035.39 (Pearson's r = -0.16), despite the uncertainties on the temperatures being significantly smaller than the temperature dynamic range. Moreover, the dust temperatures derived from Herschel are consistently higher than the ammonia-based gas temperatures, suggesting that the two sets of observations are sensitive to different environments. Dust temperatures higher than the gas temperatures have been reported before. <cit.> find dust temperatures to be 2-3 K warmer than the gas temperatures towards starless cores in the Pipe nebula. More recently, <cit.> report their ammonia-derived kinetic gas temperatures to be consistently lower than their dust-derived counterparts. The difference in the two temperature tracers can be attributed to the line-of-sight mixing of warm foreground dust emission into the far-infrared SED, thus raising the effective line-of-sight average dust temperature. The ammonia gas temperature, on the other hand, is expected to trace the dense inner region of the filament only.

While the dust temperature map does not show any signs of local enhancements in the vicinity of dense cores, our results point to local temperature increases of about 1 K toward the sites of active star formation. The magnitude of the effect is comparable to what <cit.> find towards Perseus, where protostellar cores have been found to have ammonia-derived kinetic temperatures 1.3 K larger than the starless cores.
To further investigate the possibility of dense gas heated by embedded protostars, we split the pixels into two groups: those within one GBT beam of the massive dense cores in <cit.> and those outside of it. A number of cores with masses lower than 20 M_⊙ were identified by <cit.> in addition to the MDCs. As the total luminosity output of those cores is, on average, half of that of the MDCs, we restrict the spatial temperature distribution analysis to distances from the MDCs only. We find the median values of the gas kinetic temperature to be 13.1 K inside the one-beam radii, and 12.5 K outside of them. On the contrary, the median dust temperature increases from 15.9 to more than 18 K away from the massive dense cores (Fig. <ref>). In order to visualize the opposite trends in gas and dust temperatures, we plot their kernel density estimates as a function of distance to the closest massive core (Fig. <ref>a). The contours on Figure <ref> represent the levels of the density function based on the scatter of the temperature points only, as the fitting uncertainties in both ammonia- and dust-based temperatures are much smaller than the dynamic range of the trends in the figure. The mean temperature errors and their standard deviations are σ_T_kin = 0.38 ± 0.15 K and σ_T_dust = 0.36 ± 0.09 K for the gas and dust temperatures, respectively.

The systematically different values of dust and gas temperatures stress the importance of considering the line-of-sight mixing of the IRDC component with the warm dust emission. A number of methods for estimating the background component in photometric data are available <cit.>[<https://github.com/esoPanda/FTbg>]. To address the line-of-sight contamination effects on the dust temperatures, we model the line-of-sight contamination of the dust emission via two approaches:

* A Galactic Gaussian (GG) method <cit.>, assuming that the Galactic contribution follows a Gaussian profile along the latitude direction within the -1° ≤ b ≤ +1° latitude interval;

* A Small Median Filter (SMF) method, which interpolates the background contribution from the region outside a predefined IRDC boundary <cit.> and estimates the 160 to 500 μm foreground by utilizing an SED model of the diffuse ISM <cit.> normalized to the available 24 μm extinction data <cit.>.

A detailed description of each method is presented in <cit.>. The GG- and SMF-processed maps were used to derive the corrected column density and dust temperature maps following <ref>. We find that both subtraction methods result in lower dust temperatures in G035.39, with the average values for the GG and SMF dust temperatures being 12.4 ± 1.0 and 13.8 ± 0.9 K, respectively. These values, compared to the mean gas kinetic temperature T_kin = 12.9 ± 0.8 K, suggest that the corrections applied to the maps shifted the peak of the dust SEDs into a temperature regime that much better reflects the actual gas temperature. Figure <ref>b shows the same distance-temperature relation as Fig. <ref>a, but for the GG method, the method that most closely matches the ammonia temperature trend. Despite the two trends showing some degree of agreement, no significant correlation is found between the two temperatures (Pearson's r = 0.26).

§.§ Stability of the filament

The support of a filament against gravitational collapse is often discussed in terms of its mass-to-length ratio, or line mass (M/L). We estimate the line mass from the Hi-GAL H_2 column density map on the pixels that have ammonia detections.
The total mass can be estimated as a sum over the column density pixels as follows:

M = μ_H_2 m_H D^2 ∫ N_H_2 dΩ,

where D = 2.9 kpc is the kinematic distance to G035.39.

The mass estimate above represents a sum over all the optically thin dust emission, including both the physical region traced by the GBT ammonia observations and a contribution along the line of sight. To subtract the line-of-sight contamination, we make the simplistic assumption that the total gas column of this extra LoS material is equal to the mean N_H_2 value along the bounding contour around the (2,2) detection of the main velocity component. We apply a correction offset of 2.64 × 10^22 cm^-2 to the G035.39 gas column density map. By summing over the column densities within the bounding contour, we find the mass of G035.39 to be 1218 M_⊙, corresponding to (M/L) ≈ 223 M_⊙ pc^-1. We argue that this value, limited both by the sensitivity of the (2,2) detection and by the column density correction above, should resemble a lower limit on the line mass of the filament. The similarly derived (M/L) value for the (1,1) detection yields a line mass of 319 M_⊙ pc^-1, while the mass-to-length ratio of the filament without the envelope subtraction is 634 M_⊙ pc^-1. Similarly, if the GG- and SMF-derived densities are used to calculate the line mass as above, we obtain (M/L) values of 635 and 494 M_⊙ pc^-1, respectively, for slightly different values of T_dust. The representative range of line mass values, 223-635 M_⊙ pc^-1, is an order of magnitude higher than the critical line mass of a thermally supported filament at 15 K, highlighting the importance of other means of support, such as non-thermal motions or magnetic fields.

Given our ammonia fitting results, we can calculate what critical line mass would be needed for the filament to be in equilibrium. By adapting the conventional filamentary virial analysis <cit.> to include both thermal and non-thermal support <cit.>, one can estimate the critical line mass to be

(M/L)_crit = 2 σ_tot^2 / G,

where σ_tot^2 = σ_nt^2 + c_s^2 is the quadrature sum of the non-thermal velocity dispersion σ_nt and the isothermal sound speed c_s. The range of non-thermal motions representative of the filament points to critical line masses from 50 to 200 M_⊙ pc^-1, consistently lower than the Herschel-derived (M/L) range. Similarly, this critical line mass regime is lower than the (M/L)_crit values derived from the CO emission <cit.>. This apparent disparity may result from the bulk of the ammonia emission being more sensitive to the inner filament material of G035.39 than the CO data, as the former is known to be relatively enhanced in the denser regions of starless cores <cit.>, where the CO is frozen out onto dust grains. This proposition is supported by PdBI observations of continuum structures on much smaller spatial scales (0.03-0.07 pc), where <cit.> suggest that magnetic fields play an important role in supporting the compact continuum cores against gravitational collapse. We suggest that their conclusions on the dynamical state of compact dense cores may be extended to the larger scale ammonia filament in this study.
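The line-mass bookkeeping above reduces to a few array operations. The sketch below is illustrative only: it implements Eq. <ref> for M and the critical line mass (M/L)_crit = 2σ_tot^2/G, with synthetic inputs chosen to mimic the numbers quoted in the text.

```python
import numpy as np

m_H, G = 1.6735575e-24, 6.674e-8       # cgs units
mu_H2 = 2.8
pc, Msun = 3.0857e18, 1.989e33         # cm, g

def line_mass(n_h2_cm2, pix_sr, d_pc, length_pc, n_offset=0.0):
    """(M/L) in Msun/pc from M = mu_H2 m_H D^2 * sum(N_H2 dOmega)."""
    d_cm = d_pc * pc
    mass = mu_H2 * m_H * d_cm**2 * np.sum((n_h2_cm2 - n_offset) * pix_sr)
    return mass / Msun / length_pc

def critical_line_mass(sigma_nt_kms, c_s_kms):
    """(M/L)_crit = 2 sigma_tot^2 / G, in Msun/pc."""
    sigma_tot2 = (sigma_nt_kms**2 + c_s_kms**2) * 1e10    # (cm/s)^2
    return 2 * sigma_tot2 / G * pc / Msun

# Illustrative inputs: 11.5 arcsec pixels at D = 2.9 kpc, a uniform
# N_H2 = 3e22 cm^-2 strip of 600 pixels, and the LoS offset from the text.
pix_sr = (11.5 / 206265.0) ** 2
n_map = np.full(600, 3.0e22)
print(f"(M/L)      ~ {line_mass(n_map, pix_sr, 2900.0, 6.0, 2.64e22):.0f} Msun/pc")
print(f"(M/L)_crit ~ {critical_line_mass(0.45, 0.21):.0f} Msun/pc")
```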
§ CONCLUSIONS

IRDC G035.39–00.33 was studied in its entirety with the Green Bank Telescope. The morphology of the cloud at this resolution resembles that of a smooth filament, more than six parsecs in projected length. Multiple distinct components are present along the line of sight throughout the length of the cloud.

* We derive a reliable map of the gas temperature for the entire G035.39 cloud. The extended gas reservoir in the IRDC is consistent with being of starless nature (∼11-15 K). We find evidence of gas heating from the embedded protostars, manifested as a slight temperature increase around the positions of the 70 μm sources.

* Despite having a similar angular resolution, the Herschel observations point to temperatures 2-3 K higher than those of the GBT observations, which can be attributed to the effects of line-of-sight contamination. We show that this offset can largely be mitigated by accounting for the background and foreground emission components. Reliance on dust temperature maps derived from conventional FIR SED fitting with no accounting for the line-of-sight contributions may consistently overestimate the temperature of the dense gas.

* We find the brightest velocity component of G035.39 to exhibit a smooth, consistent velocity gradient of ∼0.2 km s^-1 pc^-1 in magnitude. This velocity gradient is much lower than that found on smaller scales towards G035.39-N, but is consistent with the global gas motions on parsec and GMC scales.

* In addition to the large-scale gradient along the filament, local velocity field irregularities point to the presence of substructure and smaller-scale fragmentation at scales not traced by the GBT. We suggest that an intertwined network of compact filaments might exist in the southern part of the IRDC, possibly continuing from the one found in G035.39-N. Higher angular resolution observations are needed to fully resolve the sub-structured kinematics of the cloud.

We would like to thank the anonymous referee for the comments which helped to improve this manuscript. The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement with Associated Universities, Inc. VS, JEP, and PC acknowledge the support from the European Research Council (ERC; project PALs 320620). KW is supported by grant WA3628-1/1 of the German Research Foundation (DFG) through the priority program 1573 ("Physics of the Interstellar Medium"). IJ-S acknowledges the financial support received from the STFC through an Ernest Rutherford Fellowship (proposal number ST/L004801/2). JCT acknowledges NASA grant 14-ADAP14-0135. This research made use of Astropy, a community-developed core Python package for Astronomy <cit.>, and of APLpy, an open-source plotting package for Python <cit.>. In addition, this work made use of the dust_emissivity package[<https://github.com/keflavich/dust_emissivity>] in the derivation of dust properties.
http://arxiv.org/abs/1706.08903v1
{ "authors": [ "Vlas Sokolov", "Ke Wang", "Jaime E. Pineda", "Paola Caselli", "Jonathan D. Henshaw", "Jonathan C. Tan", "Francesco Fontani", "Izaskun Jimenez-Serra", "Wanggi Lim" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170627151537", "title": "Temperature structure and kinematics of the IRDC G035.39-00.33" }
http://arxiv.org/abs/1706.08434v2
{ "authors": [ "Michal Deák", "Krzysztof Kutak", "Konrad Tywoniuk" ], "categories": [ "hep-ph", "nucl-th" ], "primary_category": "hep-ph", "published": "20170626151850", "title": "Towards tomography of quark-gluon plasma using double inclusive forward-central jets in Pb-Pb collision" }
Electron crescent distributions as a manifestation of diamagnetic drift in an electron scale current sheet

§ INTRODUCTION AND MAIN RESULTS
Further progress in obtaining simple formulas has been made in <cit.>. In the search for non-Gaussianity, second-order perturbation theory is enough to study the three-point function (or its Fourier counterpart, the bispectrum). However, if one wants to go to the four-point function (or the trispectrum) for instance, third-order perturbation theory is needed to fully account for all the contributions. The CMB anisotropies up to third order were first computed in <cit.> by using two methods: the first one is the same used by <cit.> in 1997, and the second one, which is simpler and closer to our method, allowed them to obtain a fully non-linear Sachs-Wolfe formula for the specific case in which the metric is totally determined by two scalar variables, Φ and Ψ. Additional efforts to obtain the non-linear description of the CMB can be found in <cit.>.

§.§ Main results: Discussion

Choosing a particular parametrization of the metric is essential for obtaining exact solutions in cosmology, and this was the case in this work. By writing the line element as ds^2 = a^2(η) e^2Φ dŝ^2 with the conformal metric given by

dŝ^2 ≡ -dη^2 + 2β_j (e^-M)_ji dx^i dη + (e^-2M)_ij dx^i dx^j,

we were able to obtain an exact expression for the observed CMB temperature, T_o = T̅_o e^Θ, where T̅_o is the observed mean temperature and

Θ ≡ (Θ̂_e - Θ̂_o) + (Φ_e - Φ_o) + I_0(x_e, x_o) + ln[γ_e (1 - n_e · v_e) / (γ_o (1 - n_o · v_o))].

Here Θ̂, Φ and I_0 are, respectively, the non-linear generalizations of the intrinsic temperature anisotropies, the gravitational potential, and the integrated Sachs-Wolfe effect. The logarithm term corresponds to the Doppler effect, with γ the Lorentz factor. x_e = (η, x^i)_e and x_o = (η, x^i)_o are the spacetime coordinates of the emission and observation events, and η is the conformal time. In terms of Θ, we can easily obtain the temperature anisotropies as

ΔT_o/T̅_o = e^Θ - 1 = Θ + Θ^2/2 + ⋯,

and in the case of perturbation theory we just need to truncate the series at the desired order. Because of the relation Θ = ln(T_o/T̅_o) = ln(1 + ΔT_o/T̅_o), we will call Θ the logarithmic temperature anisotropies. Note that:

* The intrinsic temperature anisotropies Θ̂_e are defined through the relation (this notation was also used in <cit.>) T_e = ⟨T⟩_e e^Θ̂_e, where ⟨T⟩_e is the background temperature at the time of emission. Note also the presence of the factor Θ̂_o in eq:Gsw-intro, which is absent in previous works in the literature. This factor is important for two reasons: it makes the expression for Θ symmetric in the emission and observation points, and it ensures the gauge invariance of Θ. Without this factor, neither Θ nor T̅_o would be gauge invariant (although T_o would). So introducing Θ̂_o also ensures the gauge invariance of the mean temperature T̅_o, as it should be. The definition of Θ̂_o is given in the next section.

* Although T_o contains crossed terms involving the fields at the emission and observation points (for instance, at second order it contains terms of the form Φ_e Φ_o), Θ does not contain such mixed terms. That is, Θ is composed of a sum of locally defined terms. In particular, the Doppler term is just ln[γ_e (1 - n_e · v_e)] - ln[γ_o (1 - n_o · v_o)].

* Note also that in Θ the ISW effect is clearly separated from the other terms, like Φ and v (although it is correlated with lensing, see sec:lensing). This makes the study of the ISW (as well as lensing) easier by directly using Θ rather than ΔT_o/T̅_o.
In previous expressions in the literature (see for instance <cit.>), many integrated terms are coupled with other quantities, making it difficult to isolate the ISW effect from the rest. So our results can be stated in a different way: by taking the logarithm of the temperature anisotropies we are performing a kind of "resummation" and removing these spurious non-linearities. This is similar to what happens in quantum field theory, in which the disconnected Feynman diagrams are removed by taking the logarithm of the propagators. Finally, since propagators in quantum mechanics are just correlation functions, we expect that the correlation functions of Θ are much simpler than those of ΔT_o/T̅_o. For instance, by considering ΔT_o/T̅_o instead of Θ, we are considering spurious quadratic, cubic, ..., terms which could create a bias in the search for primordial non-Gaussianity.

* Even if we treat Θ as a first-order quantity, ΔT_o/T̅_o will not be linear as it contains all powers of Θ. This shows that even if Θ is a Gaussian distributed quantity (which in general is not the case, but it would be a good approximation if we evolve Θ linearly from single-field initial conditions during inflation), ΔT_o/T̅_o is not Gaussian.[In particular, the off-diagonal part of the two-point correlation function does not vanish. ] Because of this, it seems better to use Θ as the variable to be studied in future CMB experiments. That is, we propose to study the maps of the logarithmic temperature anisotropies ln(1 + ΔT_o/T̅_o). Such a map will be free of Doppler modulation (see below) and other couplings[Note that the couplings induced by aberration and lensing cannot be removed by such a procedure. ] which otherwise will be present in a normal map of ΔT_o/T̅_o. In practice, what is measured in an experiment like Planck are the variations in the intensity

I_obs(ν, n) = (2hν^3/c^2) × 1/[exp(hν / (k_B T_o(n))) - 1],

so that Θ can be calculated directly from the variation δI_obs without explicitly giving T_o.

* If we still want to analyze the data in terms of ΔT/T̅ rather than Θ, the theoretical n-point correlation functions of ΔT/T̅ and Θ are easily related for the specific case of a Gaussian distributed Θ (see for instance <cit.>).

* Within the linear regime, it is well known that for adiabatic perturbations, in the Poisson gauge and in the large scale limit (where we can neglect v_e · n_e and the ISW term), we have Θ̂ = -2Φ/3, so that Θ ≈ Φ_e/3 (here without considering the contributions at the observer). It has been shown in <cit.> that this relation continues to hold at the non-linear level. So, this result in conjunction with the formula eq:Gsw-intro suggests that the metric parametrization introduced in this work and our definition of the non-linear intrinsic perturbations Θ̂ are appropriate to extend the results of the linear theory to the non-perturbative level.

* Since conformal transformations map null geodesics into null geodesics, the photon's path is totally determined by the conformal metric dŝ^2. Therefore the integrated Sachs-Wolfe term I_0, as well as the lensing terms encoded in x_e and n_e, are totally determined by β_i and M_ij. The explicit forms of these quantities are given in the next sections.

* In principle Φ, β_i and M_ij are independent quantities, but in the linear regime (and during matter domination) general relativity predicts that M^i_i/3 = 2Φ.
So by measuring Φ with an independent method like the use of the Poisson equation <cit.>, and comparing with the lensing and ISW measurements, we can test general relativity. Note that by having the non-linear version of the ISW and lensing effects we could make a better interpretation of future data. This is because, even if Einstein's gravity is correct, the inadequate use of the linear approximation to analyze the data could indicate a deviation from the expected relation between Φ and M^i_i.

* An immediate consequence of eq:DT2 is that Doppler modulation of the temperature anisotropies always exists, regardless of the nature of the dipole. Let us explain this a bit more. Split the logarithmic anisotropies as Θ = Θ_d + Θ̃, where in a multipolar expansion Θ_d refers to the dipole of the logarithmic anisotropies and Θ̃ contains all the remaining multipolar components, that is, ℓ ≥ 2. Then we see from eq:DT2 that the observed temperature anisotropies up to second order are given by

ΔT_o/T̅_o = Θ_d + Θ̃ + (Θ̃^2 + Θ_d^2)/2 + Θ_d Θ̃.

The last term is what we call the Doppler modulation[The more appropriate name would be dipolar modulation, but in the case in which the dipole is mainly of kinematical origin this modulation is due to the Doppler effect. We adopt this name here because the CMB dipole is believed to be due to our peculiar velocity. ] of the temperature anisotropies, and it leads to couplings between neighboring multipolar components (ℓ, ℓ±1) in the two-point correlation function that are proportional to the magnitude of the CMB dipole. These couplings (as well as aberration couplings) were measured by Planck[Two independent works realized that such an effect could be observed by the Planck satellite, <cit.>. ] in <cit.>. The results were consistent in amplitude and direction (at the 3σ level) with the well-known measured CMB dipole, that is, they are consistent with the prediction of the simple formula Θ_d Θ̃. According to eq:DT2v2, Planck's measurements tell us nothing about the nature of the CMB dipole. However, measuring Doppler modulation is important for the following reason: suppose that a more precise measurement of Doppler modulation is made by a future CMB experiment like CORE <cit.>, and suppose also that the results show a significant deviation from the simple expectation Θ_d Θ̃. That would imply that the term Θ̃ in eq:DT2v2 contains dipolar-like modulation couplings, and they necessarily come from primordial non-Gaussianity terms that couple the long mode (the dipolar components) with the short modes (the higher multipoles). Such a result would rule out single-field inflationary models and would require a non-negligible amplitude for the dipolar components. These facts were first noted in <cit.>. Although the previous results follow immediately from eq:DT2v2, they were far from obvious using previously existing formulas, like the one given in <cit.>. Finally, we want to mention that the conclusions of <cit.> regarding dipolar modulation were restricted to the large scale case, but here the proof holds at any scale.
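The (ℓ, ℓ±1) coupling induced by the Θ_d Θ̃ term can be verified directly in a toy setup. The sketch below is ours and purely illustrative: it uses an azimuthally symmetric sky expanded in Legendre polynomials, exponentiates Θ = Θ_d + Θ̃, and re-projects ΔT/T̅, showing that the extra power at multipole ℓ comes from its neighbors ℓ±1 with the amplitude predicted by the recurrence μP_ℓ = [(ℓ+1)P_ℓ+1 + ℓP_ℓ-1]/(2ℓ+1).

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(2)
lmax = 30
beta = 1.23e-3                         # dipole amplitude, Theta_d = beta*mu

a = np.zeros(lmax + 2)                 # Legendre coefficients of Theta_tilde
a[2:lmax] = 1e-5 * rng.standard_normal(lmax - 2)   # only ell >= 2

mu, w = leg.leggauss(2048)             # Gauss-Legendre nodes/weights on [-1, 1]
theta = beta * mu + leg.legval(mu, a)  # Theta = Theta_d + Theta_tilde
dT = np.expm1(theta)                   # dT/T = e^Theta - 1

def coeff(f, ell):
    """c_ell = (2 ell + 1)/2 * integral of f(mu) P_ell(mu) dmu."""
    P = leg.legval(mu, np.eye(lmax + 2)[ell])
    return (2 * ell + 1) / 2.0 * np.sum(w * f * P)

# Extra power at ell comes from ell +/- 1, with amplitude proportional to beta:
for ell in (5, 10, 20):
    measured = coeff(dT, ell) - a[ell]
    predicted = beta * (ell / (2 * ell - 1) * a[ell - 1]
                        + (ell + 1) / (2 * ell + 3) * a[ell + 1])
    print(ell, f"measured {measured:+.2e}", f"predicted {predicted:+.2e}")
```

The measured and predicted couplings agree up to the (much smaller) Θ̃^2 contribution, as expected from the second-order expansion above.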
We want to stress that in our results we have assumed a perfect blackbody spectrum for the CMB (eq:Iobs). However, it is known that spectral distortions (deviations from the blackbody spectrum) start being relevant at second order. A non-linear treatment of these spectral distortions was introduced in the nice paper[I am very grateful to Cyril Pitrou for letting me know about the works I cite in this paragraph. ] of Stebbins <cit.>. There, Stebbins considered the observed spectrum as a superposition of blackbodies with different temperatures and introduced the concept of a mean logarithmic temperature, which must be related to our definition of Θ. On the other hand, in the same way as we are proposing the use of Θ = ln(1 + ΔT_o/T̅_o) for future CMB maps, the use of logarithmically averaged temperature moments to describe the spectral distortions was also advocated in <cit.>. The importance of using the exponential notation was also noted in <cit.>, though only particular cases were considered there.

Finally, note that in order to make quantitative predictions the Sachs-Wolfe formula is not enough, as we need to specify the fields Θ̂, Φ, β^i, etc., as well as the integration path x^i(η). In this sense further progress is needed to obtain (analytical or numerical) non-linear solutions of Einstein's (or Boltzmann's) equations. On the other hand, as perturbative solutions are still of high importance, in sec:SW2nd-curved we obtain the full second-order Sachs-Wolfe formula. Perturbative solutions for the metric and fluid perturbations are known in some specific cases (e.g., by assuming matter domination or the large scale limit), see for instance <cit.>. For other useful results at second order, see <cit.>.

§.§ Main results: Obtaining the generalized Sachs-Wolfe formula

In sec:geo-eq we solve the time-component of the geodesic equation, which allows us to relate the observed temperature T_o with the emission temperature T_e by a simple relation. The results are given in eq:ToTeExpI0, and can be expressed as[The explicit form of I_0 is given in eq:I0. ]

T_o = T_e (a_e/a_o) e^Φ_e - Φ_o + I_0 γ_e (1 - n_e · v_e) / [γ_o (1 - n_o · v_o)],

where v_o (and v_e) is the peculiar velocity of the observer (and the emitter). Note that, given an observer with four-velocity u, its peculiar velocity is defined according to[Note that v is related to v̄, the velocity of comoving observers w.r.t. u, by the relation v̄ = -v, see sec:Photons-and-obs. ]

u = γ (ū + v), γ = -u · ū = 1/√(1 - v · v),

where ū · v = 0 and ū is the four-velocity of comoving observers. On p. 5 of <cit.> a similar result to eq:ToTeExpI0-intro was obtained, but without including vector and tensor perturbations, and without considering the velocities of the emitter and observer. We now define the logarithmic intrinsic temperature anisotropies and clarify some concepts about the mean values. After that, we will get the final form of the generalized Sachs-Wolfe formula.

§.§ (Logarithmic) Intrinsic temperature anisotropies

Before the epoch of recombination, the Universe was so hot and dense that photons frequently interacted with the free electrons via Thomson scattering,[See sections 8.7.1 and 11.3.1 of <cit.> for further details on this. ] while the electrons frequently interacted with protons via Coulomb scattering, thus forming the so-called photon-baryon fluid. As a result, the fluid reached a state of thermal equilibrium and the photons are well described by a blackbody distribution function. However, because of the inhomogeneities the thermal equilibrium is just local, meaning that different local observers[Here, an observer can be for instance an electron. ] in the rest frame of the fluid will measure different values for the temperature T, that is, T = T(x). During recombination the Compton scattering rate decreases and anisotropies in the photon's distribution function will appear, that is, T = T(x, n).
We will write the temperature of the photon fluid asT(x, n) = ⟨T⟩ e^ϑ , where ϑ = ϑ(x, n) ,and ⟨T⟩ ∝ 1/a(η) is the background temperature. We will call ϑ the logarithmic perturbations. This expression is meaningful for η ≤ η_e, when the photons and baryons are still in equilibrium. That is, ϑ is not defined for η > η_e. Below we will provide an extension of ϑ for η > η_e, so that ϑ is a field defined in the whole spacetime. Note that η = const is defined by the physical requirement that ⟨T⟩ = const, as in Mirbabayi & Zaldarriaga <cit.>. In that sense, when transforming the CMB temperature it is better to use gauge transformations (active transformations, acting on the fields) rather than passive transformations (transformations of the coordinates), because in the latter case the transformation of the time coordinate becomes intricate. This issue will be discussed in detail in a future work. See also <cit.> for a specific example.We stress that the mean ⟨·⟩ is taken on the space-like 3D hypersurfaces of constant η.However, what is important for the CMB is the mean taken on the last scattering surface[Again, ⟨T⟩_e is relevant for defining the time of emission η_e. ] S_e,o. Here we define S_e,o as the 2D surface (usually thought of as a deformed spherical shell) formed at the intersection between the hypersurface η = η_e and the past light-cone of the observer. It follows from eq:ToTeExpI0-intro that the observed mean temperature is T̅_o(x_o; η_e) = (a_e/a_o) ⟨T⟩_e e^{ϑ_o} , where we have defined the intrinsic temperature anisotropies at the observer's spacetime position x_o as a mean value on the last scattering surface[Mean values on S_e,o represent integrations w.r.t. the direction of observation n_o. ] S_e,o:e^{ϑ_o} ≡ ⟨ exp( ϑ_e + Φ_e − Φ_o + I_0 + ln[γ_e(1 − n_e · v_e)] − ln[γ_o(1 − n_o · v_o)] ) ⟩ . From eq:barTo it follows that ϑ_o = ϑ_o(x_o; η_e) transforms under gauge transformations in the same way as the logarithmic anisotropies ϑ_e, but evaluated at the observer's position. Since this definition is valid for any observer with η_o > η_e, it provides a natural extension of the field ϑ to the whole spacetime. Note however that by construction ϑ_o depends only on the spacetime position x_o, not on the direction of observation n_o. This is in contrast with the intrinsic temperature anisotropies ϑ_e, which according to the discussion at the beginning of this section could depend[In fact it does, as the intrinsic temperature anisotropies have a quadrupole component which acts as a source for the CMB polarization <cit.>. ]on n_e. It follows from Eqs. (<ref>) and (<ref>) that the observed temperature can be written as T_o = T̅_o e^Θ, withΘ ≡ (ϑ_e − ϑ_o) + (Φ_e − Φ_o) + I_0 + ln [γ_e(1 − n_e · v_e)] / [γ_o(1 − n_o · v_o)] . Finally, we define the mean temperature T̅_e of the last scattering surface asT̅_e(x_o; η_e) ≡ ⟨T⟩_e e^{ϑ_o} . Because in general T̅_e ≠ ⟨T⟩_e, the previous equation shows that ϑ_o tells us how anisotropic the last scattering surface is (see figure <ref>). From eq:barTo it follows thatT̅_o = T̅_e (a_e/a_o) . The previous relation is simply the statement that the mean temperature evolves only through the cosmological expansion. Additionally, the quantity Θ is what we call the logarithmic CMB temperature anisotropies, eq:Gsw is the non-linear generalization of the Sachs-Wolfe formula, and I_0 (given in eq:I0) is the non-linear generalization of the integrated Sachs-Wolfe effect. As we will show in sec:SW1srt, the presence of the factor ϑ_o guarantees the gauge invariance of Θ and T̅_o.
eq:Gsw is the main result of this paper.In the remaining sections we do the explicit calculations and consider particular cases. For instance, in sec:review we give a quick review of fundamental concepts and introduce the notation. In sec:metric-tetrad we introduce a tetrad basis which facilitates the resolution of the geodesic equation and allows us to interpret the metric components β_i as the tetrad components of the four-velocity of comoving observers. We then compare our results with previous ones in the literature. Firstly, we consider the first-order case in sec:SW1srt and discuss the gauge invariance. In sec:lensing we show how to obtain the lensing term up to the desired order; then in sec:SW2nd-curved we obtain the second-order Sachs-Wolfe formula, which is simpler than previous ones in the literature, and finally we give the conclusions.In a companion paper <cit.> we discuss the subtle issue of second-order gauge transformations on the CMB, prove the gauge invariance of our second-order formula and introduce the concept of a cosmological river-frame. Further applications of our results and comparisons with existing ones will appear elsewhere <cit.>. § QUICK REVIEW OF FUNDAMENTAL CONCEPTSIn this section we quickly review some concepts which will be important to find the exact solution for the Sachs-Wolfe formula and at the same time give a clear geometrical meaning to each term in that formula.§.§ TetradsAn orthonormal dual tetrad ê^a(x) is a set of dual vectors, a = 0, 1, 2, 3, attached to each point x^μ of the spacetime, in which the line element looks Minkowskian[For an introduction to tetrads we refer the reader to <cit.>. Here, I am using the very nice notation used in <cit.>. ]: ds² = η_ab ê^a ê^b ,and so the tetrad axes form (at each point) a locally inertial orthonormal frame. We can transform between the tetrad frame and the coordinate frame by using the matrix e^a_b and its inverse e_a^b:ê^a = e^a_b dx^b , dx^a = e_b^a ê^b .The orthonormal tetrads ê_a (that is, the duals of ê^a) are related to the coordinate vectors ∂_a (the duals of dx^a) by ê_a = e_a^b ∂_b , ∂_a = e^b_a ê_b . Since any vector (or tensor) can be expressed in any basis, we can write for instance (for a vector v and co-vector k)v = v^a ∂_a = v̂^b ê_b , k = k_a dx^a = k̂_b ê^b , and by using the change-of-basis matrices we can obtain the transformation rules for the components: v̂^a = e^a_b v^b , k̂_a = e_a^b k_b , v^a = e_b^a v̂^b , k_a = e^b_a k̂_b . The same analysis can be made for tensors. In particular, the components of the metric tensor transform as ĝ_ab = e_a^μ e_b^ν g_μν; but we defined the tetrads to be orthonormal, in the sense that the metric looks Minkowskian (eq:tetradDef), therefore ĝ_ab = η_ab, and we getη_ab = e_a^μ e_b^ν g_μν , g_μν = e^a_μ e^b_ν η_ab , with similar expressions for the inverse matrices g^ab and η^ab. Finally, since the metric g is used to raise and lower spacetime indices, we can easily see that the metric η is used to raise and lower tetrad indices, that is: v̂_a = η_ab v̂^b and v̂^a = η^ab v̂_b. §.§ Photons and observersIn this paper we use the (−,+,+,+) signature for the metric. So, the four-velocity of a given observer satisfies u · u = −1, where "·" represents the scalar product between four-vectors, that is, u · u = u^a u_a = û^a û_a. Two observers u_2 and u_1 are related by[Eqs. (<ref>)-(<ref>) are given in a series of articles that follow the so-called 1 + 3 covariant approach to general relativity. See for instance <cit.>.
] u_2 = γ_(1),2 (u_1 + v_(1),2) , where u_1 · v_(1),2 = 0 , γ_(1),2 = −u_1 · u_2 = 1/√(1 − v_(1),2 · v_(1),2) , and v_(1),2 is the relative velocity of u_2 w.r.t. u_1. For an observer u^a, the four-momentum p^a of a given photon can be written as[Note that we are writing scalars in capital letters and vectors and tensors in small letters. ] p = E(u − n) , with u · n = 0 , E = −p · u , where E and n^a are the observed energy and direction of arrival. Note that n · n = 1 and that d^a ≡ −n^a is the direction of propagation of the photon. In the following, it will be useful to introduce the concepts of comoving observers ū and tetrad-comoving observers ũ; they are defined by the relationsū^i = 0 (comoving observers) , ũ^î = 0 (tetrad-comoving observers) , where î denotes a spatial index in the tetrad basis. Note that in general a comoving observer does not coincide with a tetrad-comoving observer. In fact, for the former the tetrad components of the four-velocity are ū^â = ê^â_0 ū^0, showing that in general ū^î does not vanish. For tetrad-comoving observers the energy and direction of incoming photons have a simple form:Ẽ = −p̂_0̂ , ñ^â = (0, −p̂^î/p̂^0̂) ; additionally the decomposition of the four-velocity u = γ̃(ũ + ṽ) is quite simple:γ̃ = u^0̂ = √(1 + u^î u_î) , ṽ^â = (0, u^î/u^0̂) . Using these results we can relate the energy E and direction n as observed by u with the energy Ẽ and direction ñ as seen by tetrad-comoving observers, simply byE = Ẽ(u^0̂ + ñ · u) , ñ^î = (n^î − u^î)/(u^0̂ − n^0̂) . For comparison with other works in the literature, let's now relate the observed energy E to the energy Ē seen by comoving observers. We can obtain two equivalent expressions: the first one is obtained by applying eq:EinTCF two times[That is, we apply two boosts: one from the comoving frame to the tetrad-comoving one, and then one additional boost to the u-observer frame.]E = Ē (u^0̂ + ñ · u)/(ū^0̂ + ñ · ū) , and the other one follows by applying the boost directly from the comoving observer to the u-observerE = Ē γ(1 + n̄ · v̄) = Ē γ(1 − n̄ · v) , where v̄ is the velocity of u w.r.t. ū, and we have introduced the peculiar velocity v ≡ −v̄, that is, u = γ(ū − v) . Although we will call v the peculiar velocity of the observer, it is clear that this is not the velocity of u with respect to ū. We have introduced this concept in order to be closer to the notation used in many other works; see for instance Eq. (1) of <cit.>. It follows from the previous equations that1/[γ(1 − n̄ · v)] = (ū^0̂ + ñ · ū)/(u^0̂ + ñ · u) . Although the photon's energy has a simple form in the comoving frame,Ē = −p_0 ū^0 = −p_0/√(−g_00) , most of the time we prefer to work with tetrad-comoving observers because of the nice properties given in this frame (see Eqs. (<ref>)-(<ref>)). In particular, for the direction ñ we have ñ_î = ñ^î, making it safe to use bold notation (see below).By contrast, in the comoving frame we have (n̄)_0 = 0 but in general n̄^0 ≠ 0. Additionally, the physics becomes more transparent when using a locally orthonormal basis (tetrads) instead of a coordinate basis. §.§ Observed CMB temperature It is well known (see for instance p. 588 of <cit.>) that in the absence of secondary scatterings the CMB temperature at the point of observation T_o is related to the temperature at emission T_e by (this is a consequence of the Liouville theorem)T_o = (E_o/E_e) T_e , where E_e (E_o) is the energy of photons at the emission (observation) point. Note that in general the temperature is a function of both the spacetime position x and the direction n:T_o = T(x_o, n_o) , T_e = T(x_e, n_e) . The direction of emission (as seen by a local observer) is d_e = −n_e.
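Before moving on, a quick numerical sanity check of the Doppler relation eq:EtoCF may be useful. The following sketch is ours and is not part of the derivation: it works in flat spacetime with the (−,+,+,+) signature used above, builds a null four-momentum p = E(ū − n̄) for a comoving observer, boosts to an observer u = γ(ū − v) with a randomly chosen peculiar velocity, and verifies E = Ē γ(1 − n̄ · v). All variable names and the specific numbers are our own choices.

import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])           # Minkowski metric, signature (-,+,+,+)

Ebar = 2.7                                      # photon energy seen by the comoving observer
nbar = rng.normal(size=3); nbar /= np.linalg.norm(nbar)   # observed arrival direction (unit 3-vector)
v = rng.normal(size=3); v *= 0.3 / np.linalg.norm(v)      # peculiar velocity with |v| = 0.3 < 1

ubar = np.array([1.0, 0.0, 0.0, 0.0])           # comoving observer
p = Ebar * (ubar - np.array([0.0, *nbar]))      # p = E (u - n), with n = (0, nbar) in the ubar frame

gamma = 1.0 / np.sqrt(1.0 - v @ v)
u = gamma * (ubar - np.array([0.0, *v]))        # u = gamma (ubar - v), the boosted observer

E_direct = -(p @ eta @ u)                       # E = -p . u, computed explicitly
E_formula = Ebar * gamma * (1.0 - nbar @ v)     # E = Ebar gamma (1 - nbar . v), eq:EtoCF
assert np.isclose(E_direct, E_formula)
print(E_direct, E_formula)

The two numbers agree to machine precision, which is just the statement that eq:EtoCF is an exact special-relativistic identity, independent of the smallness of v.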
Note that before the period of recombination the temperature of the photon fluid is expected to be isotropic; in that sense it will not depend on the direction of emission. However, during the period of recombination a small quadrupole anisotropy arises in the photon distribution function <cit.>, which is why we kept the n_e dependence in the emission temperature.By using Eqs. (<ref>) and (<ref>), the observed temperature can be written asT_o = T_e [p̂_0̂(x_o)/p̂_0̂(x_e)] [(u^0̂ + ñ · u)_o / (u^0̂ + ñ · u)_e] . This can also be written in bold notation asT_o = T_e [p̂_0̂(x_o)/p̂_0̂(x_e)] [(√(1 + u_o²) + ñ_o · u_o) / (√(1 + u_e²) + ñ_e · u_e)] , where the bold notation is used as a shorthand for the spatial components in the tetrad basis, that is u = (u^î), ñ = (ñ^î), and ñ · u = ñ^î u^î. We stress that ñ is the observed direction of incoming photons as seen by the tetrad-comoving observers, which is related to the direction of observation by eq:ninTCF. The previous equation is equivalent to that given in appendix A of <cit.>, although there the authors were only interested in obtaining the CMB temperature up to second order in the Poisson gauge, and neglected primordial vector and tensor perturbations. In this paper, however, we will not use the bold notation. § THE METRIC AND TETRAD COMPONENTS In this section we introduce the metric and tetrads which will allow us to obtain the Sachs-Wolfe formula. Note that two common notations for the metric areds² = a²(η)[ −(1 + 2ϕ) dη² + 2ω_i dx^i dη + ((1 − 2ψ) δ_ij + 2γ_ij) dx^i dx^j ] = a²(η)[ −e^{2Φ} dη² + 2ω_i dx^i dη + (e^{−2Ψ} δ_ij + 2γ_ij) dx^i dx^j ] , where δ_ij is the Kronecker delta, x^μ = (η, x^i), η the conformal time, a the scale factor, and γ_ij is defined as traceless in order to make the separation of the spatial part of the metric unambiguous. Usually each quantity is expanded perturbatively into first, second, or third order perturbations. Here, however, we will treat each quantity non-perturbatively. We propose to use the following parametrization of the metric:ds² = a²(η)[ −e^{2Φ} dη² + 2β_j (e^{Φ − Ψ − Γ})_{ji} dx^i dη + (e^{−2(Ψ + Γ)})_{ij} dx^i dx^j ] = a²(η) e^{2Φ} dŝ² , where Γ is a symmetric and traceless matrix, and the notation Ψ + Γ really means Ψ1 + Γ where 1 is the identity matrix, that is, (Ψ + Γ)_ij = Ψδ_ij + Γ_ij. The conformal metric isdŝ² ≡ −dη² + 2β_j (e^{−M})_{ji} dx^i dη + (e^{−2M})_{ij} dx^i dx^j , with M = Φ + Ψ + Γ. Note that indices in β_i and M_ij are raised and lowered with δ_ij. Hereafter, we will mainly work with the conformal metric eq:Cmetric, and whenever we need to express quantities in the physical metric we just multiply by the appropriate conformal factor, as given for instance in eq:Metric-CMetric (more details below). We will still rewrite the conformal metric in a different way that will allow us to give an interesting interpretation of β_i and to easily express the metric in terms of tetrads,[This is basically the ADM decomposition of the metric. ]dŝ² = −(β^0 dη)² + [(e^{−M})^j_i dx^i + β^j dη][(e^{−M})_{jk} dx^k + β_j dη] , where we have introduced β^0 ≡ √(1 + β_i β^i) . It is interesting to note that null paths in the conformal s-t are[Hereafter we will use "s-t" as a shorthand for spacetime. ]also null paths in the physical s-t, which implies that the path of photons is totally determined by just two quantities: β_i and M_ij (and their derivatives, which enter the geodesic equation).
This is important for effects like lensing, time-delay and the integrated Sachs-Wolfe (ISW).The conformal metric in the form given in eq:GPmetric provides a natural basis of orthonormal dual vectors ê^a = ê^a_μ dx^μ, whose tetrad components areê^0_μ = β^0 δ^0_μ , ê^i_0 = β^i , ê^i_j = (e^{−M})^i_j . The tetrads for the physical s-t are obtained from the above ones simply by multiplying by the conformal factor a e^Φ. We now note that β^a are the tetrad components of the four-velocity of comoving observers. In fact, for a comoving observer (ū^i = 0) we haveū^â = (a e^Φ) ê^â_0 ū^0 = ê^â_0 = β^â , where we have multiplied by the conformal factor a e^Φ in order to get quantities in the physical s-t. Additionally, we used the normalization condition to obtain a e^Φ ū^0 = 1. Below, we provide some relations which will be useful in the next section. They are the inverse tetrads, ê_a^0 = (1/β^0) δ^0_a , ê_0^i = −(1/β^0)(e^{M})^i_j β^j , ê_j^i = (e^{M})^i_j , and the derivative of the matrix exponential, which follows from the Baker-Campbell-Hausdorff (or Zassenhaus) formula <cit.>:∂_μ e^{−M} = −A_μ e^{−M} , A_μ ≡ ∫_0^1 ds e^{−sM} (∂_μ M) e^{sM} . §.§ The geodesic equation In order to obtain the explicit form of ñ and p̂_0̂ needed for the observed CMB temperature in eq:ToTe-tetrad, we need to solve the geodesic equation. Since photons follow null paths, we can use the conformal s-t instead of the physical s-t; this will make calculations easier. Note that if p^μ is the photon four-momentum in the physical s-t, then p̂^μ = (a e^Φ)² p^μ is the photon four-momentum in the conformal s-t (Appendix D of <cit.>); consequently, p̂^â = (a e^Φ) p^â. Note also that, according to eq:EtoCF, the direction of observation as seen in the tetrad-comoving frame is ñ^î = −p^î/p^0̂ = −p̂^î/p̂^0̂. With those considerations in mind, we can now proceed to obtain the observed CMB temperature. We start with the geodesic equation in the conformal s-t <cit.>dp̂_μ/dλ = ½ (∂_μ ĝ_{αβ}) p̂^α p̂^β = (∂_μ ê^a_ν) p̂^ν p̂_a , where λ is an affine parameter. Using d/dλ = p̂^0 d/dη, and after dividing each side by (p̂^0̂)², we get−(1/β^0) ṗ̂_μ/p̂_0̂ = (∂_μ ê^a_ν) (p̂^ν/p̂^0̂)(p̂_a/p̂^0̂) , where we used p̂_0̂ = −β^0 p̂^0, and a dot over a variable means total derivative w.r.t. conformal time. By noting that p̂_0 = β^0 p̂_0̂ + β^i p̂_î , we can write1/p̂_0̂ = (β^0 + β^i ñ^î)/p̂_0 , and therefore the geodesic equation takes the formṗ̂_μ/p̂_0 = [ ∂_μ β^0 + ñ^i (∂_μ β_i + (A_μ)_{ij} β^j) + β^0 ñ^i (A_μ)_{ij} ñ^j ] / (β^0 + β^i ñ^i) , where we have used eq:Adef for A_μ. The equation above can be integrated for μ = 0, yieldingp̂_0(x_o) = p̂_0(x_e) e^{I_0} , I_0 ≡ ∫_{η_e}^{η_o} dη [ ∂_0 β^0 + ñ^i (∂_0 β_i + (A_0)_{ij} β^j) + β^0 ñ^i (A_0)_{ij} ñ^j ] / (β^0 + β^i ñ^i) , which after substituting into eq:ToTe-tetrad yields (multiplying by the conformal factor)T_o = T_e [(a e^Φ)_e/(a e^Φ)_o] e^{I_0} [(β^0 + ñ^i β_i)_e/(β^0 + ñ^i β_i)_o] [(u^0̂ + ñ · u)_o/(u^0̂ + ñ · u)_e] , where we have used eq:po-p0. Note that eq:po-p0 is nothing else than the relation between the energy in the comoving frame Ē and the energy in the tetrad-comoving frame Ẽ = −p̂_0̂, that is, Ē = Ẽ(ū^0̂ + ñ · ū), which follows from eq:EinTCF. We see that β^0 + ñ^i β_i = ū^0̂ + ñ · ū represents a Doppler boost. This is, however, a point-to-point (along the photon's path) boost which takes the observed temperature by tetrad-comoving observers into the observed temperature by comoving observers. On the other hand, since this boost is determined by β^i, which is directly related to the 0-i components of the metric, we will call this "a metric-Doppler effect".
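To make the structure of I_0 concrete, here is a toy numerical sketch. It is ours and purely illustrative: it anticipates the first-order limit derived in the next subsection, where for Γ = β = 0 and Φ = Ψ the integral I_0 reduces to the familiar ISW line-of-sight integral ∫ (Φ' + Ψ') dη. The potential profile and all numerical values below are arbitrary choices, not solutions of the Einstein equations.

import numpy as np

# Toy, spatially homogeneous potential: Phi(eta) = Psi(eta) = 1e-5 / (1 + eta^2).
# With Gamma = beta = 0 and M = (Phi + Psi) delta_ij, the first-order I_0 is
# the ISW integral  I_0 = int_{eta_e}^{eta_o} (Phi' + Psi') d(eta).
Phi = lambda eta: 1e-5 / (1.0 + eta**2)
dPhi = lambda eta: -2e-5 * eta / (1.0 + eta**2) ** 2   # dPhi/deta

eta_e, eta_o = 0.1, 10.0
eta = np.linspace(eta_e, eta_o, 20001)
I0_numeric = np.trapz(2.0 * dPhi(eta), eta)        # Phi' + Psi' = 2 Phi' here
I0_exact = 2.0 * (Phi(eta_o) - Phi(eta_e))         # exact: the integrand is a total derivative
print(I0_numeric, I0_exact)

For this homogeneous toy the integrand is a total derivative, so the quadrature can be checked against the exact difference of the potential at the endpoints; for a realistic, inhomogeneous Φ evaluated along the line of sight this is of course no longer the case, and the integral must be done numerically.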
By using eq:ToTeExpI0-ini-Eq together with eq:nTOncom we can equivalently writeT_o = T_e (a_e/a_o) e^{Φ_e − Φ_o + I_0} [γ(1 − n · v)]_e / [γ(1 − n · v)]_o . This is the equation we used in sec:gSW to obtain the generalized Sachs-Wolfe formula eq:Gsw. To complete the solution for the observed temperature, we need to obtain both ñ^i and the coordinates x^μ of the photon's path. We will address this problem in sec:lensing. Finally, we can write the ISW in a covariant way by noting thatv_c^â ≡ (0, β^i/β^0) is the velocity of comoving observers w.r.t. tetrad-comoving observers (it follows from eq:vtoCF). In terms of v_c we haveI_0 ≡ ∫_{η_e}^{η_o} dη [ (v_c · v_c')/(1 − v_c²) + (ñ · v_c' + ñ · A_0 · (v_c + ñ))/(1 + ñ · v_c) ] . Here, we treat (A_0)_{ij} as the non-vanishing components of a (space-like) tensor A_0 in the tetrad frame, that is: (A_0)^{îĵ} ≡ (A_0)_{ij} and (A_0)^{0̂â} = 0. The advantage of using tetrad-comoving observers instead of comoving observers is that for the former we have ñ^0̂ = 0, so it is meaningful to use the bold notation (ñ^î) = ñ. For the latter, the bold notation is not appropriate as n̄^0 ≠ 0.§.§ Sachs-Wolfe at first orderIn this section we use the generalized Sachs-Wolfe formula eq:Gsw to obtain the well-known results at first order. Then we will discuss the gauge invariance of our result, emphasizing the importance of the factor ϑ_o. From the definition of A_μ, eq:Adef, we have up to first order A_μ = ∂_μ M. Additionally, since we are interested in writing the observed temperature up to first order, we can take ñ at zero order as it always multiplies first-order quantities. That has several consequences: i) we can drop the tilde on the direction of observation, as it is the same (at zero order) for all observers, that is, ñ = n̄ = n; ii) all quantities are evaluated along the unperturbed path, for which we can set n^i_o = n^i_e = n^i. This is called the Born approximation, and the unperturbed path has coordinates x^i(η) = x^i_o + (η_o − η) n_o^i. Under these considerations the Sachs-Wolfe formula up to first order isΘ = (ϑ_e − ϑ_o) + (Φ_e − Φ_o) + I_0 + ln [(1 − n_e · v_e)/(1 − n_o · v_o)] , I_0 = ∫_{η_e}^{η_o} dη (n^i β_i' + n^i M_ij' n^j) , where a prime means partial derivative w.r.t. conformal time, and we have used β^0 = 1 = γ, valid up to first order.Although the components of n_o are equal to the components of n_e, that is, n^i_o = n^i_e, we have written n_e · v_e instead of n_o · v_e because in general the quantity n_o · v_e is not well defined, as it represents the scalar product of two four-vectors which are defined at different points in the s-t. Remembering that M_ij = (Φ + Ψ) δ_ij + Γ_ij (see after eq:Cmetric), and expanding the logarithm up to first order, we obtainΘ = (ϑ_e − ϑ_o) + (Φ_e − Φ_o) − (v_e · n_e − v_o · n_o) + I_0 , I_0 = ∫_{η_e}^{η_o} dη (n^i β_i' + Φ' + Ψ' + n^i Γ_ij' n^j) . This is (apart from the factor ϑ_o) the very well known first-order Sachs-Wolfe formula given in eq:SW. §.§ Gauge invariance We now discuss the gauge invariance of eq:SW1rst. The gauge invariance of our results up to second order is discussed in a companion paper <cit.>. There we will provide the full set of transformation rules for the metric components and additional relevant quantities. We will use the following notation: under a gauge transformation a geometrical object T (scalar, vector, tensor, connections, etc.) will transform as T → T + ΔT. Here we just need the first-order gauge transformations induced by the gauge generator ξ^μ = (α, ξ^i), so we have[These transformation rules can also be obtained easily from the rules given in <cit.>.
]Δϑ = −Hα , ΔΦ = α' + Hα , Δv_i = −ξ_i' , Δβ_i = ξ_i' − α_{,i} , ΔM_ij = α' δ_ij − ξ_{(i,j)} , where H ≡ a'/a is the Hubble expansion rate, a comma means derivative, so that α_{,i} = ∂_i α. The parentheses in the expression ξ_{(i,j)} mean symmetrization, so ξ_{(i,j)} = (ξ_{i,j} + ξ_{j,i})/2. With these expressions, it is easy to show the gauge invariance of Θ, that is, ΔΘ = 0. Indeed, for the integrated Sachs-Wolfe term we getΔI_0 = ∫_{η_e}^{η_o} dη ( α'' − n^i ∂_i α' + n^j ξ_j'' − n^i n^j ∂_i ξ_j' ) = [α' + n^j ξ'_j]|_e^o , where we made use of the fact that along the unperturbed path the following relation holds: ∂_0 − n^i ∂_i = d/dη. Additionally, we haveΔ[(ϑ_e − ϑ_o) + (Φ_e − Φ_o) − (v_e · n_e − v_o · n_o)] = [α' + n^j ξ'_j]|_o^e , showing explicitly that Θ is gauge invariant. Since the full temperature T_o = T̅_o e^Θ is an observable, it also has to be gauge invariant; as a consequence the mean value T̅_o is too. Note that this result was possible thanks to the presence of ϑ_o inside Θ. Without it, each time we performed a gauge transformation the temperature anisotropies would acquire an additional monopole term. § THE LENSING TERMTo complete our analysis we need to obtain ñ^i and the coordinates x^μ along the photon's path. These quantities are needed for a full computation of the ISW effect. Additionally, they provide the so-called lensing and time-delay terms (see sec:SW2nd-Born). In this section we arrive at an expression which can be solved easily by iteration, allowing us to obtain the solution perturbatively up to the desired order. By manipulating eq:geoEq, we could obtain a differential equation for n^i. We however choose to follow a different route which yields a compact expression and can be used to easily obtain the coordinates of the photon's path. We start by defining q^a (which is not a four-vector) by the relationq^a ≡ p̂^a/p̂^0 = p^a/p^0 ⇒ q^a = dx^a/dη = (1, ẋ^i) , and we remind the reader that a hat means that quantities belong to the conformal s-t. Now, by using ñ^î = −p̂^î/p̂^0̂ = −ê^i_a p̂^a/(β^0 p̂^0) we getñ^î = −(1/β^0)(β^i + (e^{−M})^i_j q^j) . Then if we manage to obtain q^i, we automatically get both ñ^î and x^i = ∫ dη q^i. Therefore, we now focus on q^i. Before proceeding, we stress that the previous relation is nothing else than the transformation of the direction vector from the comoving frame to the tetrad-comoving frame, eq:ninTCF. That is, the previous relation can be written asñ^î = (n̄^î − ū^î)/(ū^0̂ − n̄^0̂) . Consider now the geodesic equation in the conformal s-t,ṗ̂^a/p̂^0 + Γ̂^a_{bc} q^b q^c = 0 ; then by using ṗ̂^a/p̂^0 = q̇^a + q^a ṗ̂^0/p̂^0 we get−q̇^a = Γ̂^a_{bc} q^b q^c − q^a Γ̂^0_{bc} q^b q^c = q · Γ̂^a · q − q^a (q · Γ̂^0 · q) , where for simplicity of notation we have written on the second line q^b Γ̂^a_{bc} q^c = q · Γ̂^a · q; that is, we treat Γ̂^a_{bc} as the components of a matrix Γ̂^a. The relevant part of eq:qa is that for the spatial indices a = i; the a = 0 component is automatically satisfied, with q^0 = 1. eq:qa is an autonomous cubic equation in q, without an obvious analytic solution.[I thank Yves Daoust, user of stackexchange.com, for useful comments on this point. See: <https://math.stackexchange.com/questions/2205149/non-linear-matrix-differential-equation>]It can however easily be solved perturbatively; for instance, if we call q^a_(n) the solution up to n-th order, we can immediately obtain the (n+1)-th solution as−q^a_(n+1)|_η^{η_o} = ∫_η^{η_o} dη̄ [ q_(n) · Γ̂^a · q_(n) − q^a_(n) (q_(n) · Γ̂^0 · q_(n)) ] . We now detail the first-order solution, which is needed to obtain the second-order logarithmic temperature anisotropies.
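The iteration eq:qa-iter is just a Picard (fixed-point) scheme, and it is easy to see it at work numerically. The following sketch is ours and purely illustrative: the "Christoffel symbols" below are arbitrary small random constants times a common η-profile, not those of the metric introduced above, and the grid, seeds and amplitudes are our own choices.

import numpy as np

N = 2001
eta = np.linspace(0.0, 1.0, N)                  # eta_o = 1 is the observation time
w = 1e-2 * np.sin(3.0 * eta)                    # common toy eta-profile of the Christoffels

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4, 4)) * 0.5            # toy constant part of Gamma^a_{bc}

def cumtrapz(f, x):                             # cumulative trapezoidal integral along axis 0
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x)[:, None], axis=0)
    return out

q = np.zeros((N, 4)); q[:, 0] = 1.0; q[:, 1] = -1.0   # zeroth order: q^a = (1, -n^i), radial photon
q_obs = q[-1].copy()                                   # boundary value at the observer

for n in range(6):                                     # Picard iterations, eq:qa-iter
    Gqq = np.einsum('abc,tb,tc->ta', G, q, q) * w[:, None]   # Gamma^a_{bc} q^b q^c at each eta
    rhs = Gqq - q * Gqq[:, :1]                               # minus q^a (q . Gamma^0 . q)
    C = cumtrapz(rhs, eta)
    q_new = q_obs + (C[-1] - C)                # q(eta) = q(eta_o) + int_eta^{eta_o} rhs d(etabar)
    print(n, np.max(np.abs(q_new - q)))        # successive corrections shrink for small w
    q = q_new

Two features of the exact equation are visible in the sketch: the a = 0 component of rhs vanishes identically once q^0 = 1 (the time component is automatically satisfied, as noted above), and the size of the n-th correction scales like the n-th power of the perturbation amplitude, which is why truncating the iteration at a fixed n yields the perturbative solution to that order.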
§.§ Lensing term at first order As described before, we can just use eq:qa-1 to easily obtain the first-order solution for q^a. Before doing the integration, however, let's write down the integrand on the r.h.s. of eq:qa in a suitable way. Let's start withΓ̂^a_{bc} q^b q^c = ½ ĝ^{aμ}(−ĝ_{bc,μ} + ĝ_{μb,c} + ĝ_{cμ,b}) q^b q^c = −½ ĝ^{aμ} ĝ_{bc,μ} q^b q^c + ĝ^{aμ} q^b (dĝ_{bμ}/dη) , where we have used the fact that q^c ∂_c = d/dη. Now, since ĝ_{ab,c} is already first order we can set ĝ^{ab} = η^{ab} in the previous equation, so we get from eq:qa−q̇^j = −½ q^b q^c (ĝ_{bc,j} + ĝ_{bc,0} q^j) + q^b (ĝ̇_{jb} + ĝ̇_{0b} q^j) ; then noting that ∂_0 = d/dη − q^i ∂_i, and defining the transverse gradient as∂_i^⊥ = ∂_i − ñ_i (ñ · ∂) , where ñ · ∂ ≡ ñ^i ∂_i , we arrive, after integration, at−q^i|_η^{η_o} = [β^i + 2(M · ñ)^i − ñ^i (ñ · M · ñ)]|_η^{η_o} + ∫_η^{η_o} dη̄ ∂_i^⊥(ñ · β + ñ · M · ñ) . Here we have used that up to second order β^i = v_c^i (see eq:tb), and we treat M_ij as the non-vanishing components of a (space-like) tensor M in the tetrad frame, that is: M^{îĵ} ≡ M_ij and M^{0̂â} = 0. This is pretty much the same as we did in eq:I0 for A_0.It is clear that on the r.h.s. of the previous equation we should keep ñ at zero order; this fact was taken into account in passing from eq:qa-1st-Deq to eq:qi-1st-sol by setting q^i = −ñ^i, valid at zero order. Additionally, at zero order we have ñ^i_e = ñ^i_o = ñ^i and we can also remove the tilde from ñ, so that ñ^i = n^i.Now that we are in possession of q^a, we can immediately obtain ñ and x^a up to first order. Direction vector ñ up to first order: In order to obtain ñ, we see from eq:ni-qi that up to first order ñ^i = −β^i + q^i + M_ij ñ^j, so we getñ^i|_η^{η_o} = [(M · ñ)^i − ñ^i (ñ · M · ñ)]|_η^{η_o} + ∫_η^{η_o} dη̄ ∂_i^⊥(ñ · β + ñ · M · ñ) ,ñ^i = ñ^i_o − [(M · ñ)^i − ñ^i (ñ · M · ñ)]|_η^{η_o} − ∫_η^{η_o} dη̄ ∂_i^⊥(ñ · β + ñ · M · ñ) .Coordinates of the photon's path: Since q^a = dx^a/dη, the coordinates of the photon's trajectory are simply given by x^a = ∫ dη q^a. There is one important point we want to stress here. Since q^i depends on the fields β^i and M_ij, the coordinates of the photon's path will depend on these quantities. That means, for instance, that under a gauge transformation the coordinates x^i will necessarily change. The same happens if we consider two different realizations of the Universe, each one with the same background evolution but with different field perturbations (that is, different β^i and M_ij). On the other hand, by construction q^0 = 1, so the coordinate x^0 = η is independent of these fields, and so x^0 = η is insensitive to any gauge transformation.[Though it is sensitive to the introduction of new physical field perturbations, or different Universe realizations. This is so because field perturbations will affect the energy-momentum tensor which determines the time evolution via the Einstein equations. Even if the perturbations are small, they give a back-reaction on the background <cit.>. ]By construction, the value of x^0 = η is totally determined by the background evolution of the Universe (or the FLRW spacetime); in particular it is defined by the hypersurface of constant ⟨T⟩ (see sec:gSW, and <cit.>). To obtain x^i, we will use∫_{η_e}^{η_o} dη ∫_η^{η_o} dη' f(η') = ∫_{η_e}^{η_o} dη (η − η_e) f(η) ,and the relation ñ^i_o = [−β^i + q^i + M_ij ñ^j]|^{η_o}, which is valid up to first order. So from eq:qi-1st-sol we getx^i = x^i_o + [ñ^i − (M · ñ)^i + ñ^i (ñ · M · ñ)]|^{η_o} (η_o − η) + ∫_η^{η_o} dη̄ [β^i + 2(M · ñ)^i − ñ^i (ñ · M · ñ)] − ∫_η^{η_o} dη̄ (η̄ − η) ∂_i^⊥(ñ · β + ñ · M · ñ) .In Eqs. (<ref>)-(<ref>) all the integrations are along the unperturbed path. Note that we have parametrized q^i, ñ^i and x^i in terms of the conformal time η.
This is in contrast with several other works, in which the coordinates x^μ and the four-momentum p^μ are obtained in terms of the affine parameter. See for instance Eqs. (2.20)-(2.24) of <cit.>. For comparison, note that eq:qi-1st-sol can be obtained by properly (i.e., by taking into account our eq:qDEF) dividing Eqs. (2.22) by Eqs. (2.20) of <cit.>. We now have all the elements to compute the logarithmic anisotropies up to second order.§.§ Sachs-Wolfe at second order: photon's curved path In this section we expand the logarithmic anisotropies up to second order. We will keep quantities evaluated along the photon's curved path. In the next subsection, we express each quantity along the unperturbed path (the Born approximation). Let's start with the ISW. Firstly, it follows from eq:Adef that up to second order A_μ = ∂_μ M + [∂_μ M, M]/2; then from eq:I0 we getI_0 = ∫_{η_e}^{η_o} dη [ v_c · v_c' + ñ · M' · v_c + ñ · v_c' + ñ · (M' + [M', M]/2) · ñ ] / (1 − v_c · ñ) = ∫_{η_e}^{η_o} dη [ (v_c' + ñ · M') · ñ + (v_c' + ñ · M') · (v_c)_⊥ ] ,where (v_c)_⊥ is the orthogonal projection of v_c on ñ, that is, (v_c)_⊥ = v_c − ñ (ñ · v_c). We have also used the fact that ñ · [M', M] · ñ = ñ^i (M_ik' M_kj − M_ik M_kj') ñ^j = 0. From eq:Io2nd we see that there are two kinds of contributions to the ISW. The term that is explicitly linear in the fields is projected along the direction ñ, while the one which is quadratic in the fields is projected in a direction orthogonal to ñ.[Of course the term ñ · M' is a vector formed by the projection of M' onto ñ. In that sense, the full term (ñ · M') · (v_c)_⊥ represents a "double" projection of M', one along ñ and the other along (v_c)_⊥, which is a direction perpendicular to ñ. ] Note also that (v_c' + ñ · M') · (v_c)_⊥ = (v_c' + ñ · M')_⊥ · v_c = (v_c' + ñ · M')_⊥ · (v_c)_⊥, where (v_c' + ñ · M')_⊥ is defined in the same manner as (v_c)_⊥.To compute the logarithmic anisotropies, it remains to expand the Doppler effect up to second order. It can be written asln [γ_e(1 − n_e · v_e)] / [γ_o(1 − n_o · v_o)] = [−v · n + (v² − (v · n)²)/2]|_{η_o}^{η_e} = [−v · n + (v · v_⊥)/2]|_{η_o}^{η_e} .Again we see the same behavior as for the ISW effect. That is, terms that are linear in the fields (here v) are projected along[We remind the reader that n is the direction of observation in the u-frame, while ñ is the direction of observation in the tetrad-comoving frame. ]n, while the quadratic terms only receive contributions from the direction orthogonal to n (here, v_⊥). Finally, by using eq:nTOncom (see also eq:ToTeExpI0-ini-Eq) we can also writeln [γ_e(1 − n_e · v_e)] / [γ_o(1 − n_o · v_o)] = [ (β_i − u^î) ñ^i + (β_i β^i_⊥ − u^î u^î_⊥)/2 ]|_{η_o}^{η_e} .Depending on the situation, one can find it more convenient to use either the first or the second version of the Doppler effect (eq:doppler2ndv1 or eq:doppler2ndv2). We will take the latter, as it involves ñ. Before going further, we write the previous equation in a covariant manner by using the velocity v_c of comoving observers w.r.t. the tetrad frame and the velocity v_F of the observer u w.r.t. the tetrad frame, that is, v_F^â ≡ (0, u^î/u^0̂). Up to second order we have v_c^â = (0, β^i) and v_F^â = (0, u^î) .Here, the subscript F is because we can think of the observer u as being a fish moving through a river (the tetrad frame). This idea is explored in a companion paper <cit.>. Joining all the previous results we have up to second orderΘ = [ϑ + Φ + (v_c − v_F) · ñ]|_{η_o}^{η_e} + ∫_{η_e}^{η_o} dη (ñ · v_c' + ñ · M' · ñ) + [(v_c · (v_c)_⊥ − v_F · (v_F)_⊥)/2]|_{η_o}^{η_e} + ∫_{η_e}^{η_o} dη (v_c' + ñ · M') · (v_c)_⊥ .We have written the logarithmic anisotropies in this way to stress that the first line is formally[I say formally because here each quantity is considered up to second order.
Additionally, they are evaluated along the photon's curved path, while at first order Θ is computed using the background trajectory. ] equal to the first-order logarithmic anisotropies. So (formally), the difference comes only from the second part. These two lines are different in nature, so they could be measured independently. Note that −ñ is the direction of propagation of photons as seen by the tetrad-comoving observers, so the plane perpendicular to ñ is the plane of the photon's polarization. We conclude that only the projection of the field perturbations on the plane of polarization contributes to the explicitly quadratic terms of Θ (second line of eq:sw2nd-curved). §.§ Sachs-Wolfe at second order: the Born approximationIn this section the second-order logarithmic anisotropies are given by using the Born approximation; that is, we express each quantity along the path x^i = x^i_o + ñ^i_o (η_o − η), which is the path inferred by the observer ignoring perturbations. This can be useful for numerical computations and also because the notation of previous results in the literature (e.g., those in <cit.>) is closer to the one we use below.We will define the deviation δx^i from the Born approximation by the relationx^i = x^i_o + ñ^i_o (η_o − η) + δx^i . Analogously, we will write ñ^i = ñ^i_o + δñ^i. The explicit expressions for δx^i and δñ^i follow directly from Eqs. (<ref>) and (<ref>). With these definitions we can Taylor expand the logarithmic temperature anisotropies around the Born approximation's path asΘ = Θ_Born + δϑ + δΦ + ñ · δ(v_c − v_F) + (v_c − v_F) · δñ_e + ∫_{η_e}^{η_o} dη [ ñ · δv_c' + ñ · δM' · ñ + δñ · v_c' + 2 δñ · M' · ñ ] ,where Θ_Born is the same as eq:sw2nd-curved but with everything evaluated in the Born approximation. Here the notation should be intuitive; for instance, (δM')_ij = δx^k ∂_k M_ij', with similar expressions for the other fields. The only difference is with the intrinsic logarithmic anisotropies ϑ_e, which in general will depend not only on the position x_e but also on the direction of emission −ñ_e, that is, ϑ_e = ϑ(x_e, −ñ_e). So we must useδϑ_e = δx^i_e ∂_i ϑ + δñ^i_e (∂ϑ/∂ñ^i)|_e . §.§ Lensing and time-delayTo end this section, we remind the reader of the concepts of lensing and time-delay, which are encoded in δx^i and are correlated with the ISW. To obtain the time-delay, we project δx^i along the radial direction (see eq:xi-1st-sol):δx^i ñ_i = ∫_η^{η_o} dη̄ (ñ · β + ñ · M · ñ) ;this quantity tells us that photons are not coming from a spherical shell of radius r but from a distorted surface whose "radius" in direction ñ is distorted by (δx^i ñ_i)_e. There are two types of lensing terms: the first one is given by the transverse component of δx^i,δx^i_⊥ = −(M_o · ñ_o)^i_⊥ (η_o − η) + ∫_η^{η_o} dη̄ [β^i_⊥ + 2(M · ñ)^i_⊥] − ∫_η^{η_o} dη̄ (η̄ − η) ∂_i^⊥(ñ · β + ñ · M · ñ) ,and the second one is just the local deflection angle α^i, which from eq:ni-1st-sol isα^i = −(M · ñ)^i_⊥|_η^{η_o} − ∫_η^{η_o} dη̄ ∂_i^⊥(ñ · β + ñ · M · ñ) .Lensing and time-delay are correlated with the ISW effect through the second line of eq:sw2nd-born. Note that, regarding the logarithmic anisotropies Θ, these are the only quantities that are correlated with the ISW. This is not true, however, for ΔT_o/T̅_o, which involves powers of Θ and therefore will automatically correlate the ISW with other terms like ϑ and Φ. Because of that, making maps of the logarithmic anisotropies will provide an optimal tool for studying the ISW.Below we briefly comment on the comparison with other works and also briefly cite some of the results that will be published in a companion paper.
§ FUTURE WORK AND CONCLUSIONS §.§ Future work and comparison with literature The results of appendix A of <cit.> (and therefore the results of <cit.>) are equivalent to the ones given in the previous section.[Although in <cit.> we took the perturbations to vanish at the observer position. ] In particular, compare eq:sw2nd-born with Eqs. A.32-A.35 of <cit.>. In comparing with the results of <cit.>, we must take into account the following relationship:α^i = ϵ^i + (β + ñ · M)^i_⊥ ,where the quantity ϵ^i was defined in Eq. A.29 of <cit.>. The previous relation follows directly from eq:deflection and Eq. A.28 of <cit.>. This shows that the interpretation given in <cit.> of the quantity ϵ^i as the local deflection angle is wrong, because the true local deflection angle is given by α^i. Apart from this fact, the results of <cit.> are correct. A more detailed comparison of our results with those already present in the literature will be given in a future paper <cit.>.On the other hand, since we have introduced several new concepts (a new parametrization of the metric, the logarithmic intrinsic temperature anisotropies ϑ, the direction of observation by tetrad-comoving observers ñ, etc.), the gauge transformations of these quantities have not been discussed before in the literature. In addition, gauge transformations applied to the CMB anisotropies involve several subtle issues, as was first discussed in <cit.>. In a companion paper <cit.>, we will discuss the gauge transformations of the relevant quantities introduced in this paper and explicitly show the gauge invariance of our second-order formula eq:sw2nd-born. Special emphasis is put on the subtle issues of gauge transformations on the CMB.§.§ ConclusionsWe have obtained the non-linear generalization of the Sachs-Wolfe + integrated Sachs-Wolfe formula describing the CMB temperature anisotropies, eq:Gsw. Our result is valid at all orders in perturbation theory, includes scalar, vector and tensor perturbations, and is valid in any gauge. Direct observational consequences of our result have been discussed, in particular the fact that the logarithmic temperature anisotropies Θ = ln(1 + ΔT_o/T̅_o) are more suitable for data analysis than the usual temperature anisotropies ΔT_o/T̅_o. The reason is that by taking the logarithm we automatically remove many secondary effects which would otherwise bias the analysis of the data. This will be of particular importance for the search for primordial non-Gaussianity and for analyses of the ISW effect and lensing. We then expanded our exact expression up to second order and obtained results which are very simple and intuitive; see Eqs. (<ref>) and (<ref>) for two different versions. Finally, several concepts have been introduced, such as the logarithmic intrinsic anisotropies ϑ and ϑ_o (see sec:gSW), the tetrad-comoving observers in sec:Photons-and-obs, and a useful parametrization of the metric in sec:metric-tetrad which expresses the 0-i metric components in terms of the four-velocity of comoving observers.§ ACKNOWLEDGMENTS I thank Thiago Pereira and Elvis Soares for useful discussions and suggestions. I also thank Mauricio Calvão for introducing me to the 1 + 3 covariant formalism, and for useful discussions on aberration. I thank Cyril Pitrou for drawing my attention to the importance of the logarithmic transform for spectral distortions. Finally, I thank the anonymous referee of paper <cit.>, whose useful comments somehow influenced the style of this article.
{ "authors": [ "Omar Roldan" ], "categories": [ "astro-ph.CO", "astro-ph.GA", "gr-qc", "physics.space-ph" ], "primary_category": "astro-ph.CO", "published": "20170626150813", "title": "CMB anisotropies at all orders: the non-linear Sachs-Wolfe formula" }
The CoNLL-SIGMORPHON 2017 shared task on supervised morphological generation required systems to be trained and tested in each of 52 typologically diverse languages. In sub-task 1, submitted systems were asked to predict a specific inflected form of a given lemma.In sub-task 2, systems were given a lemma and some of its specific inflected forms, and asked to complete the inflectional paradigm by predicting all of the remaining inflected forms. Both sub-tasks included high, medium, and low-resource conditions. Sub-task 1 received 24 system submissions, while sub-task 2 received 3 system submissions. Following the success of neural sequence-to-sequence models in the SIGMORPHON 2016 shared task, all but one of the submissions included a neural component. The results show that high performance can be achieved with small training datasets, so long as models have appropriate inductive bias or make use of additional unlabeled data or synthetic data. However, different biasing and data augmentation resulted in non-identical sets of inflected forms being predicted correctly, suggesting that there is room for future improvement.§ INTRODUCTION Morphology interacts with both syntax and phonology.As a result, explicitly modeling morphology has been shown to aid a number of tasks in human language technology (HLT), including machine translation (MT) <cit.>, speech recognition <cit.>, parsing <cit.>, keyword spotting <cit.>, and word embedding <cit.>. Dedicated systems for modeling morphological patterns and complex word forms have received less attention from the HLT community than tasks that target other levels of linguistic structure.Recently, however, there has been a surge of work in this area <cit.>, representing a renewed interest in morphology and the potential to use advances in machine learning to attack a fundamental problem in string-to-string transformations: the prediction of one morphologically complex word form from another. This increased interest in morphology as an independent set of problems within HLT arrives at a particularly opportune time, as morphology is also undergoing a methodological renewal within theoretical linguistics where it is moving towards increased interdisciplinary work and quantitative methodologies <cit.>. Pushing the HLT research agenda forward in the domain of morphology promises to lead to mutually highly beneficial dialogue between the two fields.Rich morphology is the norm among the languages of the world.The linguistic typology database WALS shows that 80% of the world's languages mark verb tense through morphology while 65% mark grammatical case <cit.>.The more limited inflectional system of English may help to explain the fact that morphology has received less attention in the computational literature than it is arguably due.The CoNLL-SIGMORPHON 2017 shared task worked to promote the development of robust systems that can learn to perform cross-linguistically reliable morphological inflection and morphological paradigm cell filling using varying amounts of training data. We note that this is also the first CoNLL-hosted shared task to focus on morphology. The task itself featured training and development data from 52 languages representing a range of language families.Many of the languages included were extremely low-resource, e.g., Quechua, Navajo, and Haida.
The chosen languages also encompassed diverse morphological properties and inflection processes. Whenever possible, three data conditions were given for each language: low, medium, and high.In the inflection sub-task, these corresponded to seeing 100 examples, 1,000 examples, and 10,000 examples respectively in the training data for almost all languages. The results show that encoder-decoder recurrent neural network models (RNNs) can perform very well even with small training sets, if they are augmented with various mechanisms to cope with the low-resource setting. The shared task training, development, and test data are released publicly.[<https://github.com/sigmorphon/conll2017>] § TASK AND EVALUATION DETAILS This year's shared task contained two sub-tasks, which represented slightly different learning scenarios that might be faced by an HLT engineer or (roughly speaking) a human learner. Beyond manually vetted[Thanks to: Iñaki Alegria, Gerlof Bouma, Zygmunt Frajzyngier, Chris Harvey, Ghazaleh Kazeminejad, Jordan Lachler, Luciana Marques, and Ruben Urizar.] data for training, development and test, monolingual corpus data (Wikipedia dumps) was also provided for both of the sub-tasks.<Ref> illustrates the two tasks and defines some terminology.The CoNLL-SIGMORPHON 2017 shared task is the second shared task in a series that began with the SIGMORPHON 2016 shared task on morphological reinflection <cit.>.In contrast to 2016, it happens that both of the 2017 sub-tasks actually involve only inflection, not reinflection.[sigmorphon2016 defined the term: “Systems developed for the 2016 Shared Task had to carry out reinflection of an already inflected form.This involved analysis of an already inflected word form, together with synthesis of a different inflection of that form.”In 2016, sub-task 1 involved only inflection while sub-tasks 2–3 required reinflection.]Nonetheless, we kept “reinflection” in this year's title to make it easier to refer to the series of tasks. §.§ Sub-Task 1: Inflected Form from Lemma The first sub-task in <ref> required morphological generation with sparse training data, something that can be practically useful for MT and other downstream tasks in NLP.Here, participants were given examples of inflected forms as shown in <ref>.Each test example asked them to produce some other inflected form when given a lemma and a bundle of morphosyntactic features.The training data was sparse in the sense that it included only a few inflected forms from each lemma.That is, as in human L1 learning, the learner does not necessarily observe any complete paradigms in a language where the paradigms are large (e.g., dozens of inflected forms per lemma).[ Of course, human L1 learners do not get to observe explicit morphological feature bundles for the types that they observe.Rather, they analyze inflected tokens in context to discover both morphological features (including inherent features such as noun gender <cit.>) and paradigmatic structure (number of forms per lemma, number of expressed featural contrasts such as tense, number, person…). ]Key points: * Our sub-task 1 is similar to sub-task 1 of the SIGMORPHON 2016 shared task <cit.>, but with structured inflectional tags <cit.>, learning curve assessment, and many new typologically diverse languages, including low-resource languages. * The task is inflection: Given an input lemma and desired output tag, participants had to generate the correct output inflected form (a string). 
* The supervised training data consisted of individual forms (<ref>) that were sparsely sampled from a large number of paradigms.* Forms that are empirically more frequent were more likely to appear in both training and test data (see <ref> for details).* Unannotated corpus data was also provided to participants. * Systems were evaluated after training on 10^2, 10^3, and 10^4 forms. §.§ Sub-Task 2: Paradigm Completion The second sub-task in <ref> focused on paradigm completion, also known as “the paradigm cell filling problem” <cit.>.Here, participants were given a few complete inflectional paradigms as training data.At test time, partially filled paradigms, i.e. paradigms with significant gaps in them, were to be completed by filling out the missing cells.<Ref> gives examples.Thus, sub-task 2 requires predicting many inflections of the same lemma.Recall that sub-task 1 also required the system to predict several inflections of the same lemma (when they appear as separate examples in test data).However, in sub-task 2, one of our test-time evaluation metrics (<ref>) is full-paradigm accuracy.Also, the sub-task 2 training data provides full paradigms, in contrast to sub-task 1 where it included only a few inflected forms per lemma.Finally, at test time, sub-task 2 presents each lemma along with some of its inflected forms, which is potentially helpful if the lemma had not appeared previously in training data.Apart from the theoretical interest in this problem <cit.>, this sub-task is grounded in the practical problem of extrapolation of basic resources for a language, where only a few complete paradigms may be available from a native speaker informant <cit.> or a reference grammar.L2 classroom instruction also asks human students to memorize example paradigms and generalize from them.Key points: * The training data consisted of complete paradigms.* Not all paradigms within a language have the same shape.A noun lemma will have a different set of cells than a verb lemma does, and verbs of different classes (e.g., lexically perfective vs. imperfective) may also have different sets of cells. * The task was paradigm completion: given a sparsely populated paradigm, participants should generate the inflected forms (strings) for all missing cells. * The task simulates learning from compiled grammatical resources and inflection tables, or learning from a limited time with a native-language informant in a fieldwork scenario. * Three training sets were given, building up in size from only a few complete paradigms to a large number (dozens).§.§ Evaluation Each team participating in a given sub-task was asked to submit 156 versions of their system, where each version was trained using a different training set (3 training sizes × 52 languages) and its corresponding development set.We evaluated each submitted system on its corresponding test set, i.e., the test set for its language.We computed three evaluation metrics: (i) Overall 1-best test-set accuracy, i.e., is the predicted paradigm cell correct?(ii) average Levenshtein distance, i.e., how badly does the predicted form disagree with the answer? (iii) Full-paradigm accuracy, i.e., is the complete paradigm correct?This final metric only truly makes sense in sub-task 2, where full paradigms are given for evaluation. 
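For concreteness, the three metrics can be computed as in the following sketch. This is our own illustration rather than the official evaluation script, and the function names, the dictionary-based I/O and the toy call at the end are our choices; the official scorer may differ in such details.

def levenshtein(a, b):
    # standard dynamic-programming edit distance with unit costs
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def evaluate(pred, gold):
    # pred, gold: dicts mapping (lemma, feature_bundle) -> inflected form
    keys = sorted(gold)
    accuracy = sum(pred.get(k) == gold[k] for k in keys) / len(keys)
    avg_dist = sum(levenshtein(pred.get(k, ''), gold[k]) for k in keys) / len(keys)
    # full-paradigm accuracy: every evaluated slot of a lemma must be correct
    lemmata = {k[0] for k in keys}
    full = sum(all(pred.get(k) == gold[k] for k in keys if k[0] == lem)
               for lem in lemmata) / len(lemmata)
    return accuracy, avg_dist, full

print(evaluate({('go', 'V;PST'): 'went'}, {('go', 'V;PST'): 'went'}))

As noted above, full-paradigm accuracy is only truly meaningful in sub-task 2, where all cells of each test paradigm are evaluated; in sub-task 1 the grouping by lemma covers only whichever cells happen to occur in the test set.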
For each sub-task, the three data conditions (low, medium, and high) resulted in a learning curve.For each system in each condition, we report the average metrics across all 52 languages.§ DATA§.§ Languages The data for the shared task was highly multilingual, comprising 52 unique languages.Data for 47 of the languages came from the English edition of Wiktionary, a large multi-lingual crowd-sourced dictionary containing morphological paradigms for many lemmata.[<https://en.wiktionary.org/>(08-2016 snapshot)] Data for Khaling, Kurmanji Kurdish, and Sorani Kurdish was created as part of the Alexina project <cit.>.[<https://gforge.inria.fr/projects/alexina/>] Novel data for Haida, a severely endangered North American language isolate, was prepared by Jordan Lachler (University of Alberta).The Basque language data was extracted from a manually designed finite-state morphological analyzer <cit.>.The shared task language set is genealogically diverse, including languages from 10 language stocks. Although the majority of the languages are Indo-European, we also include two language isolates (Haida and Basque) along with languages from Athabaskan (Navajo), Kartvelian (Georgian), Quechua, Semitic (Arabic, Hebrew), Sino-Tibetan (Khaling), Turkic (Turkish), and Uralic (Estonian, Finnish, Hungarian, and Northern Sami). The shared task language set is also diverse in terms of morphological structure, with languages which use primarily prefixes (Navajo), suffixes (Quechua and Turkish), and a mix, with Spanish exhibiting internal vowel variations along with suffixes and Georgian using both infixes and suffixes. The language set also exhibits features such as templatic morphology (Arabic, Hebrew), vowel harmony (Turkish, Finnish, Hungarian), and consonant harmony (Navajo) which require systems to learn non-local alternations. Finally, the resource level of the languages in the shared task set varies greatly, from major world languages (e.g. Arabic, English, French, Spanish, Russian) to languages with few speakers (e.g. Haida, Khaling). §.§ Data Format For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in <ref>. The first feature in the bundle always specifies the core part of speech (e.g., verb). All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set <cit.>. §.§ Extraction from Wiktionary For each of the 47 Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma.These tables were initially extracted via a multi-dimensional table parsing strategy <cit.>.As noted in <ref>, different paradigms may have different shapes. To prepare the shared task data, each language's parsed tables from Wiktionary were grouped according to their tabular structure and number of cells.Each group represents a different type of paradigm (e.g., verb).We used only groups with a large number of lemmata, relative to the number of lemmata available for the language as a whole. For each group, we associated a feature bundle with each cell position in the table, by manually replacing the prose labels describing grammatical features (e.g.  “accusative case”) with UniMorph features (e.g. acc). This allowed us to extract triples as described in the previous section.By applying this process across the 47 languages, we constructed a large multilingual dataset that refines the parsed tables from previous work. 
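For concreteness, each line of the released files encodes one such triple, tab-separated, with the feature bundle in UniMorph notation. The rows below are illustrative examples that we constructed in the release format; they are representative of, but not necessarily verbatim from, the released data:

hablar	hablo	V;IND;PRS;1;SG
touch	touched	V;PST
Hund	Hunden	N;DAT;PL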
This dataset was sampled to create appropriately-sized data for the shared task, as described in <ref>.[Full, unsampled Wiktionary parses are made available at <unimorph.org> on a rolling basis.] Full and sampled dataset sizes by language are given in <ref>.Systematic syncretism is collapsed in Wiktionary.For example, in English, feature bundles do not distinguish between different person/number forms of past tense verbs, because they are identical.[In this example, Wiktionary omits the single exception: the lemma be distinguishes between past tenses was and were.]Thus, the past-tense form went appears only once in the table for go, not six times, and gives rise to only one triple, whose feature bundle specifies past tense but not person and number.§.§ Sampling the Train-Dev-Test Splits From each language's collection of paradigms, we sampled the training, development, and test sets as follows.These datasets can be obtained from <http://www.sigmorphon.org/conll2017>.Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset.For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language.Note that this simple “string match” heuristic overestimates the count, since strings are ambiguous: not all of the counted tokens actually render that feature bundle.[For example, in English, any token of the string walked will be double-counted as both the past tense and the past participle of the lemma walk. This problem holds for all regular English verbs.Similarly, when we are counting the present-tense tokens lay of the lemma lay, we will also include tokens of the string lay that are actually the past tense of lie, or are actually the adjective or noun senses of lay.The alternative to double-counting each ambiguous token would have been to use EM to split the token's count of 1 unequally among its possible analyses, in proportion to their estimated prior probabilities <cit.>.]From these counts, we estimated a unigram distribution over triples, using Laplace smoothing (add-1 smoothing).We then sampled 12000 triples without replacement from this distribution.The first 100 were taken as the low-resource training set for sub-task 1, the first 1000 as the medium-resource training set, and the first 10000 as the high-resource training set. Note that these training sets are nested, and that the highest-count triples tend to appear in the smaller training sets.The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each.The final shuffling was performed to ensure that the development set is similar to the test set.By contrast, the development and test sets tend to contain lower-count triples than the training set.[This is a realistic setting, since supervised training is usually employed to generalize from frequent words that appear in annotated resources to less frequent words that do not.Unsupervised learning methods also tend to generalize from more frequent words (which can be analyzed more easily by combining information from many contexts) to less frequent ones.] In those languages where we have fewer than 12000 total forms, we omit the high-resource training set (all languages have at least 3000 forms).To sample the data for sub-task 2, we perform a similar procedure.
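Before turning to the sub-task 2 details, the sub-task 1 sampling just described can be summarized in a short sketch. This is our own re-implementation, not the exact script used to build the data: it realizes weighted sampling without replacement via exponential sort keys (the Efraimidis-Spirakis method), and the function name, seed handling and key trick are our choices.

import random

def sample_splits(counts, seed=0):
    # counts: dict mapping (lemma, features, form) -> Wikipedia token count
    rnd = random.Random(seed)
    # Laplace smoothing (add-1), then weighted sampling without replacement:
    # draw u in (0,1) per triple and keep the 12000 largest u**(1/weight).
    keyed = sorted(counts, key=lambda t: rnd.random() ** (1.0 / (counts[t] + 1)),
                   reverse=True)
    sample = keyed[:12000]
    train_low, train_med, train_high = sample[:100], sample[:1000], sample[:10000]
    tail = sample[-2000:]
    rnd.shuffle(tail)                    # final shuffle so dev and test look alike
    dev, test = tail[:1000], tail[1000:]
    return train_low, train_med, train_high, dev, test

Because higher-weight triples tend to receive keys closer to 1, they appear earlier in the ranking; this reproduces the property noted above that the nested smaller training sets are dominated by the highest-count triples, while dev and test draw from the lower-count end.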
For each paradigm in our full dataset, we counted the number of tokens in Wikipedia that matched any of the inflected forms in the paradigm.From these counts, we estimated a unigram distribution over paradigms, using Laplace smoothing. We sampled 300 paradigms without replacement from this distribution. The low-resource training sets contain the first 10 paradigms, the medium-resource training set contains the first 50, and high-resource training set contains the first 200. Again, these training sets are nested. Note that since different languages have paradigms of different sizes, the actual number of training exemplars may differ drastically.With the same motivation as before, we shuffled the remaining 100 forms and took the first 50 as development and the next 50 as test.(In those languages with fewer than 300 forms, we again omitted the high-resource training setting.) For each development or test paradigm, we chose about 1/5 of the slots to provide to the system as input along with the lemma, asking the system to predict the remaining 4/5. We determined which cells to keep by independently flipping a biased coin with probability 0.2 for each cell.Because of the count overestimates mentioned above, our sub-task 1 dataset overrepresents triples where the inflected form (the answer) is ambiguous, and our sub-task 2 dataset overrepresents paradigms that contain ambiguous inflected forms.The degree of ambiguity varied among languages: the average number of triples per inflected form string ranged from 1.00 in Sorani to 2.89 in Khaling, with an average of 1.43 across all languages.Despite this distortion of true unigram counts, we believe that our datasets captured a sufficiently broad sample of the feature combinations for every language.§ PREVIOUS WORKMost recent work in inflection generation has focused on sub-task 1, i.e., generating inflected forms from the lemma. Numerous, methodologically diverse approaches have been published. We highlight a representative sample of recent work. durrett2013supervised heuristically extracted transformation rules and trained a semi-Markov model <cit.> to learn when to apply them to the input. nicolai2015inflection trained a discriminative string-to-string monotonic transduction tool—DirecTL+ <cit.>—to generate inflections. ahlberg2014semi reduced the problem to multi-class classification, where they used finite-state techniques to first generalize inflectional patterns and then trained a feature-rich classifier to choose the optimal such pattern to inflect unseen words <cit.>. Finally, malouf2016, faruqui2015morphological and kann-schutze:2016:P16-2 proposed neural sequence-to-sequence models <cit.>, with the latter making use of an attention mechanism <cit.>. Overall, the neural approaches have generally been found to be the most successful.Some work has also focused on scenarios similar to sub-task 2.For example, dreyer-eisner:2009:EMNLP modeled the distribution over the paradigms of a language as a Markov Random Field (MRF), where each cell is represented as a string-valued random variable.The MRF's factors are specified as weighted finite-state machines of the form given by dreyer-smith-eisner:2008:EMNLP. Building upon this, TACL480 proposed using a Bayesian network where both lemmata (repeated within a paradigm) and affixes (repeated across paradigms) were encoded as string-valued random variables.That work required its finite-state transducers to take a more restricted form <cit.> for computational reasons.
Finally, Kann et al. (2017) proposed a multi-source sequence-to-sequence network, allowing a neural transducer to exploit multiple source forms simultaneously.

SIGMORPHON 2016 Shared Task. Last year, the SIGMORPHON 2016 shared task (<http://sigmorphon.org/sharedtask>) focused on 10 languages (including 2 surprise languages). As for the present 2017 task, most of the 2016 data was derived from Wiktionary. The 2016 shared task had submissions from 9 competing teams with members from 11 universities. As mentioned in <ref>, our sub-task 1 is an extension of sub-task 1 from 2016. The other sub-tasks in 2016 focused on the more general reinflection problem, where systems had to learn to map from any inflected form to any other with varying degrees of annotation. See Cotterell et al. (2016) for details.

§ THE BASELINE SYSTEM

The shared task provided a baseline system to participants that addressed both tasks and all languages. The system was designed for speed of application and also for adequate accuracy with little training data, in particular in the low and medium data conditions. The design of the baseline was inspired by the University of Colorado's submission <cit.> to the SIGMORPHON 2016 shared task.

§.§ Alignment

For each (lemma, feature bundle, inflected form) triple in the training data, the system initially aligns the lemma with the inflected form by finding the minimum-cost edit path. Costs are computed with a weighted scheme such that substitutions have a slightly higher cost (1.1) than insertions or deletions (1.0). For example, the German training data pair schielen–geschielt 'to squint' (going from the lemma to the past participle) is aligned as:

    - - s c h i e l e n
    g e s c h i e l t -

The system now assumes that each aligned pair can be broken up into a prefix, a stem and a suffix, based on where the inputs or outputs have initial or trailing blanks after alignment. We assume that initial or trailing blanks in either input or output reflect boundaries between a prefix and a stem, or a stem and a suffix. This allows us to divide each training example into three parts. Using the example above, the pair would be aligned as follows, after padding the edges with $-symbols:

              prefix   stem+suffix
    lemma     $        schielen$
    output    $ge      schielt$

§.§ Inflection Rules

From this alignment, the system extracts a prefix-changing rule based on the prefix pairing, as well as a set of suffix-changing rules based on suffixes of the stem+suffix pairing. The example alignment above yields the eight extracted suffix-modifying rules

    n$ → $             ielen$ → ielt$
    en$ → t$           hielen$ → hielt$
    len$ → lt$         chielen$ → chielt$
    elen$ → elt$       schielen$ → schielt$

as well as the prefix-modifying rule $ → $ge. Since these rules were obtained from the triple (schielen, V;V.PTCP;PST, geschielt), they are associated with a token of the feature bundle V;V.PTCP;PST.
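To make the rule extraction concrete, the following Python sketch derives the suffix rules from the aligned character columns of the stem+suffix pairing. It is an illustration under the assumptions stated in the comments, not the distributed baseline code.

    def extract_suffix_rules(columns):
        """Extract suffix-modifying rules from one aligned pair (sketch).

        columns: aligned character pairs for the stem+suffix region, where
        '' marks a gap, e.g. for schielen$ -> schielt$:
        [('s','s'), ('c','c'), ('h','h'), ('i','i'), ('e','e'),
         ('l','l'), ('e','t'), ('n',''), ('$','$')]
        """
        rules = []
        for j in range(len(columns)):
            src = ''.join(a for a, _ in columns[j:])
            tgt = ''.join(b for _, b in columns[j:])
            if src != '$':                 # skip the trivial '$' -> '$' rule
                rules.append((src, tgt))
        return rules

    # On the example columns this reproduces the eight rules above,
    # from 'n$ -> $' up to 'schielen$ -> schielt$'.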
§.§ Generation

At test time, to inflect a lemma with features, the baseline system applies rules associated with training tokens of the precise feature bundle. There is no generalization across bundles that share features. Specifically, the longest-matching suffix rule associated with the feature bundle is consulted and applied to the input form. Ties are broken by frequency, in favor of the rule that has occurred most often with this feature bundle. After this, the prefix rule that occurred most often with the bundle is likewise applied. That is, the prefix-matching rule has no longest-match preference, while the suffix-matching rule does. For example, to inflect kaufen 'to buy' with the features V;V.PTCP;PST, using the single example above as training data, we would find that the longest matching stored suffix rule is en$ → t$, which would transform kaufen into an intermediate form kauft, after which the most frequent prefix rule, $ → $ge, would produce the final output gekauft. If no rules have been associated with a particular feature bundle (as often happens in the low data condition), the inflected form is simply taken to be a copy of the lemma.

In sub-task 2, paradigm completion, the baseline system simply repeats the sub-task 1 method and generates all the missing forms independently from the lemma. It does not take advantage of the other forms that are presented in the partially filled paradigm.

In addition to the above, the baseline system uses a heuristic to place a language into one of two categories: largely prefixing or largely suffixing. Some languages, such as Navajo, are largely prefixing and have more complex changes at the left periphery of the input rather than at the right. However, in the method described above, the operation of the prefix rules is more restricted than that of the suffix rules: prefix rules tend to perform no change at all, or to insert or delete a prefix. For largely prefixing languages, the method performs better when operating with reversed strings. Classifying a language as prefixing or suffixing is done by simply counting how often there is a prefix change vs. a suffix change in going from the lemma to the inflected form in the training data. Whenever a language is found to be largely prefixing, the system works with reversed strings throughout, to allow more expressive changes at the left edge of the input.
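The generation step and the prefixing test are both compact to state. The following Python sketch is illustrative only: the rule stores are assumed to be collections.Counter objects keyed by the exact feature bundle, and all identifiers are ours, not the baseline's.

    import os

    def inflect(lemma, bundle, suffix_rules, prefix_rules):
        """Apply the baseline's stored rules for one feature bundle (sketch).

        suffix_rules[bundle] / prefix_rules[bundle]: Counter mapping a
        (src, tgt) rule pair to its training frequency for this bundle.
        """
        if bundle not in suffix_rules:      # unseen bundle: copy the lemma
            return lemma
        form = '$' + lemma + '$'
        # Longest-matching suffix rule; ties broken by training frequency.
        matches = [(len(src), count, src, tgt)
                   for (src, tgt), count in suffix_rules[bundle].items()
                   if form.endswith(src)]
        if matches:
            _, _, src, tgt = max(matches)
            form = form[:len(form) - len(src)] + tgt
        # Most frequent prefix rule (no longest-match preference).
        if prefix_rules.get(bundle):
            (src, tgt), _ = prefix_rules[bundle].most_common(1)[0]
            if form.startswith(src):
                form = tgt + form[len(src):]
        return form.replace('$', '')        # kaufen -> gekauft, as above

    def is_prefixing(pairs):
        """Classify a language as largely prefixing by counting whether the
        change between lemma and form happens at the left or right edge."""
        prefix_changes = suffix_changes = 0
        for lemma, form in pairs:
            pre = len(os.path.commonprefix([lemma, form]))
            suf = len(os.path.commonprefix([lemma[::-1], form[::-1]]))
            if suf > pre:
                prefix_changes += 1          # left-edge change
            elif pre > suf:
                suffix_changes += 1          # right-edge change
        return prefix_changes > suffix_changes   # if True: reverse all strings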
§ SYSTEM DESCRIPTIONS

The CoNLL-SIGMORPHON 2017 shared task received submissions from 11 teams with members from 15 universities and institutes (<ref>). Many of the teams submitted more than one system, yielding a total of 25 unique systems entered, including the baseline system. In contrast to the 2016 shared task, all but one of the submitted systems included a neural component. Despite the relative uniformity of the submitted architectures, we still observed large differences in individual performance. Rather than differences in architecture, a major difference this year was the various methods for supplying the neural network with auxiliary training data. For ease of presentation, we break the systems down by their characteristic features (see <ref>) and discuss the systems that share those features. In all cases, further details of the methods can be found in the system description papers, which are cited in <ref>.

Neural Parameterization. All systems except for the EHU team's employed some form of neural network. Moreover, all teams except for SU-RUG, which employed a convolutional neural network, made use of some form of gated recurrent network—either gated recurrent units (GRUs) <cit.> or long short-term memory (LSTM) <cit.>. In these neural models, a common strategy was to feed the morphological tag of the form to be predicted into the network along with the input, where each subtag was its own symbol.

Hard Alignment versus Soft Attention. Another axis along which the systems differ is the use of hard alignment versus soft attention. The neural attention mechanism was introduced by Bahdanau et al. (2015) for neural machine translation (NMT). In short, these mechanisms avoid the necessity of encoding the input word into a fixed-length vector by allowing the decoder to attend to different parts of the input. Just as in NMT, the attention mechanism has led to large gains in morphological inflection. The CMU, CU, IIT (BHU), LMU, UE-LMU, UF and UTNII systems all employed such mechanisms.

An alternative to soft attention is hard, monotonic alignment, i.e., a neural parameterization of a traditional finite-state transduction system. These systems enforce a monotonic alignment between source and target forms. In the 2016 shared task <cit.> such a system placed second <cit.>, and this year's winning system—CLUZH—was an extension of that one. (See also Aharoni and Goldberg (2017) for a further explication of the technique, and Rastogi et al. (2016) for discussion of a related neural parameterization of a weighted finite-state machine.) Their system allows for explicit biasing towards a copy action, which appears useful in the low-resource setting. Despite its neural parameterization, the CLUZH system is most closely related to the systems of UA and EHU, which train weighted finite-state transducers, albeit with a log-linear parameterization.

Reranking. Reranking the output of a weaker system was a tack taken by two systems: ISI and UA. The ISI system started with a heuristically induced candidate set, using the edit tree approach described by Chrupała (2008), and then chose the best edit tree. This approach is effectively a neuralized version of the lemmatizer proposed by Müller et al. (2015) and, indeed, was originally intended for that task <cit.>. The UA team, following their 2016 submission, proposed a linear reranker on top of the k-best output of their transduction system.

Data Augmentation. Many teams made use of auxiliary training data—unlabeled or synthetic forms. Some teams leveraged the provided Wikipedia corpora (see <ref>). The UE-LMU team used these unlabeled corpora to bias their methods towards copying by transducing an unlabeled word to itself. The same team also explored a similar setup that instead learned to transduce random strings to themselves, and found that using random strings worked almost as well as words that appeared in unlabeled corpora. CMU used a variational autoencoder and treated the tags of unannotated words in the Wikipedia corpus as latent variables (see Zhou and Neubig (2017) for more details). Other teams attempted to obtain silver-standard labels for the unlabeled corpora. For example, the UA team trained a tagger on the given training examples and then tagged the corpus with the goal of obtaining additional instances, while the UE-LMU team used a series of unsupervised heuristics.
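The random-string variant of this copy bias is particularly easy to sketch. In the Python illustration below, the COPY tag and the length range are our own illustrative assumptions, not the UE-LMU team's actual settings.

    import random
    import string

    def copy_bias_triples(n, alphabet=string.ascii_lowercase, seed=0):
        """Create synthetic triples that map a random string to itself,
        biasing an encoder-decoder towards the copy operation (sketch)."""
        rng = random.Random(seed)
        triples = []
        for _ in range(n):
            w = ''.join(rng.choice(alphabet)
                        for _ in range(rng.randint(3, 12)))
            triples.append((w, 'COPY', w))   # 'COPY' is a hypothetical tag
        return triples

    # These synthetic triples are simply mixed into the supervised training data.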
The CU team—which did not make use of external resources—hallucinated more training data by identifying suffix and prefix changes in the given training pairs and then using that information to create new artificial training pairs. The LMU submission also experimented with hand-written rules to artificially generate more data. It seems likely that the primary difference in the performance of the various neural systems lay in these strategies for creating new training data, rather than in the neural architectures themselves.

§ PERFORMANCE OF THE SYSTEMS

Relative system performance is described in <ref>, which shows the average per-language accuracy of each system by resource condition, for each of the sub-tasks. The table reflects the fact that some teams submitted more than one system (e.g., LMU-1 & LMU-2 in the table). Learning curves for each language across conditions are shown in <ref>, which indicates the best per-form accuracy achieved by a submitted system. Full results can be found in <ref>, including full-paradigm accuracy.

Three teams exploited external resources in some form: UA, CMU, and UE-LMU. In general, any relative performance gained was minimal. The CMU system was outranked by several systems that avoided external resources in the High and Medium conditions in which it competed. UE-LMU only submitted a system that used additional resources in the Medium condition, and saw gains of ∼1% compared to their basic system, while it was still outranked overall by CLUZH. In the Low condition, UA saw gains of ∼3% using external data. However, all UA submissions were limited to a small handful of languages.

All but one of the systems submitted were neural. As expected given the results from SIGMORPHON 2016, these systems perform very well in the High training condition, where data is relatively plentiful. In the Low and Medium conditions, however, standard encoder-decoder architectures perform worse than the baseline using only the training data provided. Teams that beat the baseline succeeded by biasing networks towards the correct solutions through pre-training on synthetic data designed to capture the overall inflectional patterns of a language. As seen in <ref>, these techniques worked better for some languages than for others. Languages with smaller, more regular paradigms were handled well (e.g., English sub-task 1 low-resource accuracy was at 90%). Languages with more complex systems, like Latin, proved more challenging (the best system achieved only 19% accuracy in the low condition). For these languages, it is possible that the relevant variation required to learn the inflectional patterns was simply not present in the limited training data, and that a language-specific learning bias was required.

Even though the top-ranked systems do well on their own, different systems may contain some amount of complementary information, so that an ensemble over multiple approaches has a chance to improve accuracy. We present an upper bound on the possible performance of such an ensemble. <ref> and <ref> include an “Ensemble Oracle” system (oracle-e) that gives the correct answer if any of the submitted systems is correct. The oracle performs significantly better than any one system in both the Medium (∼10%) and Low (∼15%) conditions.
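Both oracles are straightforward to compute from system outputs. The following Python sketch uses hypothetical data-structure names to show exactly what each upper bound measures; it is not the evaluation script itself.

    def oracle_scores(gold, system_outputs, seen_bundles):
        """Compute the oracle-e and oracle-fc accuracies (sketch).

        gold           : list of (lemma, bundle, form) test triples
        system_outputs : list of per-system prediction lists, aligned to gold
        seen_bundles   : set of feature bundles observed in the training data
        """
        n = len(gold)
        # oracle-e: a test item counts as correct if ANY system predicted it.
        oracle_e = sum(
            any(preds[i] == form for preds in system_outputs)
            for i, (_, _, form) in enumerate(gold)) / n
        # oracle-fc: correct iff the feature bundle was seen in training
        # (with any lemma); an upper bound for systems treating bundles
        # as atomic.
        oracle_fc = sum(bundle in seen_bundles for _, bundle, _ in gold) / n
        return oracle_e, oracle_fc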
This suggests that the different strategies used by teams to “bias” their systems in an effort to make up for sparse data lead to substantially different generalization patterns.

For sub-task 1, we also present a second “Feature Combination” oracle (oracle-fc) that gives the correct answer for a given test triple iff its feature bundle appeared in training (with any lemma). Thus, oracle-fc provides an upper bound on the performance of systems that treat a feature bundle such as V;SBJV;FUT;3;PL as atomic. In the low-data condition, this upper bound was only 71%, meaning that 29% of the test bundles had never been seen in the training data. Nonetheless, systems should be able to make some accurate predictions on this 29% by decomposing each test bundle into individual morphological features such as FUT (future) and PL (plural), and generalizing from training examples that involve those features. For example, a particular feature or sub-bundle might be realized as a particular affix. Several of the systems treated each individual feature as a separate input to the recurrent network, in order to enable this type of generalization. In the medium-data condition for some languages, these systems sometimes far surpassed oracle-fc. The most notable example of this is Basque, where oracle-fc produced a 47% accuracy while six of the submitted systems produced an accuracy of 85% or above. Basque is an extreme example with very large paradigms for the verbs that inflect in the language (only a few dozen common ones do). This result demonstrates the ability of the neural systems to generalize and correctly inflect according to unseen feature combinations.

§ FUTURE DIRECTIONS

As regards morphological inflection, there is a plethora of future directions to consider. First, one might consider morphological transductions over pronunciations, rather than spellings. This is more challenging in the many languages (including English) where the orthography does not reflect the phonological changes that accompany morphological processes such as affixation. Orthography usually also does not reflect predictable allophonic distinctions in pronunciation <cit.>, which one might attempt to predict, such as the difference in aspiration of /t/ in English [tʰɑp] (top) vs. [stɑp] (stop).

A second future direction involves the effective incorporation of external unannotated monolingual corpora into state-of-the-art inflection or reinflection systems. The best systems in our competition did not make use of external data, and those that did make heavy use of such data, e.g., the CMU team, did not see much gain. The best way to use external corpora remains an open question; we surmise that they can be useful, especially in the lower-resource cases. A related line of inquiry is the incorporation of cross-lingual information, which Kann et al. (2017) did find to be helpful.

A third direction revolves around the efficient elicitation of morphological information (i.e., active learning). In the low-resource setting, we asked our participants to find the best approach to generating new forms given existing morphological annotation. However, it remains an open question which of the cells in a paradigm are best to collect annotation for in the first place. Likely, it is better to collect diagnostic forms that are closer to the principal parts of the paradigm <cit.>, as these will contain enough information that the remaining transformations are largely deterministic.
Experimental studies, however, suggest that speakers also rely strongly on pattern frequencies when inferring unknown forms <cit.>. Another interesting direction would therefore be to organize the data according to plausible real frequency distributions (especially in spoken data) and to explore the possibly varying learning strategies associated with lexical items of various frequencies.

Finally, there is a wide variety of other tasks involving morphology. While some of these have had a shared task, e.g., the parsing of morphologically rich languages <cit.> and unsupervised morphological segmentation <cit.>, many have not, e.g., supervised morphological segmentation and morphological tagging. A key purpose of shared tasks in the NLP community is the preparation and release of standardized datasets for fair comparison among methods. Future shared tasks in other areas of computational morphology would seem in order, given the overall effectiveness of shared tasks in unifying research objectives in subfields of NLP, and as a starting point for possible cross-over with cognitively grounded theoretical and quantitative linguistics.

§ CONCLUSION

The CoNLL-SIGMORPHON shared task provided an evaluation on 52 languages, with large and small datasets, of systems for inflection and paradigm completion—two core tasks in computational morphological learning. On sub-task 1 (inflection), 24 systems were submitted, while on sub-task 2 (paradigm completion), 3 systems were submitted. All but one of the systems used rather similar neural network models, popularized by the SIGMORPHON shared task in 2016. The results reinforce the conclusions of the 2016 shared task that encoder-decoder architectures perform strongly when training data is plentiful, with exact-match accuracy on held-out forms surpassing 90% for many languages; we note there was a shortage of non-neural systems this year to compare with. In addition, and contrary to common expectation, many participants showed that neural systems can do reasonably well even with small training datasets. A baseline sequence-to-sequence model achieves close to zero accuracy: e.g., Silfverberg et al. (2017) reported that all of their team's neural models in the low data condition delivered accuracies in the 0–1% range without data augmentation, and other teams reported similar findings. However, with judicious application of biasing and data augmentation techniques, the best neural systems achieved over 50% exact-match prediction of inflected form strings with 100 examples, and 80% with 1,000 examples, as compared to 38% for a baseline system that learns simple inflectional rules.

It is hard to say whether these are “good” results in an absolute sense. An interesting experiment would be to pit the small-data systems against human linguists who do not know the languages, to see whether the systems are able to identify the predictive patterns that humans discover (or miss). An oracle ensembling of all systems shows that there is still much room for improvement, in particular in low-resource settings. We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into the learning of inflectional morphology and string-to-string transduction.

§ ACKNOWLEDGEMENTS

The first author would like to acknowledge the support of an NDSEG fellowship. Google provided support for the shared task in the form of an award.
Several authors (CK, DY, JSG, MH) were supported in part by the Defense Advanced Research Projects Agency (DARPA) in the program Low Resource Languages for Emergent Incidents (LORELEI) under contract No. HR0011-15-C-0113. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).
{ "authors": [ "Ryan Cotterell", "Christo Kirov", "John Sylak-Glassman", "Géraldine Walther", "Ekaterina Vylomova", "Patrick Xia", "Manaal Faruqui", "Sandra Kübler", "David Yarowsky", "Jason Eisner", "Mans Hulden" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170627200234", "title": "CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection in 52 Languages" }
Way to Go! Automatic Optimization of Wayfinding Design

Haikun Huang, Ni-Ching Lin, Lorenzo Barrett, Darian Springer, Hsueh-Cheng Wang, Marc Pomplun, Lap-Fai Yu, Member, IEEE

H. Huang, L. Barrett, D. Springer, M. Pomplun and L.-F. Yu are with the University of Massachusetts Boston. N.-C. Lin and H.-C. Wang are with the National Chiao Tung University.

Abstract—Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians find their ways in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, as is the wayfinding design of a real-world architectural site. The many possible navigation scenarios, as well as the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years <cit.>. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed, considering human agents' visibility and their possibility of making mistakes during navigation. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts such as a train station, a downtown and a canyon. We evaluate our results by comparing different wayfinding designs, and show that our optimized wayfinding design can guide pedestrians to their destinations effectively and efficiently. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any “blind zone” with additional signs.

Index Terms—wayfinding, navigation, procedural modeling, level design, spatial orientation

§ INTRODUCTION

Imagine walking in a subway station with no wayfinding signs. How could you walk to the right platform after you buy your ticket? After some random trials, you might finally find your way to the platform, but this probably would not be a pleasant experience. You would have saved much time and energy if wayfinding signs had been placed properly in the environment to guide you through. A layout with no wayfinding signs is as confusing as a maze.

In “The VR Book” <cit.>, Jerald points out that wayfinding aids are especially important in virtual environments because it is very easy to get disoriented throughout a navigation in a virtual space. A well-constructed environment should include environmental wayfinding aids thoughtfully placed by the level designers, considering the possible navigation and the navigation goals of the user.
Recently, interesting experiments by Darken and Peterson <cit.> verified that most users feel rather uncomfortable in a largely void virtual environment, and that it is important to regularly reassure users that they are not lost throughout a navigation.

Conventionally, level designers rely mainly on experience or a “common sense approach” <cit.> in creating a wayfinding design. Given an environment, they think of all likely navigation scenarios that the user will go through and then place wayfinding signs or other aids to guide the user accordingly. For example, for a train station, one common scenario is to walk from the ticket machine, through the gate, and then to the right platform. Another common scenario is to walk from the platform to the exit. Directional signs are then placed along the routes. While this design approach is straightforward, the effort required quickly becomes daunting when the number of scenarios scales up as in a real-world situation. For example, a real-world train station typically involves tens or more navigation scenarios. Moreover, when placing the signs, it is necessary to consider the user's visibility and the fact that the user may miss a sign or make mistakes throughout the navigation. Designing a wayfinding scheme that jointly considers all these factors is highly non-trivial and challenging, while a sub-optimal wayfinding scheme may easily result in a confusing and frustrating navigation experience for the users.

To tackle these problems, in this work we introduce a novel computational approach to automatically generate a wayfinding design for a given environment. To use our approach, the designer simply specifies all the navigation scenarios likely to be taken by the users. Our approach will then generate a wayfinding design to accommodate the needs of all the scenarios while considering a number of desirable factors relevant to the navigation experience and management convenience. Through agent-based simulations, our approach further refines the locations of the wayfinding signs by considering visibility and robustness with respect to the possible mistakes made by the users throughout their navigation. After generating a wayfinding design, the designer can gain further insights into the design by visualizing the accessibility of a destination from any other location in the environment, and remove any blind zones (if necessary) by adding more signs and re-triggering the optimization.

In a real-world architectural site, typical wayfinding aids include signs, landmarks and GPS-based mobile navigation systems. In a virtual environment, additional virtual wayfinding aids such as compasses <cit.> and mini-maps <cit.> can also be used to facilitate wayfinding. In this work, we focus on generating signs to guide the user because: 1) signs are a very common and universal means for wayfinding; 2) signs as wayfinding aids are direct yet subtle—the user usually does not need to stop walking while reassuring himself of his direction with a sign he sees on his way, in contrast to using other wayfinding aids such as a map, which requires the user to stop his locomotion; 3) signs integrate naturally with most indoor and outdoor environments.

The major contributions of our work include:

* introducing a novel optimization and agent-based approach for automatically generating wayfinding designs.

* demonstrating the capability of our approach for generating wayfinding designs for different layouts.
* showing how our approach can be further applied for visualizing and editing a generated wayfinding design.

* evaluating the effectiveness of our automatically generated wayfinding designs in guiding the navigation of users by comparing with other wayfinding designs.

Additionally, we implement our approach as a handy plugin for the Unity game engine, which can be used by game level designers to automatically and quickly generate wayfinding schemes for their virtual worlds, hence saving the time and manual effort spent on determining users' paths and placing wayfinding signs. We will release the plugin for public use.

§ RELATED WORK

To the best of our knowledge, there is no existing work on automatically generating wayfinding designs for a given layout. We review some relevant work in wayfinding design for real-world and virtual environments. We also review some work on sign perception, navigation and path planning, which brings useful insights about the human factors to consider in a wayfinding design.

§.§ Conventional Wayfinding Design

We give a succinct overview of the real-world wayfinding design process, which inspires our computational approach for generating wayfinding designs. In architectural design, wayfinding refers to the user experience of orientation and choosing paths within a built environment. In the book The Image of the City <cit.>, Lynch defined wayfinding as the “consistent use and organization of definite sensory cues from the external environment”. Environmental psychologists later extended the definition of wayfinding to also include the use of signage and other graphical and visual clues that aid orientation and navigation in built environments <cit.>.

The process of wayfinding involves four major steps <cit.>: orientation, path decision, path monitoring and destination recognition. Orientation refers to determining one's current location. Path decision refers to selecting paths to navigate to the destination. Path monitoring refers to continuously verifying that the path indeed leads to the destination. Finally, destination recognition refers to confirming that the destination has been reached. Our goal in this work is to automatically generate a wayfinding design for a given environment to facilitate the above wayfinding steps.

Today almost all public spaces and private premises require a wayfinding scheme <cit.> to ensure that they are universally accessible to all users <cit.>. To achieve this goal, after a layout is designed by architects, a wayfinding design team <cit.> decides on the wayfinding signs to put in the environment. In current practice, the design team manually creates a wayfinding scheme following a “Common Sense Approach” based mainly on experience <cit.>. Given a new premise such as a train station, a wayfinding scheme is designed following these major steps:

* Identifying Major Paths: The design team first identifies the major paths likely to be taken by pedestrians, by experience or by surveys with the property managers. The team examines the site's floor plan or makes an on-site inspection to estimate people flows <cit.>. The goal is to gain a comprehensive understanding of the site.

* Devising a Wayfinding Scheme: Considering all the major paths, the design team determines the types and locations of the wayfinding signs, which should be placed at an appropriate height and angle clearly visible to pedestrians. Additional signs should be placed to eliminate any possible confusion caused by the architecture itself.
As an example, Figure <ref> shows the circulation analysis and a wayfinding scheme manually created for a concert hall.

* Designing, Fabricating and Placing Signs: After devising the sign placement, the designers design the appearance of the signs to be manufactured and placed in the real environment.

* Evaluation, Maintenance and Update: The team maintains the wayfinding signs in a database and reviews the sign placement periodically to replace any outdated signs.

Interested readers may refer to the literature <cit.> for more details of the design process. Similar to the real-world wayfinding design process, our computational approach focuses on automatically identifying locations for placing signs in an environment according to the designer-specified navigation goals of the pedestrians.

§.§ Wayfinding Design for Virtual Environments

Wayfinding aids are crucial in virtual environments because they help users form cognitive maps, maintain a sense of position and direction of travel, and find their ways to their destinations <cit.>. Common wayfinding aids in virtual environments include signs, maps, landmarks, light, and paths <cit.>. In designing a highly immersive and steerable virtual environment, it is important for level designers to use wayfinding aids effectively to enhance spatial understanding of the environment, so that users can comprehend it and operate smoothly <cit.>. This principle also applies to game level design. In his book, game designer Michael Salmond emphasizes the use of a road sign system in games as an important wayfinding tool for providing players with a highly immersive navigation experience <cit.>. Figure <ref> shows some example road signs used in the popular video games Fallout 4 and The Elder Scrolls IV: Oblivion.

In current practice, wayfinding aids are manually added to a virtual environment by level designers and then empirically tested for effectiveness, which depends on the quantity and quality of the wayfinding aids provided; research has found that users can be overwhelmed if exposed to too many wayfinding aids <cit.>. Darken and Sibert conducted an important study <cit.> of the wayfinding strategies and behaviors of human users in large virtual worlds. Their experiments verified that human wayfinding strategies and behaviors in large virtual worlds are strongly influenced by environmental cues, and asserted that humans generally adopt physical-world wayfinding strategies in large virtual worlds; hence, common wayfinding aids in the physical world can be effectively applied to facilitate wayfinding in virtual worlds. Based on these insights, Cliburn and Rilea <cit.> conducted a further study comparing human performance in searching for an object in a virtual environment with no aid present, with maps, and with signs. The results show that subjects who navigated the virtual environment with the aid of signs achieved superior performance compared with the other conditions. These findings motivate us to investigate the automatic propagation of directional signs in virtual environments to enhance wayfinding.

Wayfinding Map Generation. In computer graphics, there are interesting approaches for automatically generating tourist brochures <cit.> and destination maps <cit.>. Though these maps are intended for real-world navigation use, they could potentially be used to assist navigation in virtual environments.
Given a map and some desired destinations, these approaches select a subset of roads to reach the destinations, and visualize the important routing instructions on a generated map which is intuitive to use. Our approach is inspired by these approaches, but focuses on optimizing the placement of wayfinding signs in the layout so as to guide pedestrians to reach their destinations easily. Combining automatically generated maps with the wayfinding signs generated by our approach can potentially provide users with effective wayfinding aids to navigate smoothly in virtual environments.

§.§ Perception, Path Planning and Navigation

Our wayfinding design approach is also inspired by how humans perceive and navigate in everyday environments.

Perception. In everyday environments, humans continually shift their gaze to retrieve wayfinding cues for making navigation decisions <cit.>. Human visual attention is known to be attracted by low-level features such as changes in color, intensity, orientation and contrast <cit.>, and by high-level scene context <cit.>. Some particular categories of objects, such as signs and texts <cit.>, are known to strongly attract eye fixations regardless of their low-level visual saliency. Therefore, we focus on optimizing the placement of wayfinding signs in our approach.

Path Planning and Navigation. Given a layout, there are usually multiple paths a pedestrian can take to navigate from a starting point to a destination. For instance, suppose a hiker wants to walk from the bottom to the top of a hill. He may walk a path which is mostly straight, or a shortcut with sharp turns. A common strategy for path planning is to design a cost function to evaluate each path, and then search for a path that corresponds to a low cost <cit.>. For a low-dimensional configuration space, a grid-based search such as A* <cit.> or D* <cit.> can be applied to find an optimal path. For a high-dimensional configuration space, sampling-based approaches <cit.> are commonly applied to find an optimal or near-optimal solution.

For path planning, common navigation factors to consider in the cost function involve: 1) path length (one wants to choose a short path to reduce the travel time needed to reach the destination); 2) number of turns (one wants to minimize the number of turns to reduce the complexity of the route <cit.>); 3) number of decision points (each intersection is a decision point where the pedestrian needs to decide which road to follow next; one wants to minimize the number of decision points to reduce the chance of making mistakes). Arthur and Passini <cit.> noted that the number of decision points has an important influence on the difficulty of wayfinding. Casakin et al. conducted empirical studies <cit.> which further verify these observations. We consider these criteria in the generation of our wayfinding designs, after which we place the wayfinding signs and refine their placement based on the agents' properties. Moreover, the designer can control the importance of each criterion by adjusting its associated weight, and our approach will generate a wayfinding design accordingly.

Navigation Mistakes. Humans occasionally make mistakes in navigation. For example, it is common for pedestrians to miss a sign due to occlusion by other pedestrians; distractions such as advertisements or events happening in the environment <cit.>; or a wrongly recognized sign or landmark <cit.>. It is also common for pedestrians to make wrong turns in navigation <cit.>.
A well-thought-through wayfinding design should tolerate these kinds of human mistakes <cit.>—a pedestrian should still be able to reach his destination even if he makes mistakes occasionally.

Agent-based Evaluation. Martin Raubal <cit.> used agent-based simulation to evaluate human wayfinding in unfamiliar environments, yet the simulation used does not consider the mistakes that can be made by the agents; further, it is unclear how such evaluations can be used to enable automatic sign placement. In contrast, our agent-based simulations consider navigation mistakes, and we also show how such simulations can be used to create a robust wayfinding design. Our approach is also motivated by autonomous agents <cit.> and crowd simulations <cit.>. However, instead of generating realistic agent simulations, we focus on applying agent-based simulations to optimize wayfinding designs.

To achieve a robust wayfinding design, our approach conducts agent-based simulations when placing the signs to evaluate how well the design can tolerate occasional mistakes made by agents. Using our approach, the designer can control how robust the generated wayfinding design needs to be by changing the agent parameters. For example, in creating the wayfinding design for a subway station where the pedestrians (many of whom are first-time visitors) are generally expected to be unfamiliar with the environment, the designer can adjust the agents to have a higher chance of making mistakes. Our approach will generate a more robust wayfinding design by placing signs at important locations so that pedestrians can still find their ways despite the mistakes.

§.§ Computational Layout Design

Layout design is an important problem in computer graphics. A layout typically consists of a number of sites connected by paths, with each site serving a different purpose. Computer-generated layouts can be used for creating virtual environments where virtual agents and human users can navigate for simulation and entertainment purposes. Galin et al. proposed to generate roads procedurally given a natural landscape with rivers and hills <cit.>. Computationally generated layouts can also be used for architectural design <cit.> and urban planning <cit.>. Refer to the survey <cit.> by Smelik et al. for a comprehensive review of the state-of-the-art procedural modeling techniques for generating layout designs for virtual environments.

An important consideration in designing a layout is the navigation experience of the pedestrians. Recently, Feng et al. proposed an approach <cit.> which uses crowd simulation to generate mid-scale layouts optimal with respect to human navigation properties such as mobility, accessibility and coziness. However, concerning navigation, one important consideration is missing: the wayfinding experience of the pedestrians in the generated environments. We argue that their generated layouts are navigation-aware only if wayfinding signs are properly placed in the layouts.

In this regard, we consider our automatic wayfinding design approach complementary to automatic layout design and road network generation approaches. The wayfinding signs automatically generated by our approach can enhance the navigation experience of users in virtual environments, as we show in our experiments.

§ OVERVIEW

Figure <ref> shows an overview of our approach. We use a layout called City as our illustrative example to describe our approach. Our approach works on a graph representing an input layout.
It consists of two major steps: Wayfinding Scheme Optimization and Agent-based Sign Refinement. In the Wayfinding Scheme Optimization step, our approach determines the paths for pedestrians to walk from the starting points to the destinations under different navigation scenarios specified by the user. Different human-centered navigation criteria, such as turning angles and walking distances, are jointly considered through an optimization to determine the paths to take. In the Agent-based Sign Refinement step, our approach places wayfinding signs strategically at appropriate locations along the paths. By using agent-based simulations to evaluate sign placement, our approach takes into account different human properties such as visibility and the possibility of making navigation mistakes. Depending on the requirements of the navigation scenarios, the designer can easily generate a wayfinding design that satisfies domain-specific requirements by changing the weights of different criteria in the wayfinding scheme optimization and the parameters of the agent-based simulations.

§ PROBLEM FORMULATION

§.§ Representation

Graph Construction. To apply our approach, the user first creates a graph G = {V, E} to represent the input layout, where V is the set of nodes representing the intersections, entrances and points-of-interest (POIs), and E is the set of edges representing the connecting paths between adjacent nodes. The creation process is simple and is similar to specifying a waypoint system in typical game level design. The user places nodes at the intersections, entrances and POIs of the layout. For example, in the illustrative example, City, the POIs include the school, the post office and so forth. The user also adds an edge between two adjacent nodes if the places represented by the nodes are connected by a road.

Source-Destination Pairs. A source-destination pair encodes a navigation scenario to be considered by our approach, e.g., going from a bus stop to a restaurant, akin to an input pair a wayfinding designer creates to specify a navigation scenario in conventional wayfinding design <cit.>. Each pair z_i = (s_i, d_i) consists of a source (starting point) s_i and a destination d_i. To facilitate the creation of source-destination pairs, by default our approach automatically generates a source-destination pair between every node representing an entrance and every node representing a POI, with the former being the source and the latter being the destination. Additionally, the user can specify any extra pairs if needed. For instance, in the City example, he may want to create a pair connecting the hotel and the restaurant.

Importance Values. We also allow the user to assign an importance value κ_i ∈ [0,1] to each source-destination pair. For instance, in the City example, the (Hotel, Restaurant) pair can be given a higher importance value if many pedestrians are expected to walk from the Hotel to the Restaurant, whereas the (School, Restaurant) pair can be given a lower importance value if fewer pedestrians are expected to walk from the School to the Restaurant. In the optimization, the path connecting the Hotel with the Restaurant should then be given a higher priority than the path connecting the School to the Restaurant.
If a trade-off exists, it is important to make sure that pedestrians can walk conveniently from the Hotel to the Restaurant, while it may not matter as much if pedestrians walk a somewhat inconvenient path from the School to the Restaurant.

§ WAYFINDING SCHEME OPTIMIZATION

Given a source-destination pair z_i = (s_i, d_i), there could exist multiple possible paths from s_i to d_i. Let P_z_i denote the set of all such paths. Our goal in this step is to generate a wayfinding scheme that takes all source-destination pairs {z_i} into account and selects a path for each pair. In other words, we select a path p_i ∈ P_z_i for each pair z_i, such that the set of all selected paths P = {p_i} satisfies some local and global criteria defined by our cost terms. We formulate our problem as the optimization of a total cost function:

C^P_all(P) = w^L_local C^L_local + w^N_local C^N_local + w^A_local C^A_local + w^L_global C^L_global + w^N_global C^N_global.

The total cost function C^P_all(P) is a weighted sum of cost terms encoding the length, number of decision points and amount of turning of each path, as well as the length and number of decision points of the overall wayfinding scheme. The user can adjust the importance of different design criteria by changing the weights of the corresponding cost terms, to accommodate the domain-specific needs of the layout for which the wayfinding scheme is designed. We describe each cost term in detail as follows.

§.§ Wayfinding Cost Terms

Local Path Length. In general, pedestrians prefer to walk a short distance <cit.>. Hence, for each source-destination pair, a shorter path is preferred. We define a cost to penalize the length of the selected path of each source-destination pair:

C^L_local(P) = 1/(|P| L_E) ∑_{p ∈ P} κ_p L(p),

where |P| L_E is the normalization factor, with |P| being the number of source-destination pairs and L_E being the total length of all edges in graph G. L(p) returns the length of path p. κ_p ∈ [0,1] is the importance value assigned to the source-destination pair that path p belongs to.

Local Path Node. The nodes in our formulation correspond to decision points in the wayfinding literature <cit.>. Decision points are locations where pedestrians need to make a decision about which direction to go, such as an intersection between paths (e.g., a lobby in a subway station); or where pedestrians need to confirm the identity of the current location, such as a place of interest (e.g., a platform in a subway station). Directional or identification signs need to be placed at decision points to guide pedestrians to find their directions <cit.> or identify their current locations. Paths with many decision points should be avoided <cit.>, as making each navigation decision induces stress in pedestrians for fear of making a wrong decision that may lead to a wrong place <cit.>. Therefore we define a cost to penalize the number of decision points of each path:

C^N_local(P) = 1/(|P| |V|) ∑_{p ∈ P} κ_p N(p),

where |P| |V| is the normalization factor, with |P| being the number of source-destination pairs and |V| being the total number of nodes in graph G. N(p) returns the total number of nodes along path p.

Local Path Angle. Research in spatial orientation <cit.> suggests that paths with varying orientation tend to confuse pedestrians in wayfinding, causing disorientation, anxiety and discomfort <cit.>. A wayfinding scheme composed of straight paths is more intuitive for navigation <cit.>.
We therefore include a cost term to penalize the selection of paths with varying orientation:

C^A_local(P) = 1/(|P| |V| π) ∑_{p ∈ P} κ_p A(p),

where |P| |V| π is the normalization factor, with |P| being the number of source-destination pairs and |V| being the total number of nodes in graph G; the maximum absolute turning angle between two adjacent edges is π. A(p) returns the sum of absolute turning angles between all adjacent edges along path p.

Global Path Length. Our approach encourages paths to overlap with each other so as to minimize the total length of the roads (edges) that are part of a path. This property could be useful from the management's perspective <cit.>, because by directing the flow of human movement to fewer roads, fewer roads will need to be maintained, patrolled and lit. We define a cost to encourage overlapping paths accordingly:

C^L_global(P) = L(P) / L_E,

where L_E, the total length of all edges in graph G, is the normalization factor. L(P) returns the total length of the edges that belong to any path in P.

Global Path Node. Our approach also encourages different paths to share nodes. Similar designs can be observed in the wayfinding schemes of different real-world premises, such as subway stations, shopping malls and concert halls, where people are directed to a lobby or an information desk that can lead to multiple destinations (see Figure <ref> for an example). From the management's perspective, it could be easier to maintain signs centralized at certain locations in the environment <cit.>. Also, centralizing signs could save space, which could be reserved for other, better uses <cit.>. We define a cost to encourage node sharing accordingly:

C^N_global(P) = N(P) / |V|,

where |V|, the total number of nodes in graph G, is the normalization factor. N(P) returns the total number of nodes that belong to any path in P.
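For illustration, the total cost of one candidate assignment of paths can be evaluated as in the following Python sketch. The networkx-style graph interface, the 'pos' and 'length' attribute names, and the helper functions are assumptions of this sketch rather than part of the released plugin.

    import math

    def total_edge_length(G):
        return sum(d['length'] for _, _, d in G.edges(data=True))

    def path_length(G, p):
        return sum(G.edges[a, b]['length'] for a, b in zip(p, p[1:]))

    def sum_turn_angles(G, p):
        """Sum of absolute turning angles A(p) along path p, each in [0, pi]."""
        total = 0.0
        for a, b, c in zip(p, p[1:], p[2:]):
            (ax, ay), (bx, by), (cx, cy) = (G.nodes[n]['pos'] for n in (a, b, c))
            h1 = math.atan2(by - ay, bx - ax)
            h2 = math.atan2(cy - by, cx - bx)
            d = abs(h2 - h1) % (2 * math.pi)
            total += min(d, 2 * math.pi - d)
        return total

    def total_cost(paths, G, w, kappa):
        """Evaluate C^P_all for one candidate path assignment (sketch).

        paths : dict pair_id -> list of node ids (the selected path)
        w     : weights, keys 'L_loc', 'N_loc', 'A_loc', 'L_glob', 'N_glob'
        kappa : dict pair_id -> importance value of the pair
        """
        P, LE, V = len(paths), total_edge_length(G), G.number_of_nodes()
        used_edges, used_nodes = set(), set()
        c_L = c_N = c_A = 0.0
        for pid, p in paths.items():
            k = kappa[pid]
            c_L += k * path_length(G, p) / (P * LE)               # local length
            c_N += k * len(p) / (P * V)                           # local node
            c_A += k * sum_turn_angles(G, p) / (P * V * math.pi)  # local angle
            used_edges.update(tuple(sorted(e)) for e in zip(p, p[1:]))
            used_nodes.update(p)
        c_Lg = sum(G.edges[e]['length'] for e in used_edges) / LE  # global length
        c_Ng = len(used_nodes) / V                                 # global node
        return (w['L_loc'] * c_L + w['N_loc'] * c_N + w['A_loc'] * c_A +
                w['L_glob'] * c_Lg + w['N_glob'] * c_Ng)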
§.§ Optimization

For each source-destination pair z_i, there exist many possible paths from the source to the destination. For instance, the pair (Bus Stop, School) in the illustrative example (Figure <ref>) has more than 1,000 possible paths. Given the many combinations of possible paths over all pairs, the solution space could be huge, as it grows exponentially with the number of pairs being considered.

To reduce the search space for a solution, we devise a sampling-based, stochastic search algorithm to solve the optimization problem as follows. For each pair, we only consider the first k loopless shortest paths, which can be found by Yen's algorithm <cit.> in O(k|V|(|E| + |V| log |V|)) time using a Fibonacci heap, where |E| is the number of edges and |V| is the number of nodes. Deviation algorithms <cit.> and alternative implementations <cit.> exist that could further enhance computational efficiency, yet we adopt the classical implementation for simplicity.

Given the k shortest paths for each source-destination pair, we find a combination of paths over all source-destination pairs which corresponds to a low cost value. Even though we reduce the size of the solution space this way, an exhaustive search for the global optimum would still require heavy computation exponential in the number of pairs being considered. Instead, our approach finds a local optimum as an approximate solution. We apply the simulated annealing technique <cit.> with a Metropolis–Hastings <cit.> state-searching step to explore the complex optimization landscape. The optimization proceeds iteratively. In each iteration, the current solution P is altered by a proposed move to another solution P′, which may or may not be accepted depending on the acceptance probability of the proposed solution. More specifically, the acceptance probability is calculated by the Metropolis criterion:

Pr(P′ | P) = min(1, e^{(C^P_all(P) − C^P_all(P′)) / T}),

where T is the temperature of the annealing process. T is high at the beginning of the optimization, allowing the optimizer to explore the solution space more aggressively; T is low towards the end of the optimization, allowing the optimizer to refine the solution. Essentially, the optimizer accepts any solution with a lower cost, while it accepts a solution with a higher cost at a probability: the higher the cost, the lower the acceptance probability. The optimization terminates if the absolute change in cost is less than 1% over 1,000 iterations.

Figure <ref> shows the wayfinding schemes generated over the iterations of the optimization process for the illustrative example. Figure <ref> shows the decay in cost over the optimization process. We also experimented with changing the importance values of the source-destination pairs; the resulting wayfinding schemes are depicted in Figure <ref>.

Proposed Moves. Our proposed moves follow a simple design. Depending on the number of source-destination pairs |P|, our optimizer changes the selected paths of up to |P| source-destination pairs in a single move. The probability Pr_x of drawing a move that changes the selected paths of x pairs decreases with x, i.e., Pr_x = (|P| − x + 1)/X, where X = ∑_{i=1}^{|P|} i. A selected path is randomly changed to another path from the set of k shortest paths of its corresponding source-destination pair.

Parameter Settings. In our experiments, we initialize P by randomly selecting a path from the k shortest paths of each source-destination pair. By default we adaptively set k such that the length of the k-th shortest path is just within 16% of the length of the first shortest path, as research in spatial cognition finds that humans typically choose a path with a length within 16% of that of the shortest path <cit.>. Unless otherwise specified, each pair is assigned the same importance value κ_i = 1/|P|, and we empirically set the weights w^L_local and w^N_local to 1, w^L_global and w^N_global to 5, and w^A_local to 10. These parameters and weights can be adjusted via the interface of our tool according to domain-specific design needs—a flexibility provided by our optimization-based design framework.
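The search loop itself is compact. Below is a minimal Python sketch of the annealing step, assuming the candidate sets have already been computed (e.g., networkx.shortest_simple_paths enumerates loopless paths in order of increasing length, following Yen's algorithm); the schedule constants are illustrative, not the tool's defaults.

    import math
    import random

    def anneal(init_paths, k_shortest, cost_fn, T0=1.0, cooling=0.999,
               iters=20000, seed=0):
        """Simulated-annealing search over path assignments (sketch).

        k_shortest : dict pair_id -> list of candidate (k shortest) paths
        cost_fn    : callable evaluating C^P_all for an assignment
        """
        rng = random.Random(seed)
        cur, cur_cost, T = dict(init_paths), cost_fn(init_paths), T0
        ids = list(cur)
        move_sizes = range(1, len(ids) + 1)
        move_weights = [len(ids) - x + 1 for x in move_sizes]  # Pr_x ∝ |P|-x+1
        for _ in range(iters):
            prop = dict(cur)
            x = rng.choices(move_sizes, weights=move_weights, k=1)[0]
            for pid in rng.sample(ids, x):
                prop[pid] = rng.choice(k_shortest[pid])
            new_cost = cost_fn(prop)
            # Metropolis criterion: always accept improvements; otherwise
            # accept with probability exp((cur_cost - new_cost) / T).
            if (new_cost < cur_cost or
                    rng.random() < math.exp((cur_cost - new_cost) / T)):
                cur, cur_cost = prop, new_cost
            T *= cooling   # geometric cooling schedule
        return cur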
§ AGENT-BASED SIGN REFINEMENT

The wayfinding scheme optimization in the previous step produces a wayfinding scheme which comprises paths from the sources to the destinations. In this section, we discuss how our approach automatically places signs for each path to facilitate wayfinding.

§.§ Overview

[Figure: Example sign.]

Each node along a path corresponds to a decision point where a sign may be placed <cit.>. In our experiment design, a sign shows an arrow pointing to the next node and the destination's name or symbol. Two or more signs placed at the same node are combined into a single sign showing multiple pieces of wayfinding information. Figure <ref> shows an example sign placed at a street corner in the City scene.

A trivial yet unrealistic solution is to place a sign showing the direction to the destination at every node along a path, so that a pedestrian walking along the path is continually reassured that he is heading to his destination. However, this solution would involve placing many redundant signs occupying a lot of space, and is generally not adopted. Instead, our approach places signs at strategic locations according to human vision and navigation properties, which are evaluated via agent-based simulations. The sign placement process is performed as an optimization against a number of cost terms reflecting the quality of the wayfinding experience brought about by the signs. The optimization starts with the trivial solution of placing a sign at each node along a path, then iteratively alters the sign placement to optimize the costs.

One advantage of this approach, compared to the alternative of iteratively adding signs from scratch, is that its optimization process is much more tractable: the initial solution and each intermediate solution represent a feasible wayfinding solution, even though they may contain redundant signs and the sign distribution may not be ideal. As we observed in our experiments, this approach allows the optimizer to progress stably and conservatively to an effectively refined sign placement solution.

§.§ Representation

A sign placement solution refers to placing signs at certain nodes of the input layout. Given the path of each source-destination pair (computed in Section <ref>), a good sign placement solution guides each pedestrian to walk from the source to the destination along the path effectively. Note that there could exist multiple reasonable sign placement solutions. The goal of this step is to locate one such solution through an optimization.

In case a road connecting two adjacent nodes is long, we may want to place signs at some intermediate locations along the road to reassure the pedestrian about his walking direction. Therefore, for roads longer than a distance threshold d_r, our approach adds extra nodes between the two end nodes of the road in a pre-processing step, such that the distance between any two adjacent nodes is shorter than d_r. These extra nodes serve as additional potential locations for placing signs. d_r can be set empirically by the designer depending on how frequently a pedestrian should be reassured about his direction. For example, for a subway station, the designer can use a smaller d_r such that more signs will be generated along a long road to reassure pedestrians that they are walking towards a desired destination (e.g., a platform). For our illustrative example, City, we set d_r to 50 meters.

More specifically, given the graph G = {V, E} representing the input layout, we extend V to V′ to include the extra nodes added. A sign placement solution is represented by S = {(v_i, ϕ_i)}, where v_i ∈ V′ is the node at which sign ϕ_i is placed. ϕ_i contains the sign's attributes, such as its arrow direction and the name of the destination it refers to. Our optimization searches for a desirable sign placement solution S* by minimizing a total cost function C^S_all(S):

C^S_all(S) = w^N_sign C^N_sign + w^D_sign C^D_sign + w^F_sign C^F_sign,

where C^N_sign and C^D_sign are regularization costs, and C^F_sign is the agent-based simulation cost estimating the wayfinding failure induced by the sign placement solution S. w^N_sign, w^D_sign and w^F_sign are the weights of the cost terms, which are set to 1, 1 and 10 by default, respectively.

§.§ Sign Placement Cost Terms

Number of Signs.
§.§ Sign Placement Cost Terms

Number of Signs. We include a cost term to regularize the number of signs in the sign placement solution, penalizing redundant signs: C^N_sign(S) = N(S)/|V'|, where N(S) is the number of placed signs and |V'|, the total number of nodes (i.e., potential locations for placing signs), is a normalization constant.

Distribution of Signs. In real-world designs, signs are often evenly distributed along a path, serving to regularly reassure a pedestrian about his direction towards the destination. Accordingly, we include a cost term to regularize the distribution of signs: C^D_sign(S) = (1/|P|) ∑_{p ∈ P} σ(p)/L(p), where |P| is the number of source-destination pairs, σ(p) is the standard deviation of the distances between any two adjacent signs on path p, and L(p) is the length of path p.

Wayfinding Failure. The placed signs should effectively guide the pedestrians from the sources to the destinations. We include a cost term to penalize wayfinding failure: C^F_sign(S) = F(S) if F(S) ≤ μ, and C^F_sign(S) = +∞ otherwise, where F(S) is the percentage of agents who fail to reach their destinations under the current sign placement S. F(S) is obtained by performing an agent-based simulation with sign placement S. μ is a failure tolerance level specified by the designer, which is set to 20% by default.

§.§ Agent-based Evaluation

In each iteration of the optimization, we employ an agent-based simulation to evaluate the wayfinding experience under the current sign placement S, and thereby obtain the F(S) used for computing the wayfinding failure cost (a sketch is given at the end of this subsection).

Agent Model. Each agent mimics a pedestrian walking from a source to a destination. We model each agent with wayfinding behavior according to Montello and Sas <cit.>. The agent starts from the source. It can see any unoccluded sign within a visible distance d_v. Whenever it sees a sign pointing to its destination, it follows the sign to choose a direction to walk. If it arrives at an intersection but is unsure about which of the roads connected to that intersection to take, it randomly chooses a road with equal probability. To more realistically model the mistakes that humans make during navigation, each agent has a probability p_miss of missing a sign even within sight.

Simulation. For each source-destination pair, 100 agents are employed to walk from the source to the destination using the agent model described. At the end of the simulation, we count the number of agents that successfully reach their destinations, and hence compute F(S). A "success" is defined as follows: let d_b be the "baseline" walking distance from the source to the destination if no mistake is made (i.e., p_miss = 0) under full sign placement (i.e., a sign is placed at every node along a path). If an agent, given the chances of making mistakes and under the current sign placement S, can walk from the source to the destination by a distance no longer than λ d_b, the navigation is considered a success; it is counted as a failure otherwise. We use λ = 1.5 in our experiments.
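A minimal sketch of the agent-based evaluation is given below. It simplifies the visibility model (a sign is treated as visible only from the node it is placed at, so d_v does not appear explicitly) and assumes a hypothetical agent_graph interface with neighbors and edge_length methods; the sign objects, with destination and points_to fields, are likewise our own illustrative assumptions rather than the tool's data structures.

```python
import random

def walk(agent_graph, signs, src, dst, p_miss, max_dist):
    """One agent walks from src towards dst; True if it arrives within max_dist."""
    node, walked = src, 0.0
    while walked <= max_dist:
        if node == dst:
            return True
        guiding = [s for s in signs.get(node, []) if s.destination == dst]
        if guiding and random.random() >= p_miss:   # a sign is seen and followed
            nxt = guiding[0].points_to
        else:                                       # otherwise pick a road at random
            nxt = random.choice(agent_graph.neighbors(node))
        walked += agent_graph.edge_length(node, nxt)
        node = nxt
    return False

def wayfinding_failure(agent_graph, signs, pairs, baseline, p_miss=0.0,
                       lam=1.5, n_agents=100):
    """F(S): fraction of agents failing to arrive within lam * baseline distance."""
    fails = 0
    for src, dst in pairs:
        for _ in range(n_agents):
            if not walk(agent_graph, signs, src, dst, p_miss,
                        lam * baseline[(src, dst)]):
                fails += 1
    return fails / (n_agents * len(pairs))
```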
§.§ Sign Refinement by Optimization

Initialization. Our optimization is initialized with the full sign placement solution, i.e., a sign is placed at every node along the path from the source to the destination of each source-destination pair. Although this sign placement can lead the pedestrians to their destinations, it contains many redundant signs that could be removed without affecting the pedestrians' ability to find their way. We apply a stochastic, agent-based optimization to search for a reasonable sign placement solution.

Iterative Refinement. Our optimization proceeds iteratively. At each iteration, a move is randomly proposed to alter the sign placement solution, whose quality is evaluated using the total cost function C^S_all(S). The moves include:

* Adding 1 or 2 signs to 1 or 2 source-destination pairs.
* Removing 1 or 2 signs from 1 or 2 pairs.
* Moving a sign from one node to another node of a source-destination pair.

The proposed solution is accepted with an acceptance probability determined by the Metropolis criterion as described by Equation <ref>, using C^S_all(S) as the cost function. The optimization terminates if the absolute change in cost is less than 1% over 50 iterations.

Figure <ref> shows the sign placement over iterations for the illustrative example. In this example, the source-destination pairs include walking from the Bus Stop to each POI, and walking between every pair of POIs. Each iteration of the optimization takes about 0.01 seconds in our experiments; it takes about 1,000 iterations (about 10 seconds) to finish the sign placement optimization for this example.

§ EXPERIMENTS AND RESULTS

We implemented our approach as a plugin for the Unity 5 game engine using C#, which level designers can use to create a wayfinding scheme for a given layout. We ran our experiments on a Macintosh machine equipped with a 2.3 GHz Intel Core i7 processor and 8GB of RAM. Generating a wayfinding scheme for a layout similar to the illustrative example, City, takes about 40 seconds using our current implementation.

§.§ Different Layouts

We used our approach to generate wayfinding designs for different layouts: Amusement Park, Downtown and Penn Station. Figure <ref> shows the maps from which the layouts are extracted following the procedure in Section <ref>. Figure <ref> shows the wayfinding designs generated by our approach. We describe the details of each generation in the following. Please also refer to the supplementary material for details of the generated wayfinding schemes of the layouts, and for the results of two more layouts, City and Canyon, which demonstrate how our approach can be applied to generate wayfinding designs for 3D virtual environments and with robustness as a key consideration.

Amusement Park. We use the layout of an amusement park, Six Flags New England, as input (see Figure <ref>). The POIs, in this case, represent the popular spots the visitors would like to visit. The source-destination pairs involve all pairs of entrances and popular spots, and all pairs of popular spots. In addition to the popular spots, we expect street performances and stalls in the park, which might distract visitors from navigating to their destinations. To model such distractions, we set the miss chance p_miss in the agent simulation of the sign placement optimization step to a relatively high level of 0.2. We also assume that visitors strongly prefer to walk shorter and more direct paths to their destinations if possible; we therefore use larger values of 5 for the weights w^L_local and w^N_local of the local path length and local path node costs.

Figure <ref> shows the generated design. Our approach generates a path for each source-destination pair. It places road signs densely at each intersection along the paths to ensure the robustness of the wayfinding system.
The left-hand side of the layout shows a generated shortcut which is part of the paths from the popular spot at the lower left to the popular spots on the right. The shortcut allows visitors to walk shorter paths to their destinations, and is also more direct for the visitors as it passes through fewer intersections (2 instead of 3) compared to the alternative path above.

Downtown. This example uses the layout of Downtown Boston as input. The goal is to place road signs that guide drivers to an available parking lot nearby. The entrances refer to the major roads through which most cars enter the Downtown area. The POIs are the parking lots, which are placed at the same locations as the real parking lots found on Google Maps. We suppose that all the parking lots are run by the same company, hence there are signs showing the way from one parking lot to a nearby parking lot within 0.2 miles, such that if a parking lot is full the driver can follow the signs to a nearby parking lot. Accordingly, we define the source-destination pairs to connect each entrance to its nearest parking lot, as well as to connect each parking lot to its nearby parking lot. The latter type of pairs is given a relatively larger importance value (κ_p = 0.8 instead of the 0.5 given to other pairs), so that our system prefers shorter paths passing through fewer intersections for the paths that connect one parking lot to a nearby parking lot, helping drivers get to an alternative parking lot more easily in case a parking lot is full.

Figure <ref> shows the generated design. A path is generated to connect each entrance to its nearest parking lot. Short and direct paths are also generated to connect parking lots to nearby parking lots. While there are many possible paths that could be chosen, as the layout comprises a network of many streets, our approach chooses paths which are straight and consist of few turns, since the local path node cost penalizes the inclusion of intersections and the local path angle cost discourages orientation changes.

Penn Station. This example uses the lower level of Penn Station as input. In this example, the entrances refer to the gates and the stairs from the upper level. The POIs refer to the terminals. The source-destination pairs include every pair of entrance and terminal, and every pair of terminals (for modeling situations where a passenger wants to transfer from one terminal to another). As the station is expected to be crowded, the visibility d_v of the agents is set to a relatively low value of 10 meters to account for occlusion by the crowd, and the miss chance p_miss is set to a relatively high level of 0.2. Figure <ref> shows the generated wayfinding design. The road signs are placed densely and are also placed at non-intersection nodes, to counteract the higher miss chance by reassuring pedestrians about their directions.

§.§ Changing Agent Parameters

We further experimented with changing the parameters of the agent-based sign placement process using the City layout. In the default settings, the miss chance p_miss is set to 0%, the weight w^F_sign of the wayfinding failure cost term is set to 10 and the visibility distance d_v is set to 125 meters. Figure <ref>(a) shows the resulting sign placement generated with the default parameters.

We experimented with increasing the miss chance p_miss to 10%. Figure <ref>(b) shows the resulting sign placement.
Our system places more signs so as to increase the robustness of the wayfinding design against navigation mistakes. In addition, some signs are placed on roads not belonging to any path, to guide agents back to the correct paths. Next, we experimented with lowering the weight of the wayfinding failure cost term to 0.01. Figure <ref>(c) shows the resulting sign placement. Our system keeps fewer signs, because it is acceptable even if some agents make mistakes and do not reach the destination within the allowed walking distance. The pedestrians walk along some roads (shown in blue) not belonging to any path. This setting may be useful in situations where it is not critical for the agents to reach the destination, and where space is better preserved for other uses. For example, in a flea market, it may not be critical for the pedestrians to visit each stall, as they are expected to wander around in the market.

Finally, we experimented with increasing the visibility to 250 meters. Figure <ref>(d) shows the result. Our system keeps fewer signs because the pedestrians are capable of seeing signs at a farther distance. This setting is useful for modeling situations where the signs are big (such as those shown on billboards) and can be seen from far away.

§.§ Visualization

Destination Accessibility. Our approach also allows the designer to visualize the accessibility of a destination under the generated wayfinding design. This is a useful functionality that helps the designer create a wayfinding design that guides pedestrians from different locations to a destination as desired. Figure <ref>(a) depicts this functionality. The accessibility of a destination (the Post Office) is visualized as a heatmap. Agents in the blue region can travel to the Post Office successfully by following the wayfinding signs under the current wayfinding design, while those in the red region have a low chance of success.

To compute the accessibility heatmap with respect to a destination specified by the designer, our system samples points at regular intervals along all the edges of the input layout (whether the edges are part of the paths of the generated wayfinding design or not). Agents are employed to walk from each sample point to the destination, in a similar fashion as in the agent-based sign placement step (Section <ref>). The rates of success are used to set the heatmap values at those sample points; the heatmap values between two sample points are interpolated. A sketch of this procedure is given below.
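The heatmap computation can be sketched as follows, reusing the walk agent from the earlier sketch. The agent_graph methods (edges, edge_length, point_on_edge, shortest_distance) are hypothetical names for the required layout queries, and the sampling interval and agent count are assumed parameters, not values from the paper.

```python
import numpy as np

def accessibility_heatmap(agent_graph, signs, destination, interval=10.0,
                          lam=1.5, n_agents=50):
    """Success rate of reaching `destination` from points sampled on every edge."""
    heat = {}
    for u, v in agent_graph.edges():
        length = agent_graph.edge_length(u, v)
        n_samples = max(2, int(length // interval) + 1)
        for t in np.linspace(0.0, 1.0, n_samples):
            point = agent_graph.point_on_edge(u, v, t)         # sample location
            d_b = agent_graph.shortest_distance(point, destination)
            successes = sum(
                walk(agent_graph, signs, point, destination,
                     p_miss=0.0, max_dist=lam * d_b)
                for _ in range(n_agents))
            heat[point] = successes / n_agents                 # heatmap value
    return heat
```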
Note that a destination typically does not need to be accessible from every region, because enforcing such full accessibility would likely involve placing a lot of signs even in some "unimportant" regions. For example, it may not be important to place signs to guide pedestrians from a post office to a restaurant. By visualizing the accessibility to a destination using a heatmap, the designer can intuitively tell what regions are covered by the current wayfinding design and whether any improvement is needed.

Removing Blind Zones. If the designer wants to remove a "blind zone" (i.e., a region shown in red indicating low accessibility to the destination), he can easily do so by clicking on the red region via our user interface. Our system will automatically place signs which guide pedestrians to walk from the clicked point to the path leading to the destination. Agent-based evaluations are re-run at each sample point to update the heatmap accordingly, which takes about 1 second for the City example. Figure <ref>(b) shows an example of removing a blind zone.

§ EVALUATION

§.§ User Study

Conditions. We conducted a user study to evaluate the effectiveness of the wayfinding designs generated by our approach. Our user study was conducted in the City layout used as the illustrative example. Participants were asked to navigate from a starting point to a destination under 4 different wayfinding conditions:

* No sign.
* Mini-map. A mini-map that functions like the mini-map in a common first-person 3D video game is shown.
* Full signs. In this case, we only run the wayfinding scheme optimization step to generate the paths for the source-destination pairs. Signs are placed at every node along each path.
* Refined signs. In this case, we run the wayfinding scheme optimization step to generate the paths, and then the agent-based sign refinement step to refine the sign placement. Signs are placed strategically at some of the nodes along each path.

Figure <ref> shows two screenshots of the user study tests under the mini-map and refined signs conditions. There are 2 different scenarios. In the first scenario, the participant was asked to walk from the Bus Stop to the Restaurant. In the second scenario, the participant was asked to walk from the Bus Stop to the School. Each scenario was tested by 80 participants under the 4 different wayfinding conditions (i.e., 20 participants per condition).

Participants. In total, we recruited 160 participants through social networks. The participants are university students. All of them have experience with 3D video games and are familiar with the movement controls of common first-person-shooter games, which our user study program similarly adopts. Before each test, a description of the task and the movement controls is shown to the participant, and the participant is allowed to get familiar with the movement controls in a warm-up session.

Test Sessions. The goal of the participant in each test is to walk to the destination (Restaurant or School) as fast as he can. To make sure he is clear about the destination, a screenshot of the destination is shown to the participant before the user study begins. Our program records the path, the distance walked and the time taken by the participant. The test ends when the participant reaches the destination, or when the time taken exceeds the time limit, defined as three times the time needed to walk from the start to the destination without any stop, following the path generated by the wayfinding scheme optimization step. The latter case is considered a failure.

§.§ Results and Analysis

Path Taken. Figure <ref> shows the results of the user study. The paths taken by the participants are visualized in a heatmap. The roads with high usage are shown in red, and those with low usage are shown in blue. There are some interesting observations. Under the no sign condition, the participants wandered around and could barely reach the destination. Under the mini-map condition, the participants walked towards their destinations along similar directions. However, there are considerable variations among the paths taken, as can be seen from the color dispersion in the heatmaps. For example, in Scenario 2, nearly half of the participants took the bottom path while the other half took the upper path. Under the full signs and refined signs conditions, all participants walked to the destinations following the same path.

Distance Walked. Table <ref> shows the statistics of the distances walked by the participants under the different conditions.
For the no sign condition, only the data of the participants who reached their destinations within the time limit are used to calculate the statistics. For the other conditions, all participants reached the destinations and all data are used.

Under the no sign condition, only 55% and 25% of the participants reached their destinations in Scenarios 1 and 2, respectively. Those who did reach the destinations generally needed to walk a very long way, as shown by the large mean values.

Under the mini-map condition, all participants reached their destinations. In Scenario 1, the participants reached the destination Restaurant by walking a distance similar to that in the other conditions. However, the standard deviation (35.81 m) is higher than the standard deviations (11.45 m and 11.95 m) of the other conditions, showing larger variations in performance due to the different paths chosen, as shown in Figure <ref>. In Scenario 2, the relative difference in standard deviation is even more pronounced (55.27 m under the mini-map condition, versus 6.11 m and 7.38 m under the other conditions), due to the larger differences in the walking distances of the paths chosen. On average, the participants walked a shorter distance to reach the destination under the full signs or refined signs conditions (471.37 m and 476.32 m) than under the mini-map condition (503.32 m).

Under the full signs and refined signs conditions, all participants reached their destinations. The means and standard deviations of the walked distances are similar, showing that the refined sign placement is as effective as the full sign placement in guiding the participants to their destinations. However, the refined sign placement uses significantly fewer signs (3 signs under refined sign placement versus 8 signs under full sign placement in Scenario 1; and 3 signs versus 7 signs in Scenario 2). Please refer to our supplemental material for the user study results and a video showing example sessions.

§ SUMMARY

We verify in our experiments that our approach can be applied to automatically generate wayfinding designs for a variety of layouts, and that the designs can be used by human users to navigate to their destinations effectively in virtual worlds. Compared to the conventional approach of creating wayfinding designs manually, the novelty of our approach lies in formulating the problem as an optimization, which can be solved automatically and efficiently, hence overcoming the design challenge posed by the consideration of multiple paths and design criteria. Our optimization approach also allows the flexibility of considering additional constraints in wayfinding design, and the designer can trade off between different criteria by controlling their corresponding weights. We adopt an agent-based approach to automatically place signs at strategic locations, considering human perception and navigation properties such as eyesight and the possibility of making mistakes. The agent model makes it intuitive and flexible for designers to define agent properties and behaviors according to the specific requirements of the design project at hand; signs will be automatically placed according to the specified agent properties.

§.§ Limitations and Future Work

Our approach only focuses on placing textual and arrow signs to facilitate wayfinding.
While these are common wayfinding aids, in reality humans also make use of other wayfinding aids and cues, such as maps (e.g., "You-are-here" maps <cit.>), landmarks and the flow of people, to determine directions. In future extensions, it would be useful to consider all these alternative aids and cues in generating a wayfinding design.

Our agent-based simulation model only focuses on a few properties that are relevant to wayfinding. More realistic virtual humans comprising cognitive, perceptive, behavioral and kinematic modules, similar to the autonomous agents used for artificial life simulation <cit.>, could be used to replace our agents. The perceptual data obtained from simulations based on such agents could be used for more sophisticated wayfinding analysis to enhance the computationally-generated wayfinding design.

In our current approach, for simplicity we only consider one path for each source-destination pair. In fact, there could exist multiple paths (secondary paths) for each pair. This can be modeled by extending our framework to allow multiple paths for each pair, which would be considered jointly in the optimization.

In our approach, the source-destination pairs are manually specified rather than automatically generated, because our approach does not infer the layout context. An interesting future direction is to devise a data-driven approach to automatically identify the possible locations of interest given a layout, based on prior statistics of human flows, and hence automatically suggest the source-destination pairs to consider. For example, given a subway station, a data-driven approach may automatically suggest (Entrance, Ticket Machine) and (Ticket Machine, Gate) as likely source-destination pairs, based on real-world statistics of human flows in subway stations.

§.§ Acknowledgements

We thank Ana Aravena for narrating the demonstration video. This research is supported by the UMass Boston StartUp Grant P20150000029280 and by the Joseph P. Healey Research Grant Program provided by the Office of the Vice Provost for Research and Strategic Initiatives & Dean of Graduate Studies of UMass Boston. This research is also supported by the National Science Foundation under award number 1565978. We acknowledge NVIDIA Corporation for a graphics card donation.

Haikun Huang is a PhD student at the University of Massachusetts Boston. He received his BSc degree in computer science from the University of Massachusetts Boston in 2016. His research interests include computer graphics and visualization. He is a member of the IEEE.

Ni-Ching (Monica) Lin is a graduate student in the Institute of Electrical and Control Engineering, National Chiao Tung University, Taiwan. She received her BEng degree with honors in Electrical Engineering from Tamkang University, Taiwan. Her research interests include robotic vision, 3D visualization, and robot navigation.

Lorenzo Barrett is an undergraduate student in computer science at the University of Massachusetts Boston. His research interests include visualization and cyber security.

Darian Springer is an undergraduate student in computer science at the University of Massachusetts Boston. His research interests include computer graphics and visualization.
Hsueh-Cheng (Nick) Wang is an assistant professor in the Electrical and Computer Engineering and Institute of Electrical and Control Engineering at National Chiao Tung University, Taiwan. Dr. Wang and his research group focus on developing robotic systems to solve real-world problems in direct support of individuals.

Marc Pomplun is a professor of computer science at the University of Massachusetts at Boston and the Director of the Visual Attention Laboratory. His work focuses on analysing, modelling and simulating aspects of human vision.

Lap-Fai (Craig) Yu is an assistant professor at the University of Massachusetts Boston, where he directs the Graphics and Virtual Environment Laboratory. He received his BEng and MPhil degrees in computer science from the Hong Kong University of Science and Technology (HKUST) in 2007 and 2009 respectively, and his PhD degree in computer science from the University of California, Los Angeles, in 2013. He was a visiting scholar at Stanford University and a visiting scientist at the Massachusetts Institute of Technology. His research interests include computer graphics and computer vision. He served on the program committees of Pacific Graphics 2016 and 2017, and the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (i3D) 2016. He is a member of the IEEE.
http://arxiv.org/abs/1706.08891v1
{ "authors": [ "Haikun Huang", "Ni-Ching Lin", "Lorenzo Barrett", "Darian Springer", "Hsueh-Cheng Wang", "Marc Pomplun", "Lap-Fai Yu" ], "categories": [ "cs.GR" ], "primary_category": "cs.GR", "published": "20170627144008", "title": "Way to Go! Automatic Optimization of Wayfinding Design" }
^1College of Physics and Engineering, Qufu Normal University, Qufu 273165, China; ^2School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; [email protected] ^3GXU-NAOC Center for Astrophysics and Space Sciences, Department of Physics, Guangxi University, Nanning 530004; ^4Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Nanjing 210093, China.

We statistically study gamma-ray burst (GRB) optical flares from the Swift/UVOT catalog. We compile 119 optical flares, including 77 flares with redshift measurements. Some tight correlations among the time scales of optical flares are found. For example, the rise time is correlated with the decay time, and the duration time is correlated with the peak time of optical flares. These two tight correlations indicate that longer rise times are associated with longer decay times of optical flares, and also suggest that broader optical flares peak at later times, consistent with the corresponding correlations of X-ray flares. We also study the frequency distributions of optical flare parameters, including the duration time, rise time, decay time, peak time and waiting time. Similar power-law distributions are found for optical and X-ray flares. Our statistical results imply that GRB optical flares and X-ray flares may share a similar physical origin, and that both are possibly related to central engine activities.

§ INTRODUCTION

Gamma-ray bursts (GRBs) are the most luminous phenomena occurring at cosmological distances. It is well known that the prompt gamma-ray emission is produced by internal dissipation processes within the relativistic ejecta (Piran 2004; Mészáros 2006; Zhang 2007; Kumar & Zhang 2015), while the broadband afterglows are usually interpreted as the interaction of an ultra-relativistic ejecta with the ambient medium (Mészáros & Rees 1997; Sari et al. 1998). The successful launch of the Swift satellite in 2004 (Gehrels et al. 2004) has greatly improved our understanding of GRB physics. Thanks to its rapid response, Swift can quickly localize a GRB and begin to observe the afterglow with the X-Ray Telescope (XRT) and the Ultraviolet/Optical Telescope (UVOT) (Burrows et al. 2005a; Roming et al. 2005).

Some new phenomena have also been discovered in the Swift era, the most intriguing of which is the erratic flaring superposed on the "canonical" X-ray light curve, observed in the early X-ray afterglow phase (Burrows et al. 2005b; Zhang et al. 2006; Nousek et al. 2006). These erratic X-ray flares usually happen at ∼10^2-10^5 s after the prompt emission (Falcone et al. 2007; Chincarini et al. 2007, 2010; Swenson & Roming 2014), and are observed in both long and short GRBs (Romano et al. 2006; Falcone et al. 2006; Campana et al. 2006; Margutti et al. 2011). Since the flares appear to come from an emission mechanism distinct from the underlying afterglow emission, and are seen in both long and short GRBs, they are generally supposed to be powered by central engine activities. Therefore, X-ray flares and the prompt gamma-ray emission may have a similar physical origin (Burrows et al. 2005b; Fan & Wei 2005a; Falcone et al. 2006, 2007; Zhang et al. 2006; Nousek et al. 2006; Wu et al. 2006; Chincarini et al. 2007, 2010; Abdo et al. 2011; Troja et al. 2015; Yi et al. 2015; Mu et al. 2016a). Interestingly, flares also appear in the UV/optical band. Li et al.
(2012) selected a group of optical light curves with flares, and suggested that optical flares are also related to the erratic behavior of the central engine, similar to X-ray flares. Flares are observed in both the X-ray and the UV/optical bands, but the number of GRBs with optical flares is much smaller than that of GRBs with X-ray flares.

Flares are common astrophysical phenomena throughout the universe, and some statistical studies of X-ray flares from astrophysical systems have been carried out (Wang et al. 2015). Wang & Dai (2013) selected 83 GRB X-ray flares and 11595 solar X-ray flares, and performed a statistical comparison between them. They found that the energy, duration, and waiting-time distributions of GRB X-ray flares are similar to those of solar flares, which suggests a similar physical origin of the two kinds of flares. Some works using different methods and data obtain a similar result (Aschwanden 2011; Wang et al. 2015; Harko et al. 2015; Guidorzi et al. 2015). These results are supported by Yi et al. (2016), who studied all significant X-ray flares from GRBs observed by Swift until March 2015, and obtained 468 bright X-ray flares, including 200 flares with redshifts. They found that the energies, durations, peak fluxes and waiting times of both GRB X-ray flares and solar flares follow power-law distributions, albeit with different indices. Besides, they also studied the peak times, rise times and decay times of GRB X-ray flares, and found that all of them follow power-law distributions. These similar statistical distributions of solar flares and GRB X-ray flares suggest that both could be produced by magnetic reconnection, and could be explained by a fractal-diffusive, self-organized criticality model (Aschwanden 2011; Wang & Dai 2013; Harko et al. 2015; Dănilă et al. 2015; Yi et al. 2016). Interestingly, theoretical models have been proposed in which GRB X-ray flares are powered by magnetic reconnection events (Giannios 2006; Dai et al. 2006; Zhang & Yan 2011; Mu et al. 2016b).

In this paper, we investigate the optical flares observed by Swift/UVOT and study the distributions of optical flare parameters, such as duration times, rise times, decay times, peak times and waiting times. Since optical flares and X-ray flares may have a common physical origin, both may have similar parameter distributions. This paper is organized as follows. In Section 2, we present the selected GRB sample. In Section 3, we study some correlations between the parameters of optical flares. The distributions of the flare parameters are discussed in Section 4. A discussion is given in Section 5, and Section 6 presents the conclusions. A concordance cosmology with parameters H_0 = 71 km s^-1 Mpc^-1, Ω_M = 0.30, and Ω_Λ = 0.70 is adopted throughout this work.

§ DATA

We extensively searched for optical flares of GRBs. Since the fraction of GRBs with optical flares is much smaller than that with X-ray flares, we mainly focus on GRBs detected by Swift/UVOT. Swenson et al. (2013) carefully studied the second UVOT GRB afterglow catalog, which provides a complete data set of fitted UVOT light curves for both long and short GRBs observed by Swift from 2005 April through 2010 December (Roming et al. 2009). They found more than one hundred unique potential flares in 68 different optical light curves, and obtained the starting time, peak time and end time of the optical flares.
We consider the full sample containing 119 optical flares (see their Table 2), including 77 flares with redshifts. These optical flares usually have a complete structure, with remarkable rising and decaying phases. Figure 1 shows a sample of the optical flares. Most GRBs have a single optical flare, but some of them have several.

We carefully study the timescales of the optical flares, such as the waiting time, duration time, peak time, rise time and decay time. The time parameters of the optical flares are derived as follows (see also Yi et al. 2016). The rise time is T_rise = T_peak - T_start, the decay time T_decay = T_stop - T_peak, and the duration time T_Duration = T_stop - T_start, where T_start, T_peak and T_stop are the starting time, peak time and end time of a flare, respectively; they are all listed in Table 2 of Swenson et al. (2013). The waiting time of a flare is defined as T_waiting = T_start,i+1 - T_start,i, where T_start,i+1 is the observed start time of the (i+1)th flare and T_start,i is the observed start time of the ith flare. All optical flare properties are transformed to the source rest frame in the following analysis if redshift measurements are available. For the first flare appearing in an optical afterglow, the rest-frame waiting time is simply taken as T_start/(1 + z), where z is the redshift. A sketch of these derivations is given below. We next study the frequency distributions of the duration time, waiting time, rise time, decay time and peak time of the optical flares. Since optical flares and X-ray flares may have a common physical origin, we will compare the results for the optical flares with those of X-ray flares, and check whether both show similar parameter distributions.
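These derivations are simple enough to sketch in a few lines. The following is an illustrative Python helper under our assumptions about the input (per-burst arrays of start, peak and stop times, sorted by start time); it is not code released with the catalog.

```python
import numpy as np

def flare_timescales(t_start, t_peak, t_stop, z=None):
    """Time scales of the flares of one burst, ordered by start time (seconds).

    If a redshift z is given, all time scales are transformed to the rest frame.
    The waiting time of the first flare is taken as its start time itself.
    """
    t_start, t_peak, t_stop = (np.asarray(t, dtype=float)
                               for t in (t_start, t_peak, t_stop))
    scale = 1.0 + z if z is not None else 1.0
    return {
        "rise": (t_peak - t_start) / scale,
        "decay": (t_stop - t_peak) / scale,
        "duration": (t_stop - t_start) / scale,
        "waiting": np.diff(t_start, prepend=0.0) / scale,
    }
```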
§ PARAMETERS OF OPTICAL FLARES AND CORRELATIONS

Figure 2 shows the waiting time and peak time histogram distributions of the optical flares. The waiting times of the optical flares range from 10 s to 10^6 s after the GRB trigger, mainly from 10^2 s to 10^3 s, similar to the distribution of X-ray flares. The peak time T_peak of an optical flare is in the range of 10^2 s to 10^6 s, mainly 10^2 s to 10^3 s, i.e., at early times of the optical afterglow, consistent with the peak time distribution of X-ray flares. The smooth optical onset bumps also peak at early times of the afterglow, but optical flares and onset bumps are different from each other. According to the standard forward shock model, the onset of a GRB afterglow is characterized by a smooth bump in the early afterglow as the ultra-relativistic fireball is decelerated by the circumburst medium, and these features are well consistent with the forward shock model (Molinari et al. 2007; Liang et al. 2010, 2013; Yi et al. 2013).

We use simple linear regression analysis for the parameter fitting [https://en.wikipedia.org/wiki/Simple_linear_regression] (Chatterjee et al. 2000), which is a linear regression model with a single variable. In this paper, we only consider correlations between two parameters of optical flares, so the simple linear regression method is adequate. Interestingly, such correlations have been found for the X-ray flares of GRBs (Chincarini et al. 2007, 2010; Yi et al. 2016), so similar correlations can be expected for the optical flares. If multivariate correlations were treated, a multiple regression method would be required.

Figure 3 presents the strong correlation between the rise and decay times of the optical flares. The rise time is tightly correlated with the decay time, with a slope index of 0.99. There is also a strong correlation between the duration time and the peak time, with a slope index of 1.11. These two strong correlations indicate that longer rise times are associated with longer decay times of optical flares, and also suggest that broader optical flares peak at later times. The two correlations of the optical flares are in good agreement with the corresponding correlations of X-ray flares, as can be seen from Figure 3 of Yi et al. (2016). The best-fitting results of the four correlations are shown in Table 1. These tight correlations suggest that the structures of the optical and X-ray flares are similar, indicating a similar physical origin. Besides, Figure 3 also exhibits the correlations between the waiting time and the other parameters of the optical flares. The waiting time is correlated with both the peak time and the duration time of the optical flares, indicating that a flare with a longer waiting time tends to peak at a later time with a longer duration; this is also consistent with the correlations of X-ray flares.

In the fitting, the parameter errors are not considered, because no parameter errors are reported for the optical flares in the second UVOT GRB catalog (Swenson et al. 2013). In order to test whether the fitting results are biased by parameter errors, we assume errors that randomly vary between 10% and 20% of the original values. We take the T_waiting-T_peak correlation as an example: after considering the parameter errors, we find that the best-fitted power-law index is 0.90 using the method proposed by Kelly (2007), consistent with the result in Table 1.

However, the start and end times of flares are affected by the observational temporal gaps in the Swift light curves, so the time scales of flares, such as the rise time, decay time and duration time, may also be affected. In order to test this observational bias for the four correlations, we perform 10^4 simulations for each correlation. In each simulation, the time scales are randomly perturbed within 0-10% of their original values, and we then refit the correlations. The best-fitting results are shown in Figure 4. From this figure, we can see that the fitting results from the simulations are slightly different from the values derived from the observational data; they are consistent with each other at the 1σ confidence level. A sketch of this perturbation test is given below.
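The following is a minimal sketch of the gap-perturbation test. The perturbation is drawn uniformly within ±10% of each time scale, which is one reading of the "0-10% range" above, and an unweighted least-squares fit in log-log space stands in for the simple linear regression; the inputs are assumed to be numpy arrays of the two time scales.

```python
import numpy as np

rng = np.random.default_rng(2017)

def fit_loglog(x, y):
    """Slope and intercept of the least-squares line log y = a + b log x."""
    b, a = np.polyfit(np.log10(x), np.log10(y), 1)
    return a, b

def gap_perturbation_test(x, y, n_sim=10_000, frac=0.10):
    """Refit the correlation after perturbing both time scales within +/- frac."""
    slopes = np.empty(n_sim)
    for i in range(n_sim):
        xp = x * (1.0 + rng.uniform(-frac, frac, size=x.size))
        yp = y * (1.0 + rng.uniform(-frac, frac, size=y.size))
        _, slopes[i] = fit_loglog(xp, yp)
    return slopes.mean(), slopes.std()   # compare with the unperturbed slope
```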
Some other instrumental and observational biases, which tend to disfavor flares with short durations at late times or smooth flares with long durations, can affect the observed correlations, because the UVOT collects data in event mode during the first ∼1000 s, while later observations are performed in image mode. The former mode has full temporal resolution; the latter integrates light over the whole exposure. For this reason, flares with a duration of 100-200 s cannot be easily detected at late times, because the observations average the emission over several hundred seconds. Furthermore, the identification of a flare also requires the identification of the underlying continuum; a smooth, longer-lasting flare could be more easily misclassified as continuum and therefore missed. Overall, the temporal correlations of optical flares may derive in part from instrumental and observational biases, especially for flares with short durations at late times, or smooth flares with long durations.

We take the T_Duration-T_peak correlation as an example. In order to test the two biases, we provide two groups of simulated data in Figure 5, i.e., optical flares with short durations at late times for the first bias (the red circles), and smooth flares with long durations for the second bias (the blue circles). For the first bias, we simulate 500 optical flares with 100 s < T_Duration < 300 s and 10^5 s < T_peak < 10^6 s. For the second bias, we simulate 500 optical flares with 5×10^4 s < T_peak < 10^6 s and 5×10^4 s < T_Duration < 10^6 s. The simulated red points of the first bias lie far away from the best-fitting line in Figure 5, implying that the correlation may be affected by the instrumental bias. For the second bias, if smooth, longer-lasting flares could be identified, they would marginally follow the T_Duration-T_peak correlation, because their peak times and durations are almost of the same order of magnitude.

§ THE FREQUENCY DISTRIBUTIONS OF FLARE PARAMETERS

In this section, we use the maximum likelihood estimation method (Bauke 2007) to fit the frequency distributions of the optical flare parameters, i.e., T_waiting, T_Duration, T_peak, T_rise and T_decay; a sketch of such a fit is given at the end of this section. We investigate the differential distributions of the parameters of the optical flares. The differential distribution is chosen as a power-law form ρ(x) = β x^{-α_x}, where α_x is the power-law index. The occurrence rate of flares in each bin can be calculated from ρ(x) = N/Δx, where N and Δx are the number of flares in the bin and the width of the bin, respectively. We studied the frequency distributions of X-ray flares in Yi et al. (2016); here we focus on the optical flares observed by Swift. There are 119 GRB optical flares in our sample, and the 77 of them that have redshifts constitute a sub-sample. For the total sample, we divide each flare parameter x into 11 bins. Figure 6 shows the differential distributions of the optical flare parameters for the total sample. The points are the observed data with 1σ errors, taken as Poisson errors. The red curves are the best fits derived with the maximum likelihood estimation method. The best-fitting parameters α_x for the distributions of the waiting time, duration time, peak time, rise time and decay time are 1.24±0.08, 1.23±0.07, 1.28±0.09, 1.31±0.10 and 1.21±0.07, respectively. The sub-sample is treated in a similar way after the redshift correction, and the best-fitting results are shown in Figure 7: the parameters α_x for the distributions of the waiting time, duration time, peak time, rise time and decay time are 1.30±0.11, 1.29±0.09, 1.29±0.10, 1.27±0.10 and 1.28±0.11, respectively. Figures 6 and 7 show that the differential distributions of the flare parameters are well described by power-law functions. Since both optical flares and X-ray flares show similar statistical distributions, we suppose that optical flares and X-ray flares have a common physical origin, which implies that both may be powered by central engine activities.
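A sketch of such a power-law fit is shown below. The closed-form estimator is the standard continuous-data maximum likelihood solution for a power law, used here in the spirit of the maximum likelihood approach of Bauke (2007); the logarithmic binning of the differential distribution is our assumption, since the flare parameters span several decades.

```python
import numpy as np

def powerlaw_index_mle(x, x_min=None):
    """MLE of alpha for rho(x) ~ x**(-alpha), x >= x_min (continuous data)."""
    x = np.asarray(x, dtype=float)
    x_min = x.min() if x_min is None else x_min
    x = x[x >= x_min]
    alpha = 1.0 + x.size / np.sum(np.log(x / x_min))
    sigma = (alpha - 1.0) / np.sqrt(x.size)        # asymptotic 1-sigma error
    return alpha, sigma

def differential_distribution(x, n_bins=11):
    """Occurrence rate rho = N / dx per bin, with Poisson (sqrt(N)) errors."""
    x = np.asarray(x, dtype=float)
    edges = np.logspace(np.log10(x.min()), np.log10(x.max()), n_bins + 1)
    counts, _ = np.histogram(x, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
    return centers, counts / widths, np.sqrt(counts) / widths
```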
§ DISCUSSION

X-ray flares are among the most common phenomena in GRB X-ray afterglows: according to statistical studies, more than one-third of Swift GRBs show remarkable X-ray flares. However, the number of flares detected in the UV/optical is far smaller than in the X-ray, and not all optical flares correspond to X-ray flares. Swenson et al. (2013) applied the Bayesian Information Criterion to analyze the residuals of the fitted UV/optical light curves, and identified 119 unique flaring periods. In this paper, we study the properties of these optical flares and compare them with X-ray flares. We checked all the optical light curves of Swenson et al. (2013) against the X-ray afterglows. We find that most GRBs in this sample have notable flares simultaneously observed in the X-ray band, but there are still about a dozen GRBs with no distinct flare activity in the X-ray band.

The temporal behaviors of flares are different from those of the underlying afterglow emission; however, they are well consistent with those of the prompt gamma-ray emission. Therefore, X-ray and optical flares are supposed to be produced by internal emission powered by the central engine. A comparison of the afterglow observations provides evidence suggesting that the optical and X-ray flares originate from similar physical processes (Swenson et al. 2010; Li et al. 2012). Most optical flares happen at early times after the prompt emission. However, some flares occur at very late times in both the X-ray and the UV/optical bands, such as in GRB 070318 and GRB 090926A. The flares of these two bursts are not only observed at early times, but also match well in both bands at late times greater than 10^5 s. Therefore, the physical origin of optical and X-ray flares may be similar, with both related to the central engine. However, as discussed above, the presence of a flare in one band but not in the other is often seen. We suppose the primary reason is the lower significance of most flares in the lower-energy bands: while X-ray flares are often easily identified by visual inspection of the light curves, potential optical flares are more often overlooked or dismissed as noise (Swenson et al. 2010; Li et al. 2012; Swenson & Roming 2013). Whether X-ray and optical flares have the same origin remains an important open question; many more observational data are required to answer it.

Interestingly, other bumps also occur in optical afterglows, but they are different from optical flares. Generally, the onset of a GRB afterglow is seen as a smooth bump in the early optical light curve as the fireball shell is decelerated by the circumburst medium. Liang et al. (2010) extensively searched for the afterglow onset bump feature in early afterglow light curves, and 20 optical onset bumps were identified. These optical afterglows have smooth bumps, with rising indices of 1-2 for most GRBs and decay indices of 0.44-1.77. These afterglow onset features are well consistent with the external forward shock (FS) model. Another, sharper optical bump is produced by reverse shock (RS) emission. However, RS emission rarely appears in optical afterglows; at present, only a small fraction of GRBs show RS emission in their optical afterglows. According to the RS model, the theoretical rising index can be as steep as 5 in the thin-shell case for a constant interstellar medium (Kobayashi 2000; Yi et al. 2013). One interesting case is GRB 041219A, which shows three significant peaks with power-law rises and decays in the optical-IR band. The first optical peak, tracking the gamma-ray light curve during the prompt emission, can be understood as emission from internal shocks, while the remaining two peaks are produced by the RS and FS components, respectively (Blake et al. 2005; Fan et al. 2005b). Another similar burst is GRB 110205A (Gendre et al. 2012).
Therefore, the two optical bumps (the FS and RS components) are attributed to external shock emission, while the optical flares are related to internal shocks driven by the central engine. Such bumps are not included in our analysis.

§ CONCLUSIONS

In this paper, we have compiled 119 optical flares of GRBs from the Swift/UVOT catalog of Swenson et al. (2013), covering bursts observed until December 2010, including 77 flares with redshifts. We studied the parameters of the optical flares, such as the waiting time, duration time, rise time and decay time. We found that the waiting times of the optical flares range from 10 s to 10^6 s after the GRB trigger, and that the peak time of an optical flare is in the range of 10^2 s to 10^6 s. We also found some tight correlations between these time scales. Generally, these tight correlations suggest that longer rise times are associated with longer decay times, and that broader optical flares peak at later times. These properties are consistent with the results for X-ray flares, and indicate that the structures of optical flares and X-ray flares are similar. However, these correlations may be affected by instrumental bias; e.g., flares with short durations at late times are hard to identify with the UVOT. We also studied the frequency distributions of the parameters of the optical flares. The best-fitting power-law distributions of the optical flare parameters are similar to those of X-ray flares. Our results indicate that GRB optical flares and X-ray flares may share a similar physical origin, with both related to central engine activities.

We thank the anonymous referee for useful comments and suggestions. We thank En-Wei Liang, Xue-Feng Wu and Jie-Shuang Wang for useful comments and help. This work is supported by the National Basic Research Program of China (973 Program, grant No. 2014CB845800), the National Natural Science Foundation of China (grants 11422325, 11373022, and 11573014), the Excellent Youth Foundation of Jiangsu Province (BK20140016), the China Postdoctoral Science Foundation (grant No. 2017M612233), and the Science and Technology Program of Qufu Normal University (xkj201614).

[Abdo et al.(2011)] Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011, ApJ, 734, L27
[Blake et al.(2005)] Blake, C. H., Bloom, J. S., Starr, D. L., et al. 2005, Nature, 435, 181
[Burrows et al.(2005b)] Burrows, D. N., Romano, P., Falcone, A., et al. 2005, Science, 309, 1833
[Burrows et al.(2005a)] Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, Space Sci. Rev., 120, 165
[Campana et al.(2006)] Campana, S., Tagliaferri, G., Lazzati, D., et al. 2006, A&A, 454, 113
[Chincarini et al.(2010)] Chincarini, G., Mao, J., Margutti, R., et al. 2010, MNRAS, 406, 2113
[Chincarini et al.(2007)] Chincarini, G., Moretti, A., Romano, P., et al. 2007, ApJ, 671, 1903
[Dănilă et al.(2015)] Dănilă, B., Harko, T., & Mocanu, G. 2015, MNRAS, 453, 2982
[Dai et al.(2006)] Dai, Z. G., Wang, X. Y., Wu, X. F., & Zhang, B. 2006, Science, 311, 1127
[Falcone et al.(2006)] Falcone, A. D., Burrows, D. N., Lazzati, D., et al. 2006, ApJ, 641, 1010
[Falcone et al.(2007)] Falcone, A. D., Morris, D., Racusin, J., et al. 2007, ApJ, 671, 1921
[Fan & Wei(2005a)] Fan, Y. Z., & Wei, D. M. 2005, MNRAS, 364, L42
[Fan et al.(2005b)] Fan, Y. Z., Zhang, B., & Wei, D. M.
2005, ApJ, 628, L25
[Gehrels et al.(2004)] Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005
[Gendre et al.(2012)] Gendre, B., Atteia, J. L., Boër, M., et al. 2012, ApJ, 748, 59
[Giannios(2006)] Giannios, D. 2006, A&A, 455, L5
[Guidorzi et al.(2015)] Guidorzi, C., Dichiara, S., Frontera, F., et al. 2015, ApJ, 801, 57
[Harko et al.(2015)] Harko, T., Mocanu, G., & Stroia, N. 2015, Ap&SS, 357, 84
[Kelly(2007)] Kelly, B. C. 2007, ApJ, 665, 1489
[Kobayashi(2000)] Kobayashi, S. 2000, ApJ, 545, 807
[Kumar & Zhang(2015)] Kumar, P., & Zhang, B. 2015, Phys. Rep., 561, 1
[Li et al.(2012)] Li, L., Liang, E.-W., Tang, Q.-W., et al. 2012, ApJ, 758, 27
[Liang et al.(2013)] Liang, E.-W., Li, L., Gao, H., et al. 2013, ApJ, 774, 13
[Liang et al.(2010)] Liang, E.-W., Yi, S.-X., Zhang, J., et al. 2010, ApJ, 725, 2209
[Mészáros(2006)] Mészáros, P. 2006, Reports on Progress in Physics, 69, 2259
[Mészáros & Rees(1997)] Mészáros, P., & Rees, M. J. 1997, ApJ, 476, 232
[Margutti et al.(2011)] Margutti, R., Chincarini, G., Granot, J., et al. 2011, MNRAS, 417, 2144
[Molinari et al.(2007)] Molinari, E., Vergani, S. D., Malesani, D., et al. 2007, A&A, 469, L13
[Mu et al.(2016b)] Mu, H.-J., Gu, W.-M., Hou, S.-J., et al. 2016, ApJ, 832, 161
[Mu et al.(2016a)] Mu, H.-J., Lin, D.-B., Xi, S.-Q., et al. 2016, ApJ, 831, 111
[Nousek et al.(2006)] Nousek, J. A., Kouveliotou, C., Grupe, D., et al. 2006, ApJ, 642, 389
[Piran(2004)] Piran, T. 2004, Reviews of Modern Physics, 76, 1143
[Romano et al.(2006)] Romano, P., Moretti, A., Banat, P. L., et al. 2006, A&A, 450, 59
[Roming et al.(2009)] Roming, P. W. A., Koch, T. S., Oates, S. R., et al. 2009, ApJ, 690, 163
[Roming et al.(2005)] Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95
[Sari et al.(1998)] Sari, R., Piran, T., & Narayan, R. 1998, ApJ, 497, L17
[Swenson et al.(2010)] Swenson, C. A., Maxham, A., Roming, P. W. A., et al. 2010, ApJ, 718, L14
[Swenson & Roming(2014)] Swenson, C. A., & Roming, P. W. A. 2014, ApJ, 788, 30
[Swenson et al.(2013)] Swenson, C. A., Roming, P. W. A., De Pasquale, M., & Oates, S. R. 2013, ApJ, 774, 2
[Wang & Dai(2013)] Wang, F. Y., & Dai, Z. G. 2013, Nature Physics, 9, 465
[Wang et al.(2015)] Wang, F. Y., Dai, Z. G., Yi, S. X., & Xi, S. Q. 2015, ApJS, 216, 8
[Wu et al.(2006)] Wu, X. F., Dai, Z. G., Wang, X. Y., et al. 2006, 36th COSPAR Scientific Assembly, 36, 731
[Yi et al.(2013)] Yi, S.-X., Wu, X.-F., & Dai, Z.-G. 2013, ApJ, 776, 120
[Yi et al.(2015)] Yi, S.-X., Wu, X.-F., Wang, F.-Y., & Dai, Z.-G. 2015, ApJ, 807, 92
[Yi et al.(2016)] Yi, S.-X., Xi, S.-Q., Yu, H., et al. 2016, ApJS, 224, 20
[Zhang(2007)] Zhang, B. 2007, ChJAA, 7, 1
[Zhang et al.(2006)] Zhang, B., Fan, Y. Z., Dyks, J., et al. 2006, ApJ, 642, 354
[Zhang & Yan(2011)] Zhang, B., & Yan, H. 2011, ApJ, 726, 90

Table 1: Results of the linear regression analysis for optical flares.
R is the Spearman correlation coefficient, P is the chance probability, and δ is the correlation dispersion.

Correlation | Expression | R | P | δ
T_decay(T_rise) | log T_decay = (0.17 ± 0.11) + (0.99 ± 0.05) × log T_rise | 0.87 | <10^-4 | 0.57
T_duration(T_peak) | log T_duration = (-1.00 ± 0.15) + (1.11 ± 0.05) × log T_peak | 0.91 | <10^-4 | 0.45
T_peak(T_waiting) | log T_peak = (0.47 ± 0.07) + (0.91 ± 0.03) × log T_waiting | 0.96 | <10^-4 | 0.26
T_duration(T_waiting) | log T_duration = (-0.56 ± 0.14) + (1.05 ± 0.05) × log T_waiting | 0.90 | <10^-4 | 0.48
http://arxiv.org/abs/1706.08716v1
{ "authors": [ "Shuang-Xi Yi", "Hai Yu", "F. Y. Wang", "Z. G. Dai" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170627082303", "title": "Statistical Distributions of Optical Flares from Gamma-Ray Bursts" }
http://arxiv.org/abs/1706.08559v1
{ "authors": [ "Sema Gunturkun", "Jack Jeffries", "Jeffrey Sun" ], "categories": [ "math.AC" ], "primary_category": "math.AC", "published": "20170626185331", "title": "Polarization of Neural Rings" }
[email protected] Institute of Applied Physics and Computational Mathematics, Fenghao East Road 2, Beijing 100094, P.R. China; CAEP Software Center for High Performance Numerical Simulation, Huayuan Road 6, Beijing 100088, P.R. China

Institute of Applied Physics and Computational Mathematics, Fenghao East Road 2, Beijing 100094, P.R. China; CAEP Software Center for High Performance Numerical Simulation, Huayuan Road 6, Beijing 100088, P.R. China

Laboratory of Computational Physics, Huayuan Road 6, Beijing 100088, P.R. China; Institute of Applied Physics and Computational Mathematics, Fenghao East Road 2, Beijing 100094, P.R. China; CAEP Software Center for High Performance Numerical Simulation, Huayuan Road 6, Beijing 100088, P.R. China

The fast Ewald methods are widely used to compute the point-charge electrostatic interactions in molecular simulations. The key step that introduces errors in the computation is the particle-mesh interpolation. In this work, the optimal interpolation basis is derived by minimizing the estimated error of the fast Ewald method. The basis can be either general or model specific, depending on whether or not the charge correlation has been taken into account. Using the TIP3P water as an example system, we demonstrate that the general optimal basis is always more accurate than the B-spline basis in the investigated parameter range, while the computational cost is at most 5% higher. In some cases, the optimal basis is found to be two orders of magnitude more accurate. The model specific optimal basis further improves the accuracy of the general optimal basis, but requires more computational effort in the optimization, and may not be transferable to systems with different charge correlations. Therefore, the choice between the general and model specific optimal bases is a trade-off between generality and accuracy.

The optimal particle-mesh interpolation basis

Xingyu Gao

December 30, 2023

§ INTRODUCTION

The computation of the electrostatic interaction is an important and non-trivial task in molecular simulations. The difficulty lies in the slow decay of the Coulomb interaction with respect to the particle distance; thus the cut-off method, which ignores the particle interactions beyond a certain range, usually leads to unphysical artifacts <cit.>. This problem is solved by the Ewald summation <cit.>, which splits the electrostatic interaction into a short-ranged particle-particle interaction that is computed by the cut-off method and an interaction of smeared charges that is computed by solving the Poisson equation in the reciprocal space. The optimal computational expense of the Ewald summation grows in proportion to the three-halves power of the number of particles, and is unaffordable for systems larger than several hundred particles <cit.>.

Instead of the Ewald summation, the fast Ewald methods are widely used nowadays and implemented in molecular simulation packages <cit.>. Some examples are the smooth particle mesh Ewald (SPME) method <cit.>, the particle-particle particle-mesh (PPPM) method <cit.> and the nonequispaced fast Fourier transform based method <cit.>. These methods reduce the computational expense to O(N log N) (N being the number of particles) by accelerating the solution of the Poisson equation with the fast Fourier transform (FFT). Although the fast Ewald methods are substantially faster than the Ewald summation, their accuracy is inferior.
The only step that introduces errors is the particle-mesh interpolation, which interpolates the particle charges onto a uniform mesh, and interpolates the solution of the Poisson equation represented on the mesh back to the particles. Therefore, the quality of the interpolation basis plays an important role in the accuracy of the fast Ewald methods <cit.>. All the mentioned fast Ewald methods use the cardinal B-spline basis for the particle-mesh interpolation, which was proved to be superior to the Lagrangian interpolation basis <cit.>. Very recently, Nestler <cit.> and Gao et al. <cit.> showed that the Bessel and Kaiser-Bessel bases are more accurate than the B-spline basis in certain ranges of the working parameter space, which is spanned by the splitting parameter (how the two parts of the Ewald summation are split), the mesh spacing and the truncation radius of the interpolation basis. These observations indicate that the B-spline basis, the "golden standard" of particle-mesh interpolation, can be improved, at least in part of the parameter space.

In this work, the optimal particle-mesh interpolation basis, in the sense of minimizing the estimated error of the fast Ewald method, is proposed. In our approach, the optimal basis is discretized by cubic Hermite splines, and the values and derivatives of the basis at the discretization nodes are adjusted by solving an unconstrained optimization problem. We prove that, as long as the system size is large enough, the optimal interpolation basis is system independent, and is determined by a characteristic number defined as the product of the splitting parameter and the mesh spacing. We numerically investigate the accuracy of the optimal interpolation basis in a TIP3P water system, and demonstrate that the optimal basis always outperforms the B-spline and the Kaiser-Bessel bases in the investigated parameter range. In some cases, the optimal basis is more than two orders of magnitude more accurate than the B-spline and the Kaiser-Bessel bases. We also show that the time-to-solution of the optimal basis is marginally longer than that of the B-spline basis, and no longer than that of the Kaiser-Bessel basis. We report that the accuracy of the optimal basis is further improved by taking into account the charge correlations during the basis optimization. However, the optimal basis derived in this way is model specific, and cannot be transferred to another parameter pair even when the product of the splitting parameter and the mesh spacing is the same. This implies that, when simulating systems with different charge correlations, the basis should be re-optimized. Therefore, whether or not to consider the charge correlation in the basis optimization is a trade-off between generality and accuracy.

The manuscript is organized as follows: In Sec. <ref>, the fast Ewald method is introduced briefly. The optimal interpolation basis is proposed in Sec. <ref>, and the generality of the optimal basis is discussed in Sec. <ref>. In Sec. <ref>, the accuracy of the optimal basis is investigated in the TIP3P water system as an example, and the advantage over the B-spline and the Kaiser-Bessel bases is demonstrated. In Sec. <ref>, we show that the accuracy of the optimal basis is further improved, at the cost of generality, by taking into account the charge correlation in the system. The work is concluded in Sec. <ref>.

§ THE FAST EWALD METHODS

We consider N point charges, denoted by {q_1, ⋯, q_N}, in a unit cell with periodic boundary conditions.
The positions of the charges are denoted by {r_1, ⋯, r_N}, respectively. The Coulomb interaction of the unit cell is given by

E = 1/2 ∑^∗_n ∑_i,j=1^N q_i q_j / |r_ij + n|,

where r_ij = r_i − r_j, and |r_ij + n| spans the distances between charge i and all periodic images of charge j, because n = n_1 a_1 + n_2 a_2 + n_3 a_3 with (n_1, n_2, n_3) ∈ ℤ^3 and (a_1, a_2, a_3) being the unit cell vectors. The "∗" over the outer summation means that, when n = 0, the i = j terms are skipped in the inner summation. The prefactor 1/(4πϵ_0) is omitted for simplicity. The origin of the prefactor 1/2 is explained in Ref. <cit.>. The Ewald summation splits the Coulomb interaction into the direct, reciprocal and correction contributions, i.e. E = E_dir + E_rec + E_corr, with

E_dir = 1/2 ∑^∗_n ∑_i,j=1^N q_i q_j erfc(β|r_ij + n|)/|r_ij + n|,
E_rec = 1/(2πV) ∑_m≠0 [exp(−π^2 m^2/β^2)/m^2] S(m) S(−m),
E_corr = −β/√(π) ∑_i=1^N q_i^2.

We omit the surface energy term because the spherical summation order and the metallic boundary condition are assumed for the system <cit.>. In the Ewald summation, β > 0 is the splitting parameter that controls the convergence speed of the direct and the reciprocal parts. In the reciprocal energy (<ref>), m = m_1 a_1^∗ + m_2 a_2^∗ + m_3 a_3^∗ with (m_1, m_2, m_3) ∈ ℤ^3 and (a^∗_1, a^∗_2, a^∗_3) being the reciprocal cell vectors defined by a_α · a_γ^∗ = δ_αγ, where α, γ = 1, 2, 3. V = (a_1 × a_2) · a_3 is the volume of the unit cell. S(m) = ∑_j=1^N q_j e^{2πi m·r_j} is the structure factor, where the "i" in the exponent should be understood as the imaginary unit (not a charge index). The magnitude of the structure factor is bounded above by ∑_j |q_j|.

In the direct energy part (<ref>), the complementary error function erfc converges exponentially fast to zero with increasing charge distance |r_ij + n|; therefore, it can be cut off, and the direct energy is computed at a cost of O(N) by using the standard cell division and neighbor list algorithms <cit.>.
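To make the splitting concrete, a minimal Python sketch follows (our illustration, not code from a published package): it evaluates the direct and correction energies for a toy cubic box of side L, using the minimum-image convention in place of the full lattice sum, which is adequate when the cut-off radius rcut is smaller than L/2.

import numpy as np
from scipy.special import erfc

def direct_energy(q, r, L, beta, rcut):
    # E_dir = 1/2 sum_{i != j} q_i q_j erfc(beta |r_ij|) / |r_ij|,
    # restricted to the minimum image and truncated at rcut (< L/2).
    E = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            d = r[i] - r[j]
            d -= L * np.round(d / L)      # minimum-image convention
            dist = np.linalg.norm(d)
            if dist < rcut:
                E += q[i] * q[j] * erfc(beta * dist) / dist
    return E

def correction_energy(q, beta):
    # Self-interaction term E_corr = -beta/sqrt(pi) * sum_i q_i^2.
    return -beta / np.sqrt(np.pi) * np.sum(np.asarray(q) ** 2)

# toy usage: two opposite unit charges in a box of side 3 nm
q = np.array([1.0, -1.0])
r = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
print(direct_energy(q, r, L=3.0, beta=3.0, rcut=1.2) + correction_energy(q, beta=3.0))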
The summand of the reciprocal energy (<ref>) decays exponentially fast as |m| increases; therefore, the infinite summation can be approximated by a finite one, where m_α ranges from −K_α/2 to K_α/2 − 1 with K_α being the number of terms summed in direction α. A naive choice of K_α that preserves the accuracy satisfies K_1 K_2 K_3 ∝ N, so the computational complexity of the reciprocal energy is O(N^2). The fast Ewald methods interpolate the point charges onto a K_1 × K_2 × K_3 uniform mesh, then accelerate the computation of the structure factor, which is a discretized Fourier transform of the charge distribution, by the fast Fourier transform (FFT). Taking the SPME method as an example, the interpolation of a single particle charge contribution to the mesh (in direction α) reads <cit.>

q e^{2πi m_α u_α/K_α} ≈ 1/(K_α φ̂(m_α)) ∑_l∈I_K q φ(u_α − l) e^{2πi m_α l/K_α},

where u is the scaled coordinate defined by u_α = K_α r_α with r_α = a^∗_α · r, and I_K = {l ∈ ℤ : −K/2 ≤ l < K/2}. φ is the interpolation basis, which is usually assumed to be truncated with radius C, meaning that the value of φ outside the range [−C, C] is assumed to be 0. The Fourier transform of φ on I_K is denoted by φ̂.

By using the approximation (<ref>), the reciprocal energy is proved to be <cit.>

E_rec ≈ ∑_l_1,l_2,l_3 Q(l_1,l_2,l_3) [Q ∗ (F B^2)^∨](l_1,l_2,l_3),

where "∗" denotes the convolution, "∨" denotes the inverse discrete Fourier transform, and

F(m) = 1/(2πV) × exp(−π^2 m^2/β^2)/m^2 for |m| ≠ 0, and F(m) = 0 for |m| = 0,
B(m) = ∏_α 1/φ̂(m_α),
P_r(l_1,l_2,l_3) = ∏_α φ(u_α − l_α),
Q(l_1,l_2,l_3) = ∑_j q_j P_{r_j}(l_1,l_2,l_3).

Q(l_1,l_2,l_3) is the interpolated charge distribution on the mesh. The interpolation of a single particle to the mesh, viz. the computation of P_r(l_1,l_2,l_3), can be accomplished in O(1) operations due to the compact support of the interpolation basis φ; thus the computational cost of Q = ∑_j q_j P_{r_j} is O(N). By using the identity Q ∗ (F B^2)^∨ = [Q̂ × (F B^2)]^∨, the computation of the convolution in Eq. (<ref>) is converted into a forward discrete Fourier transform of Q, a multiplication between Q̂ and F B^2, and then a backward transform of Q̂ × (F B^2). The computational cost of the multiplication is O(N) and that of the fast Fourier transforms is O(N log N); thus the total computational cost of the reciprocal energy (<ref>) is O(N log N).

The reciprocal force of a charged particle can be computed in two ways. The first, known as ik-differentiation, takes the negative gradient of the reciprocal energy (<ref>) with respect to the particle coordinate r, then approximates the force by the particle-mesh interpolation. It leads to <cit.>

F^ik_rec,i ≈ q_i ∑_l_1,l_2,l_3 P_{r_i}(l_1,l_2,l_3) [Q ∗ (G B^2)^∨](l_1,l_2,l_3),

where

G(m) = −4π m F(m).

The second, known as analytical differentiation, takes the negative gradient of the approximated reciprocal energy (<ref>), and yields <cit.>

F^ad_rec,i ≈ −2 q_i ∑_l_1,l_2,l_3 ∇_{r_i} P_{r_i}(l_1,l_2,l_3) [Q ∗ (F B^2)^∨](l_1,l_2,l_3).

In the energy approximation (<ref>) and the force approximations (<ref>) and (<ref>), the only step that introduces errors is the particle-mesh interpolation (<ref>); thus the interpolation basis plays a crucial role in the accuracy of the fast Ewald methods. In the original work on the SPME and PPPM methods <cit.>, the B-spline basis was proposed for the interpolation. An n-th order B-spline basis is defined recursively:

φ_n(x) = (φ_{n−1} ∗ φ_1)(x),  φ_1(x) = χ_{[−1/2,1/2]}(x),

where χ_{[−1/2,1/2]}(x) is the characteristic function of the interval [−1/2,1/2]. The order-n B-spline basis has the compact support [−n/2, n/2], and is (n−2) times differentiable. The basis truncation is usually taken as C = n/2. The Kaiser-Bessel basis of truncation C is defined by

φ(x) = sinh(πb√(C^2 − x^2)) / (π√(C^2 − x^2)),

where b is the shape parameter, which can be determined by optimizing against the reciprocal force error <cit.>.
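For illustration, the two bases can be evaluated with the short Python sketch below (our addition; the B-spline uses the standard uniform recursion, unoptimized, and the Kaiser-Bessel expression follows the equation above, with b denoting the shape parameter):

import numpy as np

def bspline(n, x):
    # Centered cardinal B-spline of order n (support [-n/2, n/2]),
    # via the uniform B-spline M_k on [0, k] and a shift by n/2.
    def M(k, t):
        if k == 1:
            return np.where((0.0 <= t) & (t < 1.0), 1.0, 0.0)
        return (t * M(k - 1, t) + (k - t) * M(k - 1, t - 1.0)) / (k - 1)
    return M(n, np.asarray(x, dtype=float) + n / 2.0)

def kaiser_bessel(x, C, b):
    # sinh(pi*b*sqrt(C^2 - x^2)) / (pi*sqrt(C^2 - x^2)) for |x| <= C,
    # with the limit value b as |x| -> C, and 0 outside [-C, C].
    x = np.asarray(x, dtype=float)
    root = np.sqrt(np.clip(C ** 2 - x ** 2, 0.0, None))
    out = np.where(root > 1e-12,
                   np.sinh(np.pi * b * root) / (np.pi * np.maximum(root, 1e-12)),
                   b)
    return np.where(np.abs(x) <= C, out, 0.0)

# usage: compare a 4th-order B-spline with a Kaiser-Bessel of truncation C=2
xs = np.linspace(-2.5, 2.5, 11)
print(bspline(4, xs))
print(kaiser_bessel(xs, C=2.0, b=1.5))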
§ THE OPTIMAL INTERPOLATION BASIS

The quality of the interpolation basis can be investigated by evaluating the error introduced in the computation. In the context of molecular dynamics (MD) simulations, the error is usually defined as the root mean square (RMS) error of the reciprocal force computation, i.e.

ℰ = √(⟨|ΔF_rec|^2⟩),  ΔF_rec = F_rec − F^∗_rec,

where ⟨·⟩ denotes the ensemble average, and F_rec and F^∗_rec denote the computed and exact reciprocal forces, respectively. It has been shown that the RMS reciprocal error is composed of the homogeneity, the inhomogeneity and the correlation parts <cit.>:

|ℰ|^2 = |ℰ_homo|^2 + |ℰ_inhomo|^2 + ℰ_corr.

The homogeneity error ℰ_homo stems from the fluctuation of the error force ΔF_rec. The inhomogeneity error ℰ_inhomo originates from the inhomogeneous charge distribution, and has been shown to vanish when the system is locally neutral, which is the case in most realistic systems. The correlation error ℰ_corr contributes when the positions of the charges are correlated. For example, the partial charges in a classical point-charge water system are correlated via the covalent bonds, the hydrogen bonds and the van der Waals interactions.

An error estimate is an analytical expression of the RMS reciprocal force error in terms of the working parameters, including the splitting parameter, the mesh spacing and the basis truncation. If the system is locally neutral and the positions of the charges are uncorrelated, the inhomogeneity and the correlation errors vanish, and the errors of the ik- and analytical differentiation force schemes are estimated by <cit.>

|ℰ^ik|^2 = |ℰ^ik_homo|^2 ≈ 2 q^2 Q^2 ∑_m ∑_α,l≠0 𝒢^2_α,l(m),
|ℰ^ad|^2 = |ℰ^ad_homo|^2 ≈ q^2 Q^2 ∑_m ∑_α,l≠0 𝒢^2_α,l(m) + q^2 Q^2 ∑_m ∑_α,l≠0 (𝒢_α,l + ℱ_α,l)^2(m),

where Q^2 = ∑_i q_i^2, q^2 = Q^2/N, and we have the short-hand notations

𝒢_α,l(m) = G(m) Z_α,l(m),
ℱ_α,l(m) = −4π l K_α a_α^∗ F(m) Z_α,l(m),
Z_α,l(m) = φ̂(m_α + l K_α)/φ̂(m_α).

The RMS reciprocal error estimates (<ref>) and (<ref>) clearly depend on the interpolation basis. The optimal interpolation basis, in the sense of minimal numerical error, is determined by solving the unconstrained optimization problem

min_φ ℰ[φ],

where ℰ can be estimated by either (<ref>) or (<ref>) for ik- or analytical differentiation, respectively. It is reasonable to assume that the interpolation basis is an even function on [−C, C]; thus only the values of φ in the range [0, C] have to be determined. We uniformly discretize the range [0, C] by M nodes denoted by x_i = iΔ with Δ = C/M, and use the following ansatz to construct φ:

φ(x) = η_0(x) + ∑_i=1^{M−1} λ_i η_i(x) + ∑_i=1^{M−1} ν_i θ_i(x),

where

η_0(x) = H_00((x − x_0)/Δ),
η_i(x) = H_01((x − x_{i−1})/Δ) + H_00((x − x_i)/Δ),
θ_i(x) = H_11((x − x_{i−1})/Δ) + H_10((x − x_i)/Δ).

H_00, H_01, H_10 and H_11 are cubic Hermite splines (third order piecewise polynomials; details are provided in Appendix <ref>). By this construction, the interpolation basis φ(x) is first order differentiable and is normalized by φ(0) = 1. The basis smoothly vanishes at the truncation x = C, viz. φ(C) = 0 and φ'(C) = 0. The prefactors {λ_i} and {ν_i} are the values and derivatives of the basis at the discretization nodes, because it can be proved that λ_i = φ(x_i) and ν_i = φ'(x_i) (see Appendix <ref> for details). By inserting Eq. (<ref>) into the error estimates (<ref>) and (<ref>), the functionals of the interpolation basis are converted into functions of {λ_i} and {ν_i}. Thus the optimization problem (<ref>) is discretized as

min_{λ_i},{ν_i} ℰ({λ_i}, {ν_i}).

Since the error estimates (<ref>) and (<ref>) are positive definite and are provided in squared form, it is more convenient to solve the equivalent optimization problem

min_{λ_i},{ν_i} |ℰ({λ_i}, {ν_i})|^2.

This is a classic unconstrained optimization problem, which can be solved by well established algorithms such as the conjugate gradient method or the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method <cit.>. Our code uses the implementation of the BFGS method from Dlib <cit.>.
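The construction of φ from {λ_i} and {ν_i}, and the unconstrained minimization, can be sketched in Python as follows. This is an illustration under stated assumptions: we use SciPy's BFGS instead of Dlib, scale the derivative coefficients by Δ as in the usual cubic Hermite convention, and, to keep the example self-contained, replace the actual error estimates by a placeholder least-squares objective (a fit to a Gaussian target).

import numpy as np
from scipy.optimize import minimize

def hermite_basis(lam, nu, C, x):
    # Evaluate the cubic Hermite ansatz phi on [0, C] from node values lam
    # and node derivatives nu; phi(0)=1, phi'(0)=0, phi(C)=phi'(C)=0 built in.
    M = len(lam) + 1                 # number of intervals
    dx = C / M
    vals = np.concatenate(([1.0], lam, [0.0]))
    ders = np.concatenate(([0.0], nu, [0.0]))
    x = np.asarray(x, dtype=float)
    k = np.clip((x / dx).astype(int), 0, M - 1)
    u = x / dx - k
    h00 = (1 + 2 * u) * (1 - u) ** 2
    h01 = u ** 2 * (3 - 2 * u)
    h10 = u * (1 - u) ** 2
    h11 = u ** 2 * (u - 1)
    return (h00 * vals[k] + h01 * vals[k + 1]
            + dx * (h10 * ders[k] + h11 * ders[k + 1]))

def objective(params, C, M, xs, target):
    # Placeholder objective only: the real one is the error estimate.
    lam, nu = params[:M - 1], params[M - 1:]
    resid = hermite_basis(lam, nu, C, xs) - target
    return np.sum(resid ** 2)

C, M = 2.0, 8
xs = np.linspace(0.0, C, 200)
target = np.exp(-2.0 * xs ** 2)      # assumed toy target, not the estimate
res = minimize(objective, np.zeros(2 * (M - 1)),
               args=(C, M, xs, target), method="BFGS")
lam_opt, nu_opt = res.x[:M - 1], res.x[M - 1:]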
§ THE GENERALITY OF THE OPTIMAL BASIS

The error estimates (<ref>) and (<ref>) depend on the system; thus, in principle, the optimization problem (<ref>) should be solved for each specific system to obtain the optimal basis. In this section, we demonstrate that there exists a general optimal basis that minimizes the estimated error in systems with different amounts of charge per particle, different numbers of charges and different system sizes, as long as the system is large enough.

To begin with, it is more convenient to investigate the properties of the error estimates if the summations in (<ref>) and (<ref>) are converted to integrations, which are referred to as the continuous forms of the error estimates. To simplify the discussion, we assume that the simulation region is cuboid. We denote the mesh spacing in direction α by h_α = L_α/K_α, where L_α = |a_α|, and further assume that the mesh spacings are roughly the same in all directions, i.e. h ≈ h_1 ≈ h_2 ≈ h_3. The continuous form of the error estimate (<ref>) is (see Appendix <ref> for details)

|ℰ^ik_homo|^2 ≈ 4 q^4 ρ β/(βh)^3 ∫_{I^3} g^2(|μ|/(βh)) ∑_α,l≠0 2 z^2(μ_α, l) dμ,

where ρ = N/V is the number density of charged particles in the system. The integration region is defined by I^3 = [−1/2,1/2] × [−1/2,1/2] × [−1/2,1/2]. We denote μ = (μ_1,μ_2,μ_3), g(m) = e^{−π^2 m^2}/m, and z(μ_α, l) = φ̂(μ_α + l)/φ̂(μ_α) with φ̂(μ) = ∫_{[−C,C]} φ(x) e^{−2πiμx} dx. The continuous form of the error estimate (<ref>) is

|ℰ^ad_homo|^2 ≈ 4 q^4 ρ β/(βh)^3 ∫_{I^3} g^2(|μ|/(βh)) ∑_α,l≠0 [(1 + |μ + l e_α|^2/|μ|^2) z^2(μ_α, l)] dμ.

It is noted that the continuous forms (<ref>) and (<ref>) do not depend on the system size. As shown in Appendix <ref>, the standard error estimates (<ref>) and (<ref>) are discretizations of the continuous error estimates (<ref>) and (<ref>), respectively, with K_α being the number of discretization points in direction α. The difference between the standard and the continuous error estimates is the error of the discretization, which vanishes as K_α goes to infinity. It is noted that when taking the limit, the mesh spacing h is fixed; thus the limit implies that the system size L_α = h K_α goes to infinity. Therefore, when the system is large enough, the optimal bases that minimize the standard error estimates converge to the bases that minimize the corresponding continuous error estimates, which are system-size independent.

Before further investigating the properties of the optimal basis, we first show when a standard error estimate converges to its continuous form. The standard error estimate is close to the continuous form only when the variation of the integrand is resolved by enough discretization points. In order to resolve the variation of g^2(|μ|/(βh)), the characteristic size βh/(√2 π) should be discretized by enough points. If two discretization points are required for the size βh/(√2 π), then K_α ≥ 2√2 π/(βh), or equivalently L_α = h K_α ≥ 2√2 π/β. This indicates that the minimal converged system size is inversely proportional to the splitting parameter β. Taking the B-spline basis for instance, the convergence of the error estimate (<ref>) with respect to the system size is numerically investigated in Fig. <ref>. When β = 1.0 nm^-1, the system size should be larger than 4.5 nm to obtain a converged error estimate. When β increases to 1.5 nm^-1, the minimal system size is 3.0 nm. The minimal system size reduces to 2.3 nm when β = 2.0 nm^-1. Although these minimal sizes are lower than the rough estimate 2√2π/β ≈ 8.9/β, the inverse proportionality between the minimal size and the splitting parameter is confirmed.
It is also observed that the minimal system size does not depend on the basis truncation. In this work, if not stated otherwise, we always assume that the system size is large enough, so that the error estimates (<ref>) and (<ref>) converge to the continuous forms (<ref>) and (<ref>), respectively, and the optimal bases are also converged. The properties of an optimal basis can be analyzed by investigating the corresponding continuous error estimate. For a given basis truncation C, the optimal basis has the following properties:

* The optimal basis is independent of the amount of charge per particle.
* The optimal basis is independent of the number of charged particles.
* The optimal basis is independent of the size of the system.
* The optimal basis is completely determined by the characteristic number βh.

Property 1 holds because the average amount of charge q^2 = ∑_i q_i^2/N is a prefactor of the error estimates (<ref>) and (<ref>). Property 3 holds because the error estimates are independent of the system size. Property 2 holds because the number density ρ is a prefactor of the error estimates, and because Property 3 holds. When the truncation C is fixed, the integrands of (<ref>) and (<ref>) depend on the number βh only via the function g^2(|μ|/(βh)), so the optimal basis is completely determined by this number. Therefore, the optimal interpolation basis is system independent in the sense that it applies to systems with different amounts of charge per particle, different numbers of charges and different system sizes.

Due to the universality of the optimal interpolation basis, the solutions to the optimization problem (<ref>) are stored in a database. The number of discretization nodes of the basis is set to M = 40C. The mesh spacing is set to 0.117 nm. The number of mesh points is K_α = 64 to ensure the convergence of the error estimates. The basis is optimized for a β sequence starting from 1.0 nm^-1, increasing with a step of 0.2 nm^-1 up to 3.4 nm^-1, and then from 4.0 nm^-1, increasing with a step of 1.0 nm^-1 up to 7.0 nm^-1. This provides the optimal bases for a βh sequence ranging from 0.117 to 0.818. The optimal basis for a βh that is not in the sequence is constructed by linear interpolation of the neighboring optimal bases in the sequence.
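The lookup into such a database can be sketched as follows (our illustration; the table is assumed to map each stored value of βh to the arrays (λ, ν) of node values and derivatives, all discretized with the same number of nodes M = 40C so that the arrays align):

import numpy as np

def interp_basis(beta_h, table):
    # Linearly interpolate the stored optimal bases (lam, nu) between the
    # two nearest beta*h entries of a precomputed table {beta_h: (lam, nu)}.
    keys = np.array(sorted(table))
    if beta_h <= keys[0]:
        return table[keys[0]]
    if beta_h >= keys[-1]:
        return table[keys[-1]]
    hi = np.searchsorted(keys, beta_h)
    lo = hi - 1
    w = (beta_h - keys[lo]) / (keys[hi] - keys[lo])
    lam = (1 - w) * table[keys[lo]][0] + w * table[keys[hi]][0]
    nu = (1 - w) * table[keys[lo]][1] + w * table[keys[hi]][1]
    return lam, nu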
§ THE NUMERICAL RESULTS

In this section, we investigate the RMS reciprocal force error of the B-spline, the Kaiser-Bessel and the optimal bases in a TIP3P <cit.> water system containing 13824 molecules. Each water molecule is modeled by three point charges connected by covalent bonds. The oxygen atom has a partial charge of −0.834 e, while the hydrogen atom has a partial charge of 0.417 e. The O-H bond length is constrained to 0.09572 nm and the H-O-H angle is constrained to 104.52°. The simulation region is of size 7.48 nm × 7.48 nm × 7.48 nm, and is subject to periodic boundary conditions. The water configuration is taken from an equilibrated NPT simulation <cit.>. The computed reciprocal force is compared with a well converged Ewald summation, and the RMS reciprocal force error is computed by definition (<ref>). In Fig. <ref>, we report the RMS force errors of the B-spline (dashed line), the Kaiser-Bessel (dotted line) and the optimal (solid line) bases using the ik-differentiation force scheme. In Fig. <ref>, we report the RMS force errors of the B-spline (dashed line) and the optimal (solid line) bases using the analytical differentiation force scheme. In both figures, the error is plotted against the splitting parameter β.

The three plots, from top to bottom, present the results for basis truncations C = 2, 3 and 4, respectively. In each plot, the green, red and blue lines present the errors for the mesh spacings h = 0.234, 0.117 and 0.0584 nm, respectively. In all cases, the optimal basis is more accurate than the B-spline and the Kaiser-Bessel bases. In some cases, the optimal basis achieves two orders of magnitude higher accuracy, for example, the ik-differentiation with C = 4, β = 3.0 nm^-1 and h = 0.0584 nm. The advantage of the optimal basis is observed to be more significant for smaller splitting parameters and smaller mesh spacings. Taking the ik-differentiation with C = 3 and h = 0.117 nm for example, the optimal basis is 2.1, 9.7 and 35 times as accurate as the B-spline basis, and is 1.0, 1.9 and 34 times as accurate as the Kaiser-Bessel basis at β = 6.0, 4.0 and 2.0 nm^-1, respectively. Taking C = 3 and β = 3.0 nm^-1 for example, the optimal basis is 1.8, 24 and 35 times as accurate as the B-spline basis, and is 1.0, 6.9 and 165 times as accurate as the Kaiser-Bessel basis at mesh spacings h = 0.234, 0.117 and 0.0584 nm, respectively.

It is noted that the basis optimized at h = 0.117 nm is used for the simulations at h = 0.234 nm and h = 0.0584 nm if the products βh are the same. For example, the basis optimized for {h = 0.117 nm, β = 2.0 nm^-1} is used for the simulation at {h = 0.234 nm, β = 1.0 nm^-1} and at {h = 0.0584 nm, β = 4.0 nm^-1}. The bases were optimized for the βh range from 0.117 to 0.818; thus, in Figs. <ref> and <ref>, we do not have optimal bases for β > 3.5 nm^-1 at h = 0.234 nm, nor for β < 2.0 nm^-1 at h = 0.0584 nm.

The number of floating point operations attributed to the interpolation basis is the number of floating point operations of each basis evaluation times the number of evaluations of the basis. The numbers of evaluations are the same if two bases have the same truncation C. The optimal basis is a cubic piecewise polynomial with M = 40C pieces, while the B-spline basis is a piecewise polynomial with C pieces of order 2C − 1. For C = 2, each evaluation of the optimal basis needs as many floating point operations as the B-spline basis. For C ≥ 3, each evaluation of the optimal basis needs fewer floating point operations than the B-spline basis [The evaluation of the Kaiser-Bessel basis requires a square root and a hyperbolic sine function, which are usually much more expensive than polynomials. It should be noted that, in production codes, the Kaiser-Bessel basis is usually implemented by cubic interpolation of tabulated values; thus each evaluation needs as many floating point operations as the optimal basis.]. On the other hand, the number of polynomial pieces of the optimal basis is larger than for the B-spline basis; thus the cache miss rate for the polynomial coefficients is likely to be higher than for the B-spline basis. We investigate the time-to-solution of the particle-mesh interpolation for the TIP3P water system on a desktop computer with an Intel i7-3770 CPU and 32 GB memory. Only one core of the CPU was used in the tests. The in-house MD software MOASP was compiled by GCC 4.7 with double precision floating point. The force scheme was ik-differentiation, and the mesh spacing was 0.117 nm. For the truncation C = 2, the time-to-solutions of the optimal and the B-spline bases were 0.105 and 0.100 seconds, respectively. The optimal basis was only 5% slower than the B-spline basis.
For C = 4, the time-to-solutions were 0.331 and 0.329 seconds, respectively. In this case, the difference between the optimal and B-spline bases was less than 1%. In any case, the difference between the optimal and the B-spline bases in terms of time-to-solution is not significant.

§ OPTIMIZATION WITH THE ERROR ESTIMATE IN CORRELATED CHARGE SYSTEMS

In Sec. <ref>, the basis is optimized by minimizing the estimated error that only includes the homogeneity contribution. This estimate, however, may not precisely reflect the error in correlated charge systems. Taking the TIP3P water system for example, the covalently bonded atoms in one molecule have opposite charge signs and form a neutral charge group, and the error is usually reduced by this charge correlation <cit.>. The solution is to introduce the estimate of the correlation error to describe the reciprocal force error in a more precise way, i.e.

|ℰ|^2 = |ℰ_homo|^2 + ℰ_corr.

It has been shown, by using the TIP3P water system as an example, that introducing the bonded charge correlation in the error estimate leads to a substantial improvement of the quality of the estimate, and the improved estimate is good enough for the purpose of parameter tuning <cit.>. The correlation errors of the ik- and analytical differentiations are estimated by <cit.>

ℰ^ik_corr ≈ q^2 Q^2 ∑_m ∑_α,l≠0 [T^w(|m|) + T^w(|m + l K_α a_α^∗|)] 𝒢^2_α,l(m),
ℰ^ad_corr ≈ q^2 Q^2 ∑_m ∑_α,l≠0 T^w(|m|) (𝒢_α,l + ℱ_α,l)^2(m) + q^2 Q^2 ∑_m ∑_α,l≠0 T^w(|m + l K_α a_α^∗|) 𝒢^2_α,l(m),

respectively, where 𝒢_α,l and ℱ_α,l are the notations introduced by Eqs. (<ref>) and (<ref>). If the charge correlation due to the covalent bonds is considered, the term T^w is defined by

T^w(m) = 4 q_O q_H/(2 q_H^2 + q_O^2) · sin(2πm d_OH)/(2πm d_OH) + 2 q_H^2/(2 q_H^2 + q_O^2) · sin(2πm d_HH)/(2πm d_HH).

The notations q_O and q_H denote the partial charges of the oxygen and hydrogen atoms, respectively. d_OH denotes the length of the covalent bond between the oxygen and hydrogen atoms, while d_HH denotes the distance between the two hydrogen atoms. Taking the TIP3P water model for example, q_O = −0.834 e, q_H = 0.417 e, d_OH = 0.09572 nm and d_HH = 0.15139 nm.

The continuous form of the correlation error estimate for the ik-differentiation is given by

ℰ^ik_corr ≈ 4 q^4 ρ β/(βh)^3 ∫_{I^3} g^2(|μ|/(βh)) ∑_α,l≠0 {[T^w(|μ|/h) + T^w(|μ + l e_α|/h)] z^2(μ_α, l)} dμ.

In the case of analytical differentiation, the continuous estimate is

ℰ^ad_corr ≈ 4 q^4 ρ β/(βh)^3 ∫_{I^3} g^2(|μ|/(βh)) × ∑_α,l≠0 {[T^w(|μ|/h) (1 + |μ + l e_α|^2/|μ|^2) + T^w(|μ + l e_α|/h)] z^2(μ_α, l)} dμ.

If the system is large enough, the standard estimates (<ref>) and (<ref>) converge to the continuous estimates (<ref>) and (<ref>), respectively. Using similar arguments as in Sec. <ref>, the optimal basis taking into account the charge correlation has the following properties:

* The optimal basis is independent of the number of water molecules.
* The optimal basis is independent of the system size.

Unlike the optimal basis that only minimizes the homogeneity error, the optimal basis considering the charge correlation is model specific, because the value of the function T^w in the integrands of (<ref>) and (<ref>) depends on the amounts of the partial charges and on the geometry of the water molecule. In the estimates (<ref>) and (<ref>), the variation of the integrand is characterized not only by the dimensionless number βh, but also by the dimensionless numbers d_OH/h and d_HH/h that indicate how finely the water geometry is resolved by the mesh. Therefore, a basis optimized for a certain pair of the splitting parameter β and the mesh spacing h cannot be transferred to another pair, even one with the same product.
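A sketch of this correlation factor in Python (our addition, following the expression for T^w above; note that numpy.sinc(x) = sin(πx)/(πx), so sin(2π|m|d)/(2π|m|d) = sinc(2|m|d)):

import numpy as np

def T_w(m, qO=-0.834, qH=0.417, dOH=0.09572, dHH=0.15139):
    # Bonded-correlation factor T^w(|m|) for TIP3P water; distances in nm,
    # |m| in nm^-1; defaults are the TIP3P parameters quoted in the text.
    m = np.asarray(m, dtype=float)
    norm = 2.0 * qH ** 2 + qO ** 2
    return (4.0 * qO * qH / norm * np.sinc(2.0 * m * dOH)
            + 2.0 * qH ** 2 / norm * np.sinc(2.0 * m * dHH))

# at m = 0 the factor reduces to (4 qO qH + 2 qH^2)/(2 qH^2 + qO^2) ~ -1,
# reflecting the near-complete cancellation within a neutral molecule
print(T_w(0.0))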
Taking the TIP3P water system as an example, we numerically compare the basis that optimizes the homogeneity error and the basis that optimizes the estimated error including the charge correlation, i.e. Eq. (<ref>). The former basis is model independent, while the latter is model specific; thus we refer to them as the general optimal basis and the TIP3P optimal basis, respectively. In Fig. <ref>, we present the accuracy of the B-spline basis (dashed lines), the general optimal basis (dotted lines) and the TIP3P optimal basis (solid lines). The basis truncation was set to C = 2 in all cases. The force scheme is analytical differentiation. The green, red and blue lines represent the errors for the mesh spacings h = 0.234, 0.117 and 0.0584 nm, respectively. The general bases were optimized for different splitting parameters β at h = 0.117 nm, and were transferred to other mesh spacings when the products βh are the same. The TIP3P optimal bases were optimized for all the investigated combinations of splitting parameter and mesh spacing. It is observed that the TIP3P optimal basis is more accurate than the general optimal basis, and the advantage is more obvious for smaller mesh spacings. Taking β = 3.0 nm^-1 for example, the TIP3P optimal basis reduces the error by 13%, 34% and 39% compared with the general optimal basis at mesh spacings h = 0.234, 0.117 and 0.0584 nm, respectively.

It should be noted that the cost of the better accuracy is the loss of model generality. The TIP3P optimal basis is specifically optimal for the TIP3P water system or for systems dominated by TIP3P water. It is not guaranteed that the TIP3P optimal basis is also optimal for other water models or other molecular systems with different charge correlations. For these systems, if the model specific optimal basis is not available, the general optimal basis is recommended.

§ CONCLUSION

In this manuscript, the optimal particle-mesh interpolation basis that minimizes the estimated RMS force error is proposed for the fast Ewald methods. It is demonstrated that the optimal basis achieves significantly higher accuracy than the widely used B-spline basis for both the ik- and analytical differentiation force schemes, at the cost of a marginally (less than 5%) longer computational time. We prove that the optimal basis is system independent, and is determined by a characteristic number that is the product of the splitting parameter and the mesh spacing. Therefore, it is convenient to build a database of the general optimal bases and to integrate them into existing MD packages. By taking into account the charge correlation, the accuracy of the optimal basis is further improved. However, the cost of this improvement is the generality. We show that the optimal basis derived in this way is specific to a molecular model, and should be optimized for all relevant combinations of the splitting parameter and mesh spacing. Therefore, the choice between the general optimal basis and the model specific optimal basis is a trade-off between generality and accuracy.

§ ACKNOWLEDGMENT

H.W. is supported by the National Natural Science Foundation of China under Grants 11501039 and 91530322. X.G.
is supported by the National Natural Science Foundation of China under Grant 91430218. The authors gratefully acknowledge the financial support from the National Key Research and Development Program of China under Grants 2016YFB0201200 and 2016YFB0201203, and the Science Challenge Project No. JCKY2016212A502.

§ THE CUBIC HERMITE SPLINES

The cubic Hermite splines H_00, H_01, H_10 and H_11 are defined, on the interval u ∈ [0, 1], by

H_00(u) = (1 + 2u)(1 − u)^2,
H_01(u) = u^2 (3 − 2u),
H_10(u) = u (1 − u)^2,
H_11(u) = u^2 (u − 1).

It can easily be shown that

H_00(0) = 1, H_00(1) = H'_00(0) = H'_00(1) = 0,
H_01(1) = 1, H_01(0) = H'_01(0) = H'_01(1) = 0,
H'_10(0) = 1, H_10(0) = H_10(1) = H'_10(1) = 0,
H'_11(1) = 1, H_11(0) = H_11(1) = H'_11(0) = 0.

Therefore, the ansatz functions η_i(x) and θ_i(x) defined by Eqs. (<ref>)–(<ref>) are supported on the interval x ∈ [x_{i−1}, x_{i+1}], and have the following properties:

η_i(x_i) = 1, η_i(x_{i−1}) = η_i(x_{i+1}) = 0,
η'_i(x_{i−1}) = η'_i(x_i) = η'_i(x_{i+1}) = 0,
θ_i(x_{i−1}) = θ_i(x_i) = θ_i(x_{i+1}) = 0,
θ'_i(x_i) = 1, θ'_i(x_{i−1}) = θ'_i(x_{i+1}) = 0.

Therefore, the interpolation basis given by Eq. (<ref>) has the properties:

φ(0) = 1, φ(C) = 0, φ'(0) = φ'(C) = 0, φ(x_i) = λ_i, φ'(x_i) = ν_i.
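These endpoint identities are easily verified numerically; a short Python check (our addition) reads:

import numpy as np

# Cubic Hermite splines on [0, 1] and a check of their endpoint identities.
H00 = lambda u: (1 + 2 * u) * (1 - u) ** 2
H01 = lambda u: u ** 2 * (3 - 2 * u)
H10 = lambda u: u * (1 - u) ** 2
H11 = lambda u: u ** 2 * (u - 1)

def d(f, u, eps=1e-6):
    # central-difference numerical derivative
    return (f(u + eps) - f(u - eps)) / (2 * eps)

assert np.isclose(H00(0), 1) and np.isclose(H00(1), 0)
assert np.isclose(d(H00, 0), 0, atol=1e-5) and np.isclose(d(H00, 1), 0, atol=1e-5)
assert np.isclose(H01(1), 1) and np.isclose(H01(0), 0)
assert np.isclose(H10(0), 0) and np.isclose(d(H10, 0), 1, atol=1e-5)
assert np.isclose(H11(1), 0) and np.isclose(d(H11, 1), 1, atol=1e-5)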
§ PROOF OF THE ERROR ESTIMATE IN THE CONTINUOUS FORM

For simplicity, we consider a simulation region of cuboid shape, and denote L_α = |a_α|, so that a_α = L_α e_α, where e_α is the unit vector in direction α. In the reciprocal space, a_α^∗ = (1/L_α) e_α. The error estimate of the homogeneity error of the ik-differentiation is

|ℰ^ik_homo|^2 = 2 q^2 Q^2 ∑_m G^2(m) ∑_α,l≠0 |φ̂(m_α + l K_α)/φ̂(m_α)|^2.

The summation on the r.h.s. can be considered as an approximation to the integration

|ℰ^ik_homo|^2 = 2 q^2 Q^2 ∫ dm G^2(m) ∑_α,l≠0 |φ̂(m_α + l K_α)/φ̂(m_α)|^2.

We change the integration variable from m_α to μ_α = m_α/K_α; then the error estimate (<ref>) becomes

|ℰ^ik_homo|^2 = 2 q^4 ∫ dμ N K_1 K_2 K_3 G^2(∑_α μ_α h^{-1} e_α) ∑_α,l≠0 |φ̂(μ_α + l)/φ̂(μ_α)|^2,

where we used the identity Q^2 = N q^2, and

m = ∑_α m_α a_α^∗ = ∑_α μ_α K_α (1/L_α) e_α = ∑_α μ_α h^{-1} e_α.

In the last equation of (<ref>), we assumed that the mesh spacing h_α = L_α/K_α is roughly the same in all directions, i.e. h_α ≈ h. The notation φ̂ in Eq. (<ref>) is the Fourier transform of the interpolation basis represented by the new variable μ:

φ̂(μ) = 1/K ∫_{-C}^{C} φ(x) e^{-2πiμx} dx = 1/K ∫_{-K/2}^{K/2} φ(x) e^{-2πimx/K} dx = φ̂(m).

In the second equation of (<ref>), we used the fact that the interpolation basis φ is supported on [−C, C]. The function G is defined by

G(m) = −(2m/V) exp(−π^2 m^2/β^2)/m^2, |m| ≠ 0.

It is easy to show that

N K_1 K_2 K_3 G^2(m) = (4ρ/h^3) exp(−2π^2 m^2/β^2)/m^2, |m| ≠ 0.

Inserting (<ref>) into the error estimate (<ref>) yields

|ℰ^ik_homo|^2 = (8ρq^4β/(h^3β^3)) ∫ dμ [exp(−2π^2 |μ|^2/(βh)^2) / (|μ|^2/(βh)^2)] ∑_α,l≠0 |φ̂(μ_α + l)/φ̂(μ_α)|^2.

The continuous error estimate (<ref>) is proved. The continuous estimate of the homogeneity error of the analytical differentiation (<ref>), and the estimates of the correlation errors of the ik- and analytical differentiations (<ref>) and (<ref>), can be proved analogously.

If we discretize the continuous form of the error estimate (<ref>) by K_α discretization points in direction α, we have, by replacing dμ_α by 1/K_α and μ_α by m_α/K_α,

|ℰ^ik_homo|^2 = (8ρq^4β/(h^3β^3)) ∑ 1/(K_1K_2K_3) [exp(−2π^2 |∑_α (m_α/K_α) e_α|^2/(βh)^2) / (|∑_α (m_α/K_α) e_α|^2/(βh)^2)] ∑_α,l≠0 |φ̂(m_α + l K_α)/φ̂(m_α)|^2,

where we have used Eq. (<ref>). Noticing that m = ∑_α (m_α/(hK_α)) e_α and the identity (<ref>), the standard error estimate (<ref>) is recovered. Therefore, the standard error estimate is the discretization of the integration in the continuous form, and the difference vanishes as the number of discretization nodes K_α goes to infinity.

[vanderSpoel2006origin] D. van der Spoel and P. J. van Maaren. The origin of layer structure artifacts in simulations of liquid water. Journal of Chemical Theory and Computation, 2(1):1–11, 2006.
[ewald1921die] P. P. Ewald. Die Berechnung optischer und elektrostatischer Gitterpotentiale. Ann. Phys., 369(3):253–287, 1921.
[pollock1996comments] E. L. Pollock and J. Glosli. Comments on P3M, FMM, and the Ewald method for large periodic Coulombic systems. Computer Physics Communications, 95(2-3):93–110, 1996.
[pronk2013gromacs] S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit. Bioinformatics, page btt055, 2013.
[phillips2005scalable] J. C. Phillips, R. Braun, W. Wang, J. Gumbart, E. Tajkhorshid, E. Villa, C. Chipot, R. D. Skeel, L. Kale, and K. Schulten. Scalable molecular dynamics with NAMD. Journal of Computational Chemistry, 26(16):1781–1802, 2005.
[plimpton1995fast] S. Plimpton. Fast parallel algorithms for short-range molecular dynamics. Journal of Computational Physics, 117(1):1–19, 1995.
[darden1993pme] T. Darden, D. York, and L. Pedersen. Particle mesh Ewald: An N·log(N) method for Ewald sums in large systems. The Journal of Chemical Physics, 98:10089, 1993.
[essmann1995spm] U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee, and L. G. Pedersen. A smooth particle mesh Ewald method. The Journal of Chemical Physics, 103(19):8577, 1995.
[hockney1988computer] R. W. Hockney and J. W. Eastwood. Computer Simulation Using Particles. IOP, London, 1988.
[deserno1998mue1] M. Deserno and C. Holm. How to mesh up Ewald sums. I. A theoretical and numerical comparison of various particle mesh routines. The Journal of Chemical Physics, 109:7678, 1998.
[hedman2006ewald] F. Hedman and A. Laaksonen. Ewald summation based on nonuniform fast Fourier transform. Chemical Physics Letters, 425(1-3):142–147, 2006.
[pippig2013pfft] M. Pippig. PFFT: An extension of FFTW to massively parallel architectures. SIAM Journal on Scientific Computing, 35(3):C213–C236, 2013.
[deserno1998mue2] M. Deserno and C. Holm. How to mesh up Ewald sums. II. An accurate error estimate for the particle–particle–particle-mesh algorithm. The Journal of Chemical Physics, 109:7694, 1998.
[wang2010optimizing] H. Wang, F. Dommert, and C. Holm. Optimizing working parameters of the smooth particle mesh Ewald algorithm in terms of accuracy and efficiency. The Journal of Chemical Physics, 133:034117, 2010.
[ballenegger2012convert] V. Ballenegger, J. J. Cerdà, and C. Holm. How to convert SPME to P3M: influence functions and error estimates. Journal of Chemical Theory and Computation, 8:936–947, 2012.
[wang2012numerical] H. Wang, P. Zhang, and C. Schütte. On the numerical accuracy of Ewald, smooth particle mesh Ewald, and staggered mesh Ewald methods for correlated molecular systems. Journal of Chemical Theory and Computation, 8(9):3243–3256, 2012.
[nestler2016parameter] F. Nestler. Parameter tuning for the NFFT based fast Ewald summation. Frontiers in Physics, 4, 2016.
[gao2017kaiser] X. Gao, J. Fang, and H. Wang. Kaiser-Bessel basis for the particle-mesh interpolation. Physical Review E, 2017.
[ballenegger2009simulations] V. Ballenegger, A. Arnold, and J. J. Cerdà. Simulations of non-neutral slab systems with long-range electrostatic interactions in two-dimensional periodic boundary conditions. The Journal of Chemical Physics, 131(9):094107, 2009.
[de1980simulation] S. W. de Leeuw, J. W. Perram, and E. R. Smith. Simulation of electrostatic systems in periodic boundary conditions. I. Lattice sums and dielectric constants. Proceedings of the Royal Society of London A, 373:27–56, 1980.
[de1980simulation2] S. W. de Leeuw, J. W. Perram, and E. R. Smith. Simulation of electrostatic systems in periodic boundary conditions. II. Equivalence of boundary conditions. Proceedings of the Royal Society of London A, 373:57–66, 1980.
[frenkel2001understanding] D. Frenkel and B. Smit. Understanding Molecular Simulation. Academic Press, Orlando, FL, USA, 2010.
[wang2016multiple] H. Wang, X. Gao, and J. Fang. Multiple staggered mesh Ewald: Boosting the accuracy of the smooth particle mesh Ewald method. Journal of Chemical Theory and Computation, 12:5596–5608, 2016.
[fletcher1987practical] R. Fletcher. Practical Methods of Optimization. 1987.
[king2009dlib] D. E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755–1758, 2009.
[jorgensen1983comparison] W. L. Jorgensen, J. Chandrasekhar, J. D. Madura, R. W. Impey, and M. L. Klein. Comparison of simple potential functions for simulating liquid water. The Journal of Chemical Physics, 79(2):926–935, 1983.
[gao2016sampling] X. Gao, J. Fang, and H. Wang. Sampling the isothermal-isobaric ensemble by Langevin dynamics. The Journal of Chemical Physics, 144(12):124113, 2016.
http://arxiv.org/abs/1706.08741v1
{ "authors": [ "Han Wang", "Jun Fang", "Xingyu Gao" ], "categories": [ "physics.chem-ph", "physics.comp-ph" ], "primary_category": "physics.chem-ph", "published": "20170627091711", "title": "The optimal particle-mesh interpolation basis" }
http://arxiv.org/abs/1706.08281v1
{ "authors": [ "Camille Coron", "Clément Calenge", "Christophe Giraud", "Romain Julliard" ], "categories": [ "stat.AP" ], "primary_category": "stat.AP", "published": "20170626084655", "title": "Estimation of species relative abundances and habitat preferences using opportunistic data" }
Roaming across the Castle Tunnels: an Empirical Study of Inter-App Navigation Behaviors of Android Users Ziniu Hu, Yun Ma, Qiaozhu Mei, Jian Tang December 30, 2023 ========================================================================================================

Within the mining discipline, mine planning is the component that studies how to transform the information about the ore resources into value for the owner. Among the first decisions taken in the mine planning process is the choice of an exploitation method: it can be open-pit, which is achieved by digging from the surface, or it can be underground mining, which is done by constructing shafts and tunnels to access the mineralized zones. Other relevant products of the planning process are the production plan, which indicates how much will be produced in each time period, and the mine scheduling, which backs up the production plan by specifying what parts of the mine will be extracted in order to reach the production targets. A mine scheduling is constructed by means of a block model, which is a partition of the terrain into a 3-dimensional array of regular blocks. For each block, geostatisticians construct estimations of the different parameters like ore content, density, etc. The block model is considered an input to the mine planning process. The operation of a mine is constrained by the overall capacity of transportation, which is translated into a number of tons per period (for example, a number of tons per day) and therefore into the number of blocks that can be extracted from the mine. Similarly, the overall tonnage of blocks sent for processing is also bounded by the plant processing capacity. Notice that, in the case of open-pit mines, not all blocks qualify for processing, as an important part of the blocks may not contain enough material to generate revenue but must be extracted in order to access attractive blocks. Open-pit mines are also "special" in the sense that extraction must respect slope constraints: in order to reach blocks by digging from the surface, there is a minimum set of blocks that have to be extracted before. Indeed, the shape of the pit must be such that the stability of the walls and the accessibility are ensured. This translates into a set of precedence constraints between the blocks.
Additional constraints on the operation of the mine may include blending constraints, which limit the average value of processed blocks for a certain attribute (like rock hardness or pollutant contents). Considering all these elements, a mine scheduling can be seen as a (non-injective) mapping from the set of blocks towards the time periods. Several blocks share the same extraction time. An optimal block scheduling maximizes the discounted value of the extracted blocks (period by period), called the net present value (NPV). However, to be feasible, a mine schedule must respect the capacity, blending and slope constraints. The optimal open-pit block scheduling problem (OPBSP) consists, therefore, in finding such an optimal block scheduling.

Related to block scheduling, and central in this article, is the notion of block sequence. A block sequence is a total order on the set of blocks, such that a larger rank means a later extraction (due to precedence constraints). Block sequences can easily be converted into block schedules by grouping blocks so that the overall capacity and blending constraints are satisfied (or, equivalently, by replacing the slope constraints by the precedences given by the sequence).

OPBSP is mostly formulated in a deterministic setting, where all values are supposed to be known to the planner before the planning phase: block model, prices and the operation of the mine (no failure). The traditional approach to optimal open-pit block scheduling uses Binary Integer Programming (see Appendix <ref>). A very general formulation of OPBSP is due to Johnson <cit.>, who presented the problem of block scheduling under slope, capacity and blending constraints (the last ones given by ranges of the processed ore grade) within a multi-destination setting (that is, the optimization procedure yields as an output the process to apply to a given block). Unfortunately, the computational capabilities at the time made it impossible to solve the formulation of Johnson for realistic case studies. Alternatively to the work of Johnson, Lerchs and Grossman <cit.> proposed a very simplified version of OPBSP in which block destinations are fixed in advance and the only constraint considered is the slope constraint; that is, the problem reduces to selecting a subset of blocks such that the contained value is maximized while the precedence constraints induced by the slope angles hold. This problem is known as the ultimate pit or final pit problem. Lerchs and Grossman also presented two key results: i) an efficient algorithm for solving the ultimate pit problem; ii) reducing the economic value of any given block makes the optimal solution of the ultimate pit problem shrink (that is, if the values of the blocks decrease, the new solution is a subset of the original one). These two properties make it possible to produce nested pits and therefore, by trial and error, to introduce time and look for block sequences that satisfy other constraints like capacity. More detailed reviews can be found in <cit.> (for a broad survey on operations research in mining) and <cit.> (for the specific case of open-pit). Finally, an approach closer to the one taken in this article is due to <cit.>, which abstracts the mine as a set of columns and embeds the problem in the context of control theory.

Regarding mine planning under uncertainty, since the beginning of the nineties, an increasing number of open-pit mining strategies with uncertainty have been developed, following two articles by Ravenscroft <cit.>, and Denby and Schofield <cit.>.
The first one presents conditional simulation, which is a technique used, for a mine with a known distribution, to generate sets of equally probable profiles called scenarios. We shall not dwell on the issue of the design of statistical models of ore distribution with uncertainty, using geostatistical tools such as kriging or others <cit.>, and their simulation. The Denby and Schofield <cit.> paper explains how to include uncertainty in a genetic algorithm, without precisely fixing the probabilistic framework. For almost two decades, most of the stochastic models have been based on the Ravenscroft approach, and present heuristics using a predefined set of scenarios. Dimitrakopoulos has been one of the driving forces behind this trend, and has developed a large number of scenario-based strategies <cit.>. The solution is generally searched for in the form of a plan, that is, an open-loop strategy: we have to plan and apply the entire scheduling without modifying it in the process of extraction, even if we get more information on the profile by discovering the exact value of the blocks. Golamnejad, Osanloo, and Karimi <cit.> and Boland, Dumitrescu and Froyland <cit.> have also developed scenario-based strategies with a well defined mathematical and probabilistic framework: stochastic programming on a scenario tree. This allows solutions to be defined on a tree rather than only on a line (time), which clearly is an improvement. We are interested in how the mine scheduling optimization problem is formulated and possibly solved under uncertainty. We aim at designing solutions as adaptive strategies.

The paper is organized as follows; our objectives are twofold. On the one hand, we introduce in Section <ref> the dynamical optimization approach to mine scheduling in the deterministic case. In Section <ref>, we propose a class of (suboptimal) adaptive strategies to attack the optimal OPBSP, the so-called index strategies. We show that they provide upper and lower bounds for the optimal NPV. We provide numerical results in Section <ref>. On the other hand, we introduce in Section <ref> a theoretical framework for OPBSP under uncertainty and learning.

§ THE DYNAMICAL APPROACH TO OPEN-PIT BLOCK SCHEDULING

As in <cit.>, we define the mine state as a collection of pit depths at a certain number of surface locations, and we represent the evolution of this state via a dynamic model that uses the mining action as control input. In this setting, an admissible profile is one that respects local angular constraints at each point, and the open-pit mine optimal scheduling problem consists in finding a sequence of blocks and admissible profiles which maximizes the intertemporal discounted extraction profit.

§.§ A state control dynamical model

To simplify the description of the algorithms in this section, we identify the blocks by their vertical position d ∈ {1,…,D} (d for depth) and their horizontal position c ∈ 𝒞 (c for column). In the sequel, it will also be convenient to see the mine as a collection of columns 𝒞, of cardinal C, indexed by c, each column containing D blocks. We assume that blocks are extracted sequentially under the following hypotheses:

* it takes one time unit to extract one block (thus, the time unit is different from the one in Appendix <ref>);
* only blocks at the surface may be extracted;
* a block cannot be extracted if the slope made with its neighbors is too high, due to geotechnical constraints on mine wall slopes;
* a retirement option is available where no block is extracted.

Denote discrete time by t = t_0, …, T, where the horizon T may be finite or infinite.
At time t, the state of the mine is a profile δ(t) = (δ_c(t))_{c∈𝒞} ∈ 𝒟 = {1,…,D+1}^𝒞, where δ_c(t) ∈ {1,…,D+1} is the vertical position of the top block with horizontal position c ∈ 𝒞. An admissible profile is one that respects local angular constraints at each point, due to physical requirements. A state δ = (δ_c)_{c∈𝒞} is said to be admissible if the geotechnical slope constraints are respected, in the sense that

δ_{c'} − δ_c ≤ 1, ∀ c' ∈ V(c), c ∈ 𝒞,

where V(c) is the set made of the columns adjacent to column c. Denote by 𝔻 ⊂ 𝒟 the set of admissible states satisfying the above slope constraints (<ref>). Notice that δ_{c'} − δ_c ≤ 1 may be replaced by δ_{c'} − δ_c ≤ k according to slope constraints, or even by non-isotropic local slope constraints. Implicitly, all cuboids have the same dimensions, but we could deal with less regular situations.

A decision is the selection of a column in 𝒞, the top block of which will be extracted. A decision may also be the retirement option, which we identify with an additional fictitious column denoted c̄. Thus, a decision u is an element of the set 𝒰 = 𝒞 ∪ {c̄}. The relation between column sequencing and block scheduling is made explicit in Appendix <ref>. At time t, if a column u(t) ∈ 𝒞 is chosen at the surface of the open-pit mine, the corresponding block is extracted and the profile δ(t) = (δ_c(t))_{c∈𝒞} becomes

δ_c(t+1) = δ_c(t) + 1 if c = u(t); δ_c(t) else.

In case of the retirement option u(t) = c̄, then δ(t+1) = δ(t) and the profile does not change. In other words, the dynamics is given by δ(t+1) = F(δ(t), u(t)), where

F_c(δ, u) = δ_c + 1 if u = c ∈ 𝒞; δ_c if u ≠ c or u = c̄.

Indeed, the top block of column c is no longer at depth δ_c(t) but at δ_c(t) + 1, while all other top blocks remain. Of course, not all decisions u(t) = c are possible, either because there are no blocks left in column c (δ_c = D+1) or because of slope constraints. When in state δ ∈ 𝔻, the decision u ∈ 𝒰 is admissible if the future profile F(δ, u) ∈ 𝔻, namely if it satisfies the geotechnical slope constraints. This may easily be transformed into a condition u ∈ B(δ), where B(δ) ≜ {u ∈ 𝒰 | F(δ, u) ∈ 𝔻}.

§.§ Intertemporal profit maximization

The open-pit mine optimal scheduling problem consists of finding a sequence of admissible blocks which maximizes an intertemporal discounted extraction profit. It is assumed that the value of blocks differs with depth and column, because the richness of the mine, as well as the extraction costs, is not uniform among the zones. The profit model states that each block has an economic value R(d, c) ∈ ℝ, supposed to be known (in the deterministic case). By convention, R(d, c̄) = 0 when the retirement option is selected. Selecting a column u(t) ∈ 𝒞 at the surface of the open-pit mine, and extracting the corresponding block at depth δ_{u(t)}(t), yields the value R(δ_{u(t)}(t), u(t)). When u(t) = c̄, there is no corresponding block and the notation δ_{c̄}(t) is meaningless, but this is without incidence since the value R(δ_{c̄}(t), c̄) = 0. With a discounting factor function ρ(t) (for instance, ρ(t) = γ^t, or ρ(t) = γ^{y(t)} for a yearly discount, where y(t) = ⌊t/n⌋ is the "year" of time t and n is the number of blocks extracted per year), the value of a sequence (finite or infinite) u(·) ≜ (u(t_0), …, u(T)) is given by the criterion

Crit(u(·)) ≜ ∑_{t=t_0}^{T} ρ(t) R(δ_{u(t)}(t), u(t)).

Finding the value of the mine is solving the optimization problem

NPV_opt = max { ∑_{t=t_0}^{T} ρ(t) R(δ_{u(t)}(t), u(t)), (δ(·), u(·)), u(t) ∈ B(δ(t)) },

where the maximum is over all sequences (δ(·), u(·)) which satisfy the slope constraints (<ref>). Any such sequence (δ(·), u(·)) such that Crit(u(·)) = NPV_opt is an optimal scheduling sequence.
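A minimal Python sketch of the profile, the dynamics F and the admissible decision set B(δ) follows (our illustration; columns are indexed 0,…,C−1, profiles are tuples with entries in {1,…,D+1}, and None encodes the retirement option c̄):

def admissible(delta, neighbors):
    # Slope constraints: delta[c'] - delta[c] <= 1 for all c' adjacent to c.
    return all(delta[cp] - delta[c] <= 1
               for c in range(len(delta)) for cp in neighbors[c])

def step(delta, u):
    # Dynamics F: extract the top block of column u (None = retirement).
    delta = list(delta)
    if u is not None:
        delta[u] += 1
    return tuple(delta)

def admissible_decisions(delta, neighbors, D):
    # Columns whose top block can be extracted, plus the retirement option.
    out = [None]
    for c in range(len(delta)):
        if delta[c] <= D and admissible(step(delta, c), neighbors):
            out.append(c)
    return out

# usage: a 1-dimensional mine with 4 columns, all top blocks at the surface
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
delta0 = (1, 1, 1, 1)
print(admissible_decisions(delta0, neighbors, D=3))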
§.§ Dynamic programming equation and the curse of dimensionality

Theoretically, the open-pit mine optimal scheduling problem can be solved by dynamic programming <cit.>. It is well known that the dynamic programming approach suffers from the curse of dimensionality. Indeed, to give a flavor of the numerical complexity of the problem, the set of acceptable states has a cardinal of order 2^10 × 3^4 = 82 944 for a cubic 4 × 4 × 4 mine, and of order 2^16 × 3^8 × 4 ≈ 1.72 × 10^9 for a cubic mine with 5 lateral blocks (5 × 5 × 5 cuboids). Nevertheless, usual mines can reach more than 10^6 blocks, and the dynamic programming approach will not be usable in practice without further state reduction.
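For toy instances, this dynamic programming approach can be implemented directly by memoizing the value function over admissible profiles; the sketch below (our addition, reusing step and admissible from the sketch above, with ρ(t) = γ^t and a finite horizon T) also makes the state-space blow-up tangible.

from functools import lru_cache

def mine_value(R, neighbors, D, gamma=0.9, T=12):
    # Value V(0, delta_0) for a toy mine; R[d-1][c] is the value of the
    # block at depth d of column c (depths are 1-indexed as in the text).
    C = len(neighbors)

    @lru_cache(maxsize=None)
    def V(t, delta):
        if t == T:
            return 0.0
        best = V(t + 1, delta)                      # retirement option
        for c in range(C):
            if delta[c] <= D:
                nxt = step(delta, c)
                if admissible(nxt, neighbors):
                    best = max(best,
                               gamma ** t * R[delta[c] - 1][c] + V(t + 1, nxt))
        return best

    return V(0, tuple([1] * C))

# usage on a 4-column, 3-depth toy mine (12 blocks)
R = [[5, -1, 3, 2],      # depth 1
     [2, 4, -2, 1],      # depth 2
     [-3, 6, 1, -1]]     # depth 3
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(mine_value(R, neighbors, D=3))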
The index of a column can be any function of the block model. We consider the following ones (see Figure <ref> for a few examples). They correspond to existing heuristics that we interpret in terms of index.
* The greedy index I^g, that is, the one that uses as index the economic value of the top-most block in the column (that has not been extracted yet).
* The Gittins index I^G, that calculates the maximum discounted value rate of the blocks in the column, relative to the other columns. Block values are discounted block by block.
* The best-cone index I^{C*}. This index is similar to the previous one, but calculates the values of the cones truncated at different depths, selecting the one with the highest value.
* Toposort I^τ. This is the index attached to the algorithm proposed by <cit.>. To calculate this index, we first solve the linear relaxation of the problem and then set the following value for each block:

T_i = ∑_{t=1}^{T} t y_{it} + (T+1) [ 1 − ∑_{t=1}^{T} y_{it} ].

Here, y_{it} is the (relaxed) binary variable associated with the decision of extracting block i at time period t, and x_{it} = ∑_{s ≤ t} y_{is} (see Appendix <ref> for a detailed formulation). The index then corresponds to the value T_i of the top-most block in the column (that has not been extracted yet).

Well-known techniques in mining can be interpreted as index strategies. For example, the greedy index corresponds to a greedy strategy of always picking for extraction the block on the surface that: (a) is extractable (in terms of slope constraints) and (b) has the highest economic value. Furthermore, the cone index described before is close to the Gershon algorithm <cit.>, which also considers the successors' cone, but intersected with the ultimate pit.

§.§ An upper bound given by the Gittins index strategy We shall now provide upper and lower bounds to the value (<ref>) of the mine by means of index strategies. To each profile h = (h_c)_{c ∈ 𝒞} ∈ ℍ and column c ∈ 𝒞, associate the local state h_c ∈ {1,…,D+1}, which is the vertical position of the top block with horizontal position c. For ρ ∈ ]0,1[, define the Gittins index by

I^G_c(h_c) ≜ sup_{τ = t_0, …, +∞} [ ∑_{s=t_0}^{τ} ρ^s W(h_c + s, c) ] / [ ∑_{s=t_0}^{τ} ρ^s ],

where W(d, c) ≜ 0 when d > D (this corresponds to fictitious blocks with zero values below the mine). With the notations of <ref>, the Gittins index strategy is defined by

u(t) ∈ arg max{ I^G_c(h_c(t)) , c ∈ 𝒞 }, h(t+1) = F(h(t), u(t)).

Suppose that T = +∞, and that the discounting factor function δ(t) in (<ref>) satisfies 0 ≤ δ(t) ≤ ρ^t, with 0 < ρ < 1. The value (<ref>) of the mine is bounded above as follows:

V★ ≤ ∑_{t=t_0}^{+∞} ρ^t W(h_{u(t)}(t), u(t)),

where the sequence u(·) is given by the Gittins index strategy (<ref>) above. A lower bound is given by

J(u^i(·)) ≤ V★,

where the sequence u^i(·) is given by any index strategy respecting the slope admissibility constraints,

u^i(t) ∈ arg max{ I^i_c(h^i_c(t)) , c ∈ 𝔹(h^i(t)) }, h^i(t+1) = F(h^i(t), u^i(t)).

Recall that V★ is the maximal value of (<ref>) among all sequences (h(·), u(·)) which satisfy the slope constraints (<ref>). Therefore, V★ is larger than any J(u(·)), in particular for a sequence u^i(·) given by an index strategy respecting the slope admissibility constraints. This is why (<ref>) holds true. On the other hand, by (<ref>), we have that

V★ ≤ max{ ∑_{t=t_0}^{T} ρ^t W(h_{u(t)}(t), u(t)) , (h(·), u(·)) , u(t) ∈ 𝔹(h(t)) }.

Now, if we relax the slope admissibility constraints u(t) ∈ 𝔹(h(t)), we deduce that

V★ ≤ max{ ∑_{t=t_0}^{T} ρ^t W(h_{u(t)}(t), u(t)) , (h(·), u(·)) }.

Gittins' theorem <cit.> asserts that the optimum of the right hand side is achieved by the Gittins index strategy (<ref>). Indeed, the problem is a deterministic multi-armed bandit, with independent arms since the slope admissibility constraints are relaxed, thus enabling the selection of any column. This is why (<ref>) holds true.

Let NPV_opt be the optimal value V★, NPV_ind the value given by any index strategy respecting the slope constraints, and NPV_ub the value given by the Gittins index strategy without slope constraints, but with a discounting factor function δ(t) which satisfies (<ref>). Then we have the following inequality:

NPV_ind ≤ NPV_opt ≤ NPV_ub.
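As an aside, the Gittins index of a single column is straightforward to compute: it is the best ratio of discounted block values to discounted time over all digging horizons. The sketch below (names ours) also illustrates why a rich block buried under waste can still give a column a high priority.

def gittins_index(values, rho=1/1.1):
    # values[s] = worth of the s-th block below (and including) the current
    # top block of the column; blocks below the mine have value 0 and can
    # never raise the ratio, so finite prefixes suffice.
    best, num, den = float("-inf"), 0.0, 0.0
    for s, v in enumerate(values):
        num += rho**s * v
        den += rho**s
        best = max(best, num / den)
    return best

# A column whose rich block is buried under a waste block:
print(gittins_index([-1.0, 5.0, 0.5]))   # digging two blocks deep pays off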
§ NUMERICAL EXAMPLES In this section, we present and discuss numerical results obtained using index heuristics over a set of synthetic data and the Marvin block model.

§.§ The Marvin dataset The mine considered for this study is a well known mine named Marvin, which is available for use within the mine planning optimization software Whittle from Gemcom. The overall number of blocks in Marvin is about 53,000. The block model contains the following data: block coordinates (x, y and z), copper and gold grades (copper_i and gold_i respectively) and density. From these attributes we calculate a block tonnage w_i (the product of the density by the volume of the block) and the copper content (the product of the tonnage of the block and its copper grade). We aim to maximize overall copper production under a transportation capacity of 30,000 tons per day. Finally, we consider annual time-periods with a yearly discount rate equivalent to a 10% opportunity cost, hence a yearly discount factor ρ = 1/(1 + 0.1).

§.§ Using block sequences to obtain block schedulings First, we present how to transform the output of an indexing strategy into a block scheduling and, therefore, a solution of (OPBSP). We regard the output of an indexing algorithm as a sequence of blocks: a block sequence is a tuple of blocks S = (i_1, i_2, …, i_N) that is compatible with the precedence constraints. A sequence S can be converted into a solution of the open-pit block sequencing problem with capacity constraints by creating nested pits that extract the blocks in the order given by the sequence. More precisely, let us say that P ⊂ 𝔅 is capacity-feasible at time period t if for each resource r we have that

∑_{i ∈ P} a(i, r) ≤ C^+_{r,t}.

We can then follow the next procedure to construct a block scheduling (see the sketch below):
* Set k = 1, t = 1, P_0 = P_1 = ∅.
* While t ≤ T:
* While k ≤ N and (P_t ∪ {i_k}) ∖ P_{t−1} is capacity-feasible at time period t: P_t ← P_t ∪ {i_k}, k ← k+1.
* t ← t+1, P_t ← P_{t−1}.
Notice, however, that there may exist some room for improvement on the obtained block scheduling, as it could happen that the blocks assigned to the very last time-period have a negative overall value. If this is the case, we reset these blocks as unextracted. An alternative way to convert a block sequence into a block scheduling is the following. Given the sequence S = (i_1, i_2, …, i_K), we set 𝔅' = {i_k : k = 1, 2, …, K} and 𝒫' = {(i_{k+1}, i_k) : k = 1, 2, …, K−1}, and then directly solve the instance (𝔅', 𝒫', v, A, T, ρ, C^+, C^−). This is equivalent to the procedure described above with the last "cleaning" phase.
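A minimal Python transcription of the above procedure (without the final "cleaning" phase; all names are ours):

def sequence_to_schedule(seq, usage, cap, T):
    # seq: blocks in extraction order; usage[i][r] = a(i, r);
    # cap[t][r] = C^+_{r,t}. Returns nested pits P_1 <= ... <= P_T.
    pits, prev, k = [], set(), 0
    for t in range(T):
        cur = set(prev)
        spent = {r: 0.0 for r in cap[t]}
        while k < len(seq):
            i = seq[k]
            if any(spent[r] + usage[i][r] > cap[t][r] for r in cap[t]):
                break                  # next block does not fit this period
            for r in cap[t]:
                spent[r] += usage[i][r]
            cur.add(i); k += 1
        pits.append(cur)
        prev = cur
    return pits

# Toy run: 5 blocks, one resource ("ton"), capacity 2 blocks per period.
seq = ["a", "b", "c", "d", "e"]
usage = {i: {"ton": 1.0} for i in seq}
cap = [{"ton": 2.0}] * 3
print([sorted(p) for p in sequence_to_schedule(seq, usage, cap, T=3)])
# -> [['a', 'b'], ['a', 'b', 'c', 'd'], ['a', 'b', 'c', 'd', 'e']]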
§.§ Results and discussion We now present the different results obtained for the heuristics and data sets, and we comment on the findings of the numerical experiments. Numerical experiments were run with an Intel Pentium Dual Core, 2.8 GHz processor running Linux 2.6.30-1. LPs were solved using the GNU Linear Programming Toolkit (GLPK) with the primal simplex method. Results in running time and economic value (NPV) are presented in Table <ref>. We observe that, while TopoSort obtains better results (closer to the LP upper bounds), this approach does not "scale" well, as it does not produce feasible solutions for the Marvin instances. Indeed, the main difficulty in this case is to solve the linear relaxation (LP), which did not end within reasonable time (12 hours). Conversely, the index strategies provide mixed results for the bounds, but their execution time is quite small, making them good candidates for fast schedulers and therefore usable with uncertainty scenarios, for example, on the grades. We observe that there is a lot of room to improve the speed of the heuristics by optimizing the code or, for example, parallelizing some of the computations.

§ A MATHEMATICAL FRAMEWORK FOR MINE SCHEDULING UNDER UNCERTAINTY We present here a general probabilistic framework for the open-pit mine scheduling problem, which allows a dynamical use of information (learning), permitting the development of adaptive strategies, and which includes the planning solutions as a particular case. The approach is mostly mathematical and formal. However, in the last part, we suggest possible heuristics for future research.

§.§ Block attributes Denote discrete time by t = t_0, …, T, where the horizon T is supposed to be finite for simplicity. Denote by 𝔅 the set of all blocks. Each block b ∈ 𝔅, when extracted in period t, is characterized by an n_A-vector of attributes A_b(t) ∈ 𝒜 ≜ ℝ^{n_A}. These attributes can for instance be the rock and ore volumes, price, cost, etc. In the deterministic model, these values are simple real numbers, perfectly known, but in our case A_b(t) is an uncertain vector. This uncertain vector A_b(t) summarizes various sources of uncertainty, and is the basis of the construction of the worth W_b(t) of block b at time t. It can for instance be of the following form, if the mine contains d different ores:

A_b(t) = (P(t), Q(b), C(b,t), …) and W_b(t) = P(t)·Q(b) − C(b,t),

where P(t) ∈ ℝ^d is an uncertain vector representing the selling prices per unit of the d different ores at time t, Q(b) ∈ ℝ^d is an uncertain vector representing the amount of each ore in the block b, and C(b,t) is an uncertain variable representing the extraction cost of the block b at time t, each of them being coordinates of the attributes vector A_b(t). This formulation presents the advantage of splitting the price distribution modelling and the distribution of the different ores in the mine; it is of course a simple instance that can be replaced by more sophisticated models including processing costs or other geotechnical data.

§.§ Scenarios In the sequel, we will use the notation

A(t) ≜ (A_b(t))_{b ∈ 𝔅}

for the collection of the attributes of the mine blocks at a time period t. A sequence

A(·) ≜ (A(t_0), …, A(T))

is called a scenario and belongs to the product set

Ω ≜ ∏_{t=t_0}^{T} ∏_{b ∈ 𝔅} 𝒜 = ℝ^{n_A·(T−t_0+1)·N},

which is the set of all possible scenarios. The situation where Ω is a singleton (a unique scenario) corresponds to the deterministic case.

§.§ A priori information data on the scenarios Additional a priori information on the scenarios is generally given either by probabilistic or by set membership settings.

§.§.§ Stochastic assumptions Notice that the vectors A_b(t) are a priori not independent, neither with respect to b (spatially), nor with respect to t (temporally). Indeed, the price of raw materials is highly correlated in time, and a strong spatial correlation exists in the repartition of the ore. Many models of the orebody are based on the notion of variogram, which is a geostatistical tool giving an index of the spatial correlation of a certain type of ore.
It gives a representation of the typology of the ore in a site, some metals such as gold tending to aggregate into nuggets (with a strong short-distance correlation but a lower long-distance one), whereas others like copper have a more long-distance dependence. It opens the way to orebody modeling such as kriging, a widespread interpolation method in geostatistics. In the probabilistic formalism, the set Ω of all scenarios is equipped with the Borel σ-field ℱ of ℝ^{n_A·(T−t_0+1)·N}. The A_b(t) become random vectors, and the orebody is represented by a joint distribution law

ℒ(A_b(t), b ∈ 𝔅, t ∈ [t_0,…,T]),

which is a probability on (Ω, ℱ). For instance, in the case of a unique type of ore, we can model (Q(b))_{b ∈ 𝔅} by a Gaussian vector of size N, characterized by its mean vector μ = (𝔼[Q(b)])_{b ∈ 𝔅} and its covariance matrix Σ = (Cov(Q(b), Q(b')))_{b,b' ∈ 𝔅}, with constant price P(t) = P and cost C(b,t) = C. The set of the worths W_b(t), b ∈ 𝔅, t ∈ [t_0,…,T], is then a Gaussian vector of size N·(T−t_0+1) whose mean vector and covariance matrix can be calculated by means of μ and Σ.

§.§.§ Set membership For a given block b and a given time period t, A_b(t) can take its value in a certain set 𝕊(b,t) ⊂ ℝ^{n_A}, which depends on the model. In the most general case, if we know nothing about the mine, 𝕊(b,t) will be ℝ^{n_A}, but it can for instance be reduced to intervals or even to a finite number of values, or to a singleton in a deterministic model.

§.§ Decisions and constraints Each period of time (year, for instance), we can extract a certain number of blocks, and therefore we model our decision by a variable u(t) ∈ 𝕌 ≜ 2^𝔅, corresponding to the blocks removed at time t ∈ [t_0,…,T], which form a subset of 𝔅. Here, 2^𝔅 denotes the set of subsets of 𝔅 (the power set of 𝔅). Since 𝕌 is a finite set, we equip it with the complete σ-field 𝒰 = 2^𝕌. We introduce the notations:

u^t ≜ (u(t_0), …, u(t)) and u(·) ≜ (u(t_0), …, u(T)).

The set Ω × 𝕌^{T−t_0+1} is called the history space. Elements of the set ℋ_t ≜ Ω × 𝕌^{t−t_0+1} represent history up to time t. To capture slope and uncertain capacity constraints, we can restrict decisions as belonging to a subset 𝒰_ad(t, A(·), u^{t−1}) of 𝕌 as follows:

u(t) ∈ 𝒰_ad(t, A(·), u^{t−1}).

§.§ On-line information After having seen a priori information data on the scenarios, we now turn to on-line information available for the planner at time t. In essence, it is built upon the attributes (A_b(t))_{b,t} we have discovered, and thus it a priori also depends on the past extractions u^{t−1} (i.e. the choices made on [t_0,…,t−1]). Mathematically, we shall represent information at time t as a σ-algebra ℐ_t on the history space Ω × 𝕌^{T−t_0+1}.
* The blind information pattern is ℐ_t = {Ω, ∅} ⊗ {𝕌^{T−t_0+1}, ∅}, where the decision-maker cannot distinguish elements in the history space (he cannot even recall his past decisions).
* The anticipative point of view corresponds to a stationary and constant ℐ_t = ℱ ⊗ {𝕌^{T−t_0+1}, ∅}. The decision-maker knows the attributes of each block at each time, and knows them in advance: he is a visionary decision-maker. A visionary decision-maker having recall of his past decisions would be modeled as ℐ_t = ℱ ⊗ (⊗_{s=t_0}^{t−1} 𝒰).
* A causal information pattern is one in which the decision-maker cannot base his decision at time t upon his future decisions, and it is represented by the condition

ℐ_t ⊂ ℱ ⊗ (⊗_{s=t_0}^{t−1} 𝒰).

* In the cumulative information pattern, let us denote by

E(t, u^{t−1}) ≜ ∪_{s=t_0}^{t−1} u(s) ⊂ 𝔅

the set of the blocks which have been removed before time t following the sequence u^{t−1} of decisions in the periods [t_0,…,t−1].
If we assume that, each time we extract a block b at period t, we learn the exact value of the uncertainty A_b(t), we define the information as

ℐ_t = σ{ (A_b(s), u^{s−1}) , b ∈ E(s, u^{s−1}), s ∈ [t_0,…,t−1] },

where we have abusively identified (A_b(s), u^{s−1}) with the coordinate random variable on the history space Ω × 𝕌^{T−t_0+1}. This formulation is adapted to a dynamical strategy, in which we learn step-by-step the information depending on our past choices.

§.§ Adaptive strategies We now have the tools to define strategies adapted to on-line information. We assume that the information pattern is causal, that is, satisfies (<ref>). A (causal) strategy is a sequence γ = (γ_t)_{t=t_0,…,T} of policies

γ_t : Ω × 𝕌^{t−t_0} → 𝕌

such that, for all t = t_0, …, T, γ_t is measurable with respect to ℐ_t. Once a strategy γ and a scenario A(·) are given, decisions are inductively deduced by

u(t) = γ_t(A(·), u^{t−1}).

Now, strategies will be our optimization variables. If the family of sets 𝒰_ad(t, A(·), u^{t−1}) in (<ref>) is measurable with respect to ℐ_t, we may restrict ourselves to strategies in the admissible set Γ_ad of the policies compatible with the constraints (capacity constraints, slope constraints, etc.). For instance, a capacity constraint of k blocks per time unit will imply that, for γ ∈ Γ_ad, the decisions u(t) generated by γ contain no more than k blocks; for a certain type of slope constraints and precedence extraction relations, it will imply that the decisions u(t) generated by γ are compatible with the constraints induced by the blocks E(t, u^{t−1}) already removed. A strategy γ ∈ Γ_ad is said to be an open-loop strategy if γ_t is a constant mapping for all t. In other words, an open-loop strategy plans the entire extraction sequence before starting it, and does not modify the sequence even if one gets information over time. In the more general case in which γ depends on the information, the strategy is said to be a closed-loop strategy. It corresponds to the adaptive case.

§.§ Decision criteria under uncertainty For a given scenario A(·) and a given control sequence u(·), the sum of discounted profits (NPV) is given by

J(A(·), u(·)) = ∑_{t=t_0}^{T} δ(t) ∑_{b ∈ u(t)} W_b(t).

For a given scenario A(·) and a given strategy γ (adapted to the information pattern ℐ_t, t = t_0, …, T), let us put

J^γ(A(·)) ≜ J(A(·), u(·)) where u(t) = γ_t(A(·), u^{t−1}).

Now, contrary to deterministic optimization, we do not know in advance the scenario A(·). How the decision-maker aggregates (<ref>) with respect to the uncertainties, before optimizing, reflects his sensitivity to risk. The most common aggregates are the robust (or worst-case) and the expected criteria, but we also present other examples; a numerical sketch of these aggregates follows the list.

§.§.§ The expected criterion The expected optimization problem is

max_{γ ∈ Γ_ad} 𝔼^ℙ[ J^γ(A(·)) ],

where 𝔼^ℙ denotes the mathematical expectation with respect to a probability ℙ = ℒ(A_b(t), b ∈ 𝔅, t ∈ [t_0,…,T]) on the space Ω of scenarios. This formulation aims to maximize the mean NPV, that is, the average value over all possibilities, weighted by their probability of happening. It is the best formulation one can choose in terms of average gain, but it does not penalize the possible realizations of the worst cases.

§.§.§ The robust criterion The robust optimization problem is

max_{γ ∈ Γ_ad} min_{A(·) ∈ Ω} J^γ(A(·)).

The strategy given by this formulation guarantees the maximal NPV if the worst case happens.
§.§.§ The multi-prior approach Suppose that the space Ω of scenarios is equipped with different probabilities ℙ in a set 𝔓, reflecting ambiguity with respect to the stochastic model. The multi-prior approach is a combination of the robust and the expected criteria, taking the worst belief in terms of expected NPV:

max_{γ ∈ Γ_ad} min_{ℙ ∈ 𝔓} 𝔼^ℙ[ J^γ(A(·)) ].

§.§.§ An expected criterion under probability constraint This last formulation is similar to the maximization of the expected NPV, but with an additional constraint to handle the risk. Given two parameters α ∈ ℝ and p ∈ [0,1], the expected optimization problem under probability constraint is

max_{γ ∈ Γ_ad} 𝔼^ℙ[ J^γ(A(·)) ]

under the restriction that

ℙ[ J^γ(A(·)) ≤ α ] ≤ p.

The meaning of this formulation is to maximize the expected profit, with the condition that the chosen strategy will give, with high probability 1−p, at least a certain gain α. Risk measures (Value-at-Risk, Conditional Value-at-Risk, etc.) could also be taken for aggregation <cit.>.
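To make the aggregation step concrete, the following sketch estimates the expected and robust criteria, and the probability appearing in the chance constraint, for a fixed open-loop strategy over sampled scenarios. The toy scenario model (i.i.d. Gaussian block worths) and all names are ours.

import random

def npv(profits, rho=1/1.1):
    # Discounted sum of the per-period profits realised on one scenario.
    return sum(rho**t * p for t, p in enumerate(profits))

def evaluate(strategy, scenarios):
    # Expected and worst-case NPV of a fixed strategy over sampled scenarios.
    values = [npv(strategy(A)) for A in scenarios]
    return sum(values) / len(values), min(values)

random.seed(0)
scenarios = [[random.gauss(1.0, 0.5) for _ in range(10)] for _ in range(1000)]
open_loop = lambda A: A        # extract everything in a fixed order
expected, worst = evaluate(open_loop, scenarios)
print(f"expected NPV = {expected:.2f}, robust (worst-case) NPV = {worst:.2f}")

alpha = 5.0                    # estimate P[NPV <= alpha] for the chance constraint
p_hat = sum(npv(open_loop(A)) <= alpha for A in scenarios) / len(scenarios)
print(f"estimated P[NPV <= {alpha}] = {p_hat:.3f}")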
§.§ From planning towards adaptive solutions As we have seen, since the nineties, a certain number of "scenario-based strategies" have been proposed in the literature. The common denominator of these approaches is the use of conditional simulation (or any other simulation method), using the distribution law of the orebody, to generate a set of representative scenarios of the mine. Then, the solution is generally searched as a planning, that is, an open-loop strategy. A schematic way to represent the elaboration of a scenario-based strategy is the following:

ℒ ⇝ (A_j(·))_{j ∈ 𝒥} ⇝ u(·),

that is, we sample the distribution law ℒ to obtain a set 𝒥 of scenarios. Then, with one or another method, we use these scenarios to elaborate an open-loop decision sequence u(·). We suggest that this approach may be extended in the spirit of the open-loop with feedback control (OLFC) <cit.>. We do not detail the mathematics, but simply sketch the method. In the probabilistic setting, we assume that the arrival of an observation at time t allows us to update the conditional distribution ℒ^t on the space Ω of scenarios, knowing past observations. Then, the sketch is

ℒ^0 ⇝ (A_j(·))_{j ∈ 𝒥} ⇝ (u^1(1), …, u^1(T)); apply u^1(1) ↪ ℒ^1 ⇝ (A_j(·))_{j ∈ 𝒥} ⇝ (u^1(1), u^2(2), …, u^2(T)); apply u^2(2) ⋯ ↪ ℒ^{T−1} ⇝ (A_j(·))_{j ∈ 𝒥} ⇝ (u^1(1), …, u^T(T)); apply u^T(T),

returning a closed-loop strategy u(·). To end this section, let us stress the fact that index methods are well adapted to the uncertain case, where the index may be a function of the conditional distribution ℒ^t.

§ CONCLUSIONS We have presented the dynamic optimization approach to the open-pit block scheduling problem, a relevant problem in the mining industry that remains elusive due to its size. We have proposed heuristics based on so-called index strategies, together with upper and lower bounds for the NPV. Some of the results are promising, and index strategies are very fast and scale well for large instances of mines. This encourages their use when one generates a large number of scenarios, in which case a fast planning simulation and NPV calculation is crucial. In the future, we expect to do more experimentation on larger case studies and other (more realistic) data sets, and to compare the results with others found in the literature. We have also introduced a general framework to deal with uncertainty and dynamical learning. We expect to implement this framework and to test it against real data.

*Acknowledgments. The authors thank the STIC-AmSud OVIMINE project for the financial support. This paper was presented at several OVIMINE meetings — 2011, March 17-18, Lima, Perú; 2011, September 2-19, Paris, France; 2011, October 6-8, Lima, Perú; 2012, January 9, Valparaíso, Chile; 2012, November 8, Santiago, Chile — and we thank the participants for their comments.

[Alvarez et al.(2011)Alvarez, Amaya, Griewank, and Strogies]AAG+11 Felipe Alvarez, Jorge Amaya, Andreas Griewank, and Nikolai Strogies. A continuous framework for open pit mine planning. Mathematical Methods of Operations Research, 73(1):29–54, 2011.
[Bellman(1957)]Bellman:1957 R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, N.J., 1957.
[Bertsekas(2000)]Bertsekas:2000 D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Belmont, Massachusetts, second edition, 2000. Volumes 1 and 2.
[Bienstock and Zuckerberg(2010)]BZ10 Daniel Bienstock and Mark Zuckerberg. Solving LP relaxations of large-scale precedence constrained problems. In IPCO, pages 1–14, 2010.
[Boland et al.(2008)Boland, Dumitrescu, and Froyland]BDF08 N. Boland, I. Dumitrescu, and G. Froyland. A multistage stochastic programming approach to open pit mining production scheduling with uncertain geology. Draft paper, 2008.
[Chicoisne et al.(2011)Chicoisne, Espinoza, Goycoolea, Moreno, and Rubio]CEG+11 Renaud Chicoisne, Daniel Espinoza, Marcos Goycoolea, Eduardo Moreno, and Enrique Rubio. A new algorithm for the open-pit mine scheduling problem. October 2011. Draft article.
[De Lara and Doyen(2008)]DeLara-Doyen:2008 M. De Lara and L. Doyen. Sustainable Management of Natural Resources. Mathematical Models and Methods. Springer-Verlag, Berlin, 2008.
[Denby and Schofield(1995)]DS95 B. Denby and D. Schofield. Inclusion of risk assessment in open pit design and scheduling. Transactions of the IMM, Section A: Mining Industry, 1995.
[Dimitrakopoulos and Ramazan(2004)]DR04 R. Dimitrakopoulos and S. Ramazan. Uncertainty-based production scheduling in open-pit mining. Transactions of the SME, 2004.
[Dimitrakopoulos and Ramazan(2008)]DR08 R. Dimitrakopoulos and S. Ramazan. Stochastic integer programming for optimizing long-term production schedules of open pit mines: Methods, application and value of stochastic solutions. Transactions of the IMM, Section A: Mining Industry, 2008.
[Dimitrakopoulos et al.(2007)Dimitrakopoulos, Martinez, and Ramazan]DMR07 R. Dimitrakopoulos, L. Martinez, and S. Ramazan. A maximum upside/minimum downside approach to the traditional optimization of open pit mine design. Journal of Mining Science, 2007.
[Dowd(1989)]Dowd89 P.A. Dowd. Some observations on confidence intervals and kriging errors. Geostatistics, Kluwer Academic Publishers, 1989.
[Föllmer and Schied(2002)]Follmer-Schied:2002 H. Föllmer and A. Schied. Stochastic Finance. An Introduction in Discrete Time. Walter de Gruyter, Berlin, 2002.
[Gershon(1987)]Ger1987 M. Gershon. Heuristic approaches for mine planning and production scheduling. Geotechnical and Geological Engineering, 5:1–13, 1987.
[Gittins(1979)]Gittins:1979 J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B, 41(2):148–177, 1979.
[Godoy and Dimitrakopoulos(2004)]GD04 M. Godoy and R. Dimitrakopoulos. Managing risk and waste mining in long-term production scheduling of open-pit mines. Transactions of the SME, 2004.
[Golamnejad et al.(2006)Golamnejad, Osanloo, and Karimi]GOK06 J. Golamnejad, M. Osanloo, and B. Karimi.
A chance-constrained programming approach for open pit long-term production scheduling in stochastic environments. The Journal of the South African Institute of Mining and Metallurgy, 2006.
[Goodwin et al.(2006)Goodwin, Seron, Middleton, Zhang, Hennessy, Stone, and Menabde]Goodwin-Seron-Middleton-Zhang-Hennessy-Stone-Menabde:2006 G.C. Goodwin, M.M. Seron, R.H. Middleton, M. Zhang, B.F. Hennessy, P.M. Stone, and M. Menabde. Receding horizon control applied to optimal mine planning. Automatica, 42(8):1337–1342, 2006.
[Hochbaum and Chen(2000)]HoCh00 Dorit Hochbaum and A. Chen. Performance analysis and best implementation of old and new algorithms for the open-pit mining problem. Operations Research, 48:894–914, 2000.
[Johnson(1968)]Joh68 T. B. Johnson. Optimum open-pit mine production scheduling. PhD thesis, 1968.
[Johnson(1969)]Joh69 T. B. Johnson. Optimum open-pit mine production scheduling. 1969.
[Journel(1983)]Jour83 A.G. Journel. Non-parametric estimation of spatial distributions. Journal of the International Association of Mathematical Geology, 1983.
[Krige(1984)]Krige84 D.G. Krige. Geostatistics and the definition of uncertainty. Transactions of the IMM, Section A: Mining Industry, 1984.
[Lajaunie(1990)]Laj90 C. Lajaunie. Comparing some approximate methods for building local confidence intervals for predicting regionalised variables. Mathematical Geology, 1990.
[Lerchs and Grossman(1965)]LeGr65 H. Lerchs and H. C. Grossman. Optimal design of open-pit mines. Transactions C.I.M., 58:47–54, 1965.
[Matheron(1975)]Mat75 Georges Matheron. Paramétrage de contours optimaux. Note Geostatistics, 128, 1975.
[Newman et al.(2010)Newman, Rubio, Caro, Weintraub, and Eurek]NRW+10 Alexandra M. Newman, Enrique Rubio, Rodrigo Caro, Andres Weintraub, and Kelly Eurek. A review of operations research in mine planning. Interfaces, 40(3):222–245, 2010.
[Ravenscroft(1992)]Raven92 P.J. Ravenscroft. Risk analysis for mine scheduling by stochastic simulation. Transactions of the IMM, Section A: Mining Industry, 1992.
[Sichel et al.(1995)Sichel, Dohm, and Kleingeld]Sichel95 H.S. Sichel, C.E. Dohm, and W.J. Kleingeld. New generalized model of observed ore value distributions. Transactions of the IMM, Section A: Mining Industry, 1995.
[Whittle(1982)]Whittle:1982 P. Whittle. Optimization over Time: Dynamic Programming and Stochastic Control, volume 1. John Wiley & Sons, New York, 1982.

§ INTEGER LINEAR PROGRAMMING FORMULATION OF THE OPEN-PIT BLOCK SCHEDULING PROBLEM In this appendix we introduce the relevant notation and formulation for the deterministic case of the open-pit block scheduling problem using binary linear programming.

§.§ Modeling and notation We consider the set 𝔅 of all blocks and N = |𝔅|. We denote the elements of 𝔅 (the blocks) with indices i, j, unless otherwise stated. Similarly, we consider T ∈ ℕ time-periods and denote individual time-periods with s, t = 1, 2, …, T. T is called the time horizon. We also use the notation 𝒯 ≜ {1, 2, …, T} for the set of time-periods. Slope constraints are modeled as precedence constraints and encoded as a set of arcs 𝒫 ⊂ 𝔅 × 𝔅, so (i,j) ∈ 𝒫 means that block j has to be extracted before block i. We say, in this case, that block j is a predecessor of block i, which in turn is a successor of j. Notice that arc (i,j) goes from the successor to the predecessor. In this work we address a simplified version of the problem in which the decision of the destination of the block is made beforehand.
This allows us to
* consider that the net profit (which can be negative) of processing block i is already known and denoted v_i ∈ ℝ, and
* define a set of resources ℛ, and for block i ∈ 𝔅 and resource r ∈ ℛ the quantity a(i,r) of resource r that is used when i is processed. For each time period t, lower and upper bounds on the consumption of resource r are given by the quantities C^−_{rt} ∈ {−∞} ∪ ℝ and C^+_{rt} ∈ {+∞} ∪ ℝ, respectively.
We also assume that the block is processed in the same time period in which it is extracted from the mine (that is, we do not allow stocking material for future processing). We also assume, as is usual in these models, that all block extraction, handling and processing is done within a time-period length. While the modeling can be easily extended to the general case, the heuristics presented in this article do not always extend to the case in which blending constraints apply; therefore, we assume there are no such constraints. Finally, Table <ref> summarizes the notation introduced in this appendix.

A block scheduling is a function τ : 𝔅 → {1, 2, …, T, ∞}, where τ(i) is the time-period in which block i is extracted; hence, a block scheduling must satisfy the precedence constraints, that is, if (i,j) ∈ 𝒫 then τ(i) ≥ τ(j). If τ is a block scheduling, then the preimage sets P_1 = τ^{−1}(1) and P_t = P_{t−1} ∪ τ^{−1}(t) for t > 1 are called pits. We observe that P_t ⊂ P_{t+1}, hence we say that the pits are nested. A block sequence is a tuple s = (s_1, s_2, …, s_K) ∈ 𝔅^K such that k ≠ ℓ ⇒ s_k ≠ s_ℓ (all blocks in the tuple are different) and that is compatible with the precedence constraints, that is, if (s_k, s_ℓ) = (i,j) ∈ 𝒫 then ℓ < k (predecessors appear earlier in the sequence).

§.§ The binary programming formulation The open-pit block scheduling problem is defined on the following variables. For each i ∈ 𝔅, t = 1, 2, …, T:

x_{it} = { 1 if block i is extracted by time-period t ; 0 otherwise. }

Notice that the interpretation of variable x_{it} is by time-period, that is, x_{it} = 1 if and only if block i has been extracted (and processed) at some period s with 1 ≤ s ≤ t. For this reason, it is also useful to introduce the following auxiliary variables for any i ∈ 𝔅: y_{i1} = x_{i1}, and y_{it} = x_{it} − x_{i,t−1} for t = 2, 3, …, T. We have that x_{it} = ∑_{s=1}^{t} y_{is}, and y_{it} = 1 if and only if block i is extracted exactly at time period t. The optimization program is the following:

(OPBSP) max ∑_{t=1}^{T} ρ^t ∑_{i=1}^{N} v_i y_{it}
s.t. x_{it} ≤ x_{jt} (∀ (i,j) ∈ 𝒫)(∀ t ∈ 𝒯)
x_{i,t−1} ≤ x_{it} (∀ i ∈ 𝔅)(∀ t = 2,…,T)
∑_i a(i,r) y_{it} ≤ C^+_{rt} (∀ r ∈ ℛ)(∀ t ∈ 𝒯)
∑_i a(i,r) y_{it} ≥ C^−_{rt} (∀ r ∈ ℛ)(∀ t ∈ 𝒯)
x_{it} ∈ {0,1} (∀ i ∈ 𝔅)(∀ t ∈ 𝒯).

Equation (<ref>) presents the goal function, which is the discounted value of extracted blocks over the time horizon T. Equation (<ref>) corresponds to the precedence constraints given by the slope angle. Equation (<ref>) states that blocks can be extracted only once. Finally, Equations (<ref>) and (<ref>) fix the resource consumption limits. For a block model 𝔅, precedence arcs 𝒫, block values v = (v_i)_{i∈𝔅} and attribute matrix A = (a(i,r))_{i∈𝔅, r∈ℛ}, we will use the notation (𝔅, 𝒫, v, A, T, ρ, C^+, C^−) to denote an instance of the open-pit block scheduling problem for a certain time horizon T, discount factor ρ, and resource limit matrices C^− = (C^−_{r,t})_{r,t} and C^+ = (C^+_{r,t})_{r,t}. We will omit some of the parameters if the context allows it.
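As an illustration, the sketch below writes (OPBSP) in the PuLP modelling library. This is a sketch under the assumption that PuLP and its default solver are available; any ILP modeller would do, all names are ours, and the lower resource bounds C^− are omitted for brevity.

from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

def solve_opbsp(B, P, v, a, C_plus, T, rho=1/1.1):
    prob = LpProblem("OPBSP", LpMaximize)
    # x[i][t] = 1 iff block i has been extracted by time period t+1.
    x = {i: [LpVariable(f"x_{i}_{t}", cat=LpBinary) for t in range(T)] for i in B}
    y = lambda i, t: x[i][t] - (x[i][t - 1] if t > 0 else 0)  # extracted exactly at t
    prob += lpSum(rho ** (t + 1) * v[i] * y(i, t) for i in B for t in range(T))
    for (i, j) in P:                       # precedence: j before i
        for t in range(T):
            prob += x[i][t] <= x[j][t]
    for i in B:                            # extraction is irreversible (nested pits)
        for t in range(1, T):
            prob += x[i][t - 1] <= x[i][t]
    for r, caps in C_plus.items():         # upper resource limits
        for t in range(T):
            prob += lpSum(a[i][r] * y(i, t) for i in B) <= caps[t]
    prob.solve()
    return {i: next((t + 1 for t in range(T) if x[i][t].varValue > 0.5), None)
            for i in B}

# Tiny instance: block "b" must precede "a"; capacity one block per period.
B = ["a", "b"]; P = [("a", "b")]
v = {"a": 3.0, "b": -1.0}
a_use = {"a": {"ton": 1.0}, "b": {"ton": 1.0}}
print(solve_opbsp(B, P, v, a_use, {"ton": [1.0, 1.0]}, T=2))  # {'a': 2, 'b': 1}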
http://arxiv.org/abs/1706.08264v1
{ "authors": [ "Michel De Lara", "Nelson Morales", "Nathanaël Beeker" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170626075139", "title": "Adaptive Strategies for The Open-Pit Mine Optimal Scheduling Problem" }
[email protected] ^1 Physics Department, University of Michigan, 450 Church Street, Ann Arbor, MI 48109-2122, USA ^2 Applied Physics Program, University of Michigan, 450 Church Street, Ann Arbor, MI 48109-1040, USA ^3 Microelectronics Research Center, Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas 78758, United States

Semiconductor microcavity polaritons, formed via strong exciton-photon coupling, provide a quantum many-body system on a chip, featuring rich physics phenomena for better photonic technology. However, conventional polariton cavities are bulky, difficult to integrate, and inflexible for mode control, especially for room temperature materials. Here we demonstrate sub-wavelength thick one-dimensional photonic crystals (PCs) as a designable, compact and practical platform for strong coupling with atomically thin van der Waals crystals (vdWCs). Polariton dispersions and mode anti-crossings are measured up to room temperature. Non-radiative decay to dark excitons was suppressed due to polariton enhancement of the radiative decay. Unusual features, including highly anisotropic dispersions and adjustable Fano resonances in reflectance, may facilitate high temperature polariton condensation in variable dimensions. Combining slab PCs and vdWCs in the strong coupling regime allows unprecedented engineering flexibility for exploring novel polariton phenomena and device concepts.

Photonic-Crystal Exciton-Polaritons in Monolayer Semiconductors Hui Deng^1,2 June 23, 2017 ===============================================================

§ INTRODUCTION Control of light-matter interactions is elementary to the development of photonic devices. Existing photonic technologies are based on weakly coupled matter-light systems, where the optical structure perturbatively modifies the electronic properties of the active media. As the matter-light interaction becomes stronger and no longer perturbative, light and matter couple to form hybrid quasi-particles – polaritons. In particular, quantum-well (QW) microcavity exciton polaritons feature simultaneously strong excitonic nonlinearity, robust photon-like coherence, and a meta-stable ground state, providing a fertile ground for quantum many-body physics phenomena <cit.> that promise new photonic technology <cit.>. Numerous novel types of many-body quantum states with polaritons and polariton quantum technologies have been conceived, such as topological polaritons <cit.>, polariton neurons <cit.>, non-classical state generators <cit.>, and quantum simulators <cit.>. Their implementation requires confined and coupled polariton systems with engineered properties, which, on the one hand, can be created by engineering the optical component of the strongly coupled modes, and, on the other hand, are difficult to implement experimentally using conventional polariton systems.

Conventional polariton systems are based on vertical Fabry-Pérot (FP) cavities made of thick stacks of planar, distributed Bragg reflectors (DBRs), which have no free design parameters for mode-engineering and are relatively rigid and bulky against post-processing. Different cavity structures have been challenging to implement for polariton systems, as conventional materials are sensitive to free surfaces and lattice mismatch with embedding crystals. The recently emerged two-dimensional (2D) semiconductor vdWCs <cit.> are uniquely compatible with diverse substrates without lattice matching <cit.>.
However, most studies of vdWC-polaritons so far continue to use FP cavities <cit.>, which are even more limiting for vdWCs than for conventional materials. This is because monolayer-thick vdWCs need to be sandwiched in between separately fabricated DBR stacks and positioned very close to the cavity-field maximum. The process is complex, hard to control, and may change or degrade the optical properties of vdWCs <cit.>. Alternatively, metal mirrors and plasmonic structures have been implemented <cit.>. They are more compact and flexible, but suffer from intrinsically large absorption loss and poor dipole-overlap between the exciton and field <cit.>.

Here we demonstrate sub-wavelength thick, one-dimensional dielectric PCs as a readily designable platform for strong-coupling, which is also ultra-compact, practical, and especially well suited to the atomically-thin vdWCs. Pristine vdWCs can be directly laid on top of the PC without further processing. Properties of the optical modes, and in turn the polariton modes, can be modified with different designs of the PC. We confirm polariton modes up to room temperature by measuring the polariton dispersions and mode anti-crossing in both reflectance and photoluminescence (PL) spectra. Strongly suppressed non-radiative decay to dark excitons due to the polaritonic enhancement was observed. We show that these polaritons have anisotropic polariton dispersions and adjustable reflectance, suggesting greater flexibility in controlling the excitations in the system and in reaching vdWC-polariton condensation at lower densities in variable dimensions. Extension to more elaborate PC designs and 2D PCs will facilitate research on polariton physics and devices beyond 2D condensates.

§ RESULTS §.§ The system We use two kinds of transition metal dichalcogenides (TMDs) as the active media: a monolayer of tungsten diselenide (WSe_2) or a monolayer of tungsten disulfide (WS_2). The monolayers are placed over a PC made of a silicon-nitride (SiN) grating, as illustrated in <ref>a. The total thickness of the grating t is around 100 nm, much less than half a wavelength, making the structure an attractive candidate for compact, integrated polaritonics. In comparison, typical dielectric FP cavity structures are many tens of wavelengths in size. A schematic and scanning electron microscopy (SEM) images of the TMD-PC polariton device are shown in <ref>. More details of the structure and its fabrication are described in Methods. Since the grating is anisotropic in-plane, its modes are sensitive to both the propagation and polarization directions of the field. As illustrated in <ref>a, we define the direction along the grating bars as the x-direction, across the grating bars as the y-direction, and perpendicular to the grating plane as the z-direction. For the polarization, along the grating corresponds to transverse-electric (TE), and across the bars, transverse-magnetic (TM). The TM-polarized modes are far off resonance with the exciton. Hence TM excitons remain in the weak coupling regime, which provides a direct reference for the energies of the uncoupled exciton mode. We focus on the TE-polarized PC modes in the main text and discuss the TM measurements in Supplementary Figure S1.

§.§ WSe_2-PC polaritons We first characterize a monolayer WSe_2-PC device at 10 K.
The energy-momentum mode structures are measured via angle-resolved micro-reflectance (<ref>a-b) and micro-PL (<ref>c) spectroscopy, in both the along-bar (top row) and across-bar (bottom row) directions. The data (left panels) are compared with numerical simulations (right panels), done with rigorous coupled wave analysis (RCWA). Without the monolayer, a clear and sharp PC mode is measured with a highly anisotropic dispersion (<ref>a, left panels) and is well reproduced by simulation (<ref>a, right panels). The broad low-reflectance band in the background is an FP resonance formed by the SiO_2 capping layer and the substrate. The PC mode half linewidth is γ_cav = 6.5 meV. This corresponds to a quality factor Q, or finesse, of about 270, much higher than most TMD-cavities <cit.> and comparable to the best DBR-DBR ones <cit.>. With a WSe_2 monolayer laid on top of the PC (<ref>c), two modes that anti-cross are clearly seen in both the reflectance and PL spectra (<ref>b-c) and match very well with simulations, suggesting strong coupling between the WSe_2 exciton and PC modes. Strong anisotropy of the dispersion is evident comparing E_LP,UP(k_x, k_y=0) (top row) and E_LP,UP(k_x=0, k_y) (bottom row), resulting from the anisotropic dispersion of the PC modes. Correspondingly, the effective mass and group velocity of the polaritons are also highly anisotropic, which provides new degrees of freedom to verify polariton condensation and to control its dynamics and transport properties <cit.>.

To confirm strong-coupling, we fit the measured dispersion with that of coupled modes, and we compare the coupling strength and Rabi splitting obtained from the fitting with the exciton and photon linewidths. In the strong coupling regime, the eigen-energies of the polariton modes E_LP,UP at a given in-plane wavenumber k_∥ and the corresponding vacuum Rabi splitting 2ħΩ are given by:

E_LP,UP = (1/2) [ E_exc + E_cav + i(γ_cav + γ_exc)/2 ] ± √( g² + (1/4) [ E_exc − E_cav + i(γ_cav − γ_exc) ]² ),

2ħΩ = 2 √( g² − (γ_cav − γ_exc(T))²/4 ).

Here E_exc is the exciton energy, γ_exc and γ_cav are the half-widths of the uncoupled exciton and PC resonances, respectively, and g is the exciton-photon coupling strength. A non-vanishing Rabi splitting 2ħΩ requires g > |γ_exc − γ_cav|/2; but this is insufficient for strong coupling. For the two resonances to be spectrally separable, the minimum mode-splitting needs to be greater than the sum of the half linewidths of the modes:

2ħΩ > γ_cav + γ_exc, or, g > √((γ_exc² + γ_cav²)/2).

In the frequency domain, (<ref>) corresponds to requiring coherent, reversible energy transfer between the exciton and photon mode. We first fit our measured PL spectra to obtain the mode dispersions E_LP,UP(k_{x,y}), as shown by the symbols in <ref>d. We then fit E_LP,UP(k_{x,y}) with (<ref>), with g and E_cav(k_{x,y}=0) as the only fitting parameters. The exciton energy E_exc and half-width γ_exc are measured from the TM-polarized exciton PL from the same device, while the wavenumber dependence of E_cav and γ_cav are measured from the reflectance spectrum of the bare PC (Supplementary Figure S2b). We obtain g = 8.9±0.23 meV and 7.5±0.87 meV for dispersions along k_x and k_y, respectively, corresponding to Rabi splittings of 2ħΩ ∼ 17.6 meV and 14.9 meV. In comparison, γ_exc = 5.7 meV and γ_cav = 3.25 meV. Therefore g is much greater than not only (γ_exc − γ_cav)/2 = 1.2 meV but also √((γ_exc² + γ_cav²)/2) = 4.6 meV, which confirms that the system is well into the strong coupling regime.
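The following sketch evaluates Eqs. (1)-(2) with the fitted WSe_2-PC half-widths quoted in the comparison above (all names are ours); it reproduces the ∼17.6 meV splitting at zero detuning.

import cmath

def polariton_branches(E_cav, E_exc, g, gamma_cav, gamma_exc):
    # Complex eigen-energies of Eq. (1); real parts give the LP/UP positions.
    mean = 0.5 * (E_exc + E_cav + 1j * (gamma_cav + gamma_exc) / 2)
    root = cmath.sqrt(g**2 + 0.25 * (E_exc - E_cav + 1j * (gamma_cav - gamma_exc))**2)
    return mean - root, mean + root

# WSe2-PC numbers quoted above, all in meV (E_exc = 1.742 eV = 1742 meV):
E_exc, g, gam_exc, gam_cav = 1742.0, 8.9, 5.7, 3.25
lp, up = polariton_branches(E_cav=E_exc, E_exc=E_exc, g=g,
                            gamma_cav=gam_cav, gamma_exc=gam_exc)
rabi = 2.0 * (g**2 - (gam_cav - gam_exc)**2 / 4) ** 0.5   # Eq. (2)
print(f"LP-UP splitting at zero detuning: {(up - lp).real:.1f} meV")
print(f"2*hbar*Omega from Eq. (2):        {rabi:.1f} meV")   # ~17.6 meV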
§.§ Temperature dependence of WSe_2-PC polaritons At elevated temperatures, increased phonon scattering leads to faster exciton dephasing, which drives the system into the weak-coupling regime. We characterize this transition through the temperature dependence of the WSe_2-PC system; we also show the effect of strong coupling on the exciton quantum yield. We measure independently the temperature dependence of the uncoupled excitons via TM exciton PL, the uncoupled PC modes via reflectance from the bare PC, and the coupled modes via PL from the WSe_2-PC device. We show in <ref>a the results obtained for k_x = 3.1 μm^{−1}, k_y = 0 μm^{−1} as an example. For the uncoupled excitons, with increasing T, the resonance energy E_exc(T) decreases due to bandgap reduction <cit.>, as shown in <ref>a, while the linewidth 2γ_exc broadens due to phonon dephasing <cit.>, as shown in <ref>b. Both results are very well fitted by models for conventional semiconductors (see more details in Methods). For the uncoupled PC modes, the energy E_cav = 1.74 eV and half-linewidth γ_cav = 6.5 meV change negligibly (Supplementary Figure S2c). The exciton and PC-photon resonances cross, as shown in <ref>a, at around 50 K. In contrast, the modes of the WSe_2-PC device anti-cross between 10-100 K and clearly split from the uncoupled modes, suggesting strong-coupling up to 100 K. Above 130 K, it becomes difficult to distinguish the modes of the WSe_2-PC device from the uncoupled exciton and photon modes, suggesting the transition to the weak-coupling regime.

We compare quantitatively in <ref>b the coupling strength g with √((γ_exc(T)² + γ_cav²)/2) and (γ_exc − γ_cav)/2 to check the criterion given in <ref>. The strong coupling regime persists up to about 110 K, above which, due to the increase of the exciton linewidth, g(T) drops below √((γ_exc(T)² + γ_cav²)/2) and the system transitions to the weak-coupling regime, which corresponds well to the existence/disappearance of mode-splitting in <ref> below/above 110 K. On the other hand, g > (γ_exc − γ_cav)/2 is maintained up to about 185 K. Between 110 K and 185 K, coherent polariton modes are no longer supported in the structure, but mode-splitting remains in the reflectance spectrum (Supplementary Figure S3).

Importantly, the temperature dependence of the polariton PL intensity reveals that strong coupling enables significant enhancement of the quantum yield of WSe_2 at low temperatures. It has been shown that the quantum yield of the bright excitonic states is strongly suppressed, by 10-100 fold, in bare WSe_2 monolayers due to relaxation to dark excitons lying at lower energies than the bright excitons <cit.>. In contrast, the WSe_2-PC polariton intensity decreases by less than two-fold from 200 K to 10 K. This is because coupling with the PC greatly enhances the radiative decay of the WSe_2 exciton-polariton states in comparison with scattering to the dark exciton states, effectively improving the quantum yield of the bright excitons.
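As a numerical illustration, the sketch below evaluates the strong-coupling criterion against the exciton linewidth model fitted in Methods. Treating the fitted γ_0 and c_2 (full widths) divided by two as half-widths is our simplifying assumption; with it the modelled crossover lands near 140 K, in qualitative agreement with the measured ∼110 K, given the half-/full-width ambiguity and the neglected temperature dependence of g.

import math

K_B = 8.617e-2          # Boltzmann constant, meV/K
HBAR_OMEGA = 15.0       # average phonon energy in monolayer TMDs, meV

def gamma_exc(T):
    # Exciton half-width vs temperature from the Methods model
    # gamma(T) = gamma_0 + c_1*T + c_2 / (exp(hbar*omega/kT) - 1),
    # with fitted full widths gamma_0 = 11.6 meV, c_2 = 25.52 meV, c_1 ~ 0,
    # divided by two (our assumption).
    return 0.5 * (11.6 + 25.52 / (math.exp(HBAR_OMEGA / (K_B * T)) - 1.0))

g, gamma_cav = 8.9, 6.5   # coupling strength and PC half-linewidth, meV
for T in range(10, 201, 10):
    threshold = math.sqrt((gamma_exc(T)**2 + gamma_cav**2) / 2)
    if g <= threshold:
        print(f"model crossover to weak coupling near T = {T} K")
        break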
§.§ Room temperature WS_2-PC polaritons To form exciton-polaritons at room temperature, we use WS_2 because of its large oscillator-strength to linewidth ratio at 300 K compared to WSe_2 (Supplementary Figure S4). We use a 1D PC that matches the resonance of the WS_2 exciton at 300 K. The angle-resolved reflectance spectrum from the bare PC again shows a clear, sharp dispersion (<ref>a). The broadband background pattern is due to the FP resonance of the substrate. With a monolayer of WS_2 placed on top, anti-crossing LP and UP branches form, as clearly seen in both the reflectance and PL spectra (<ref>b-c). The data (left panels) are in excellent agreement with the simulated results (right panels). The dispersions measured from PL fit very well with the coupled oscillator model in <ref>, from which we obtain an exciton-photon interaction strength g = 12.4±0.36 meV, well above γ_exc = 11 meV, γ_cav = 4.5 meV, and √((γ_exc² + γ_cav²)/2) = 8.4 meV. The Rabi splitting is 2ħΩ = 22.2 meV.

§.§ Adjustable reflectance spectra with Fano resonances Lastly, we look into two unconventional properties of the reflectance of the TMD-PC polariton systems: an adjustable reflectance background, and highly asymmetric Fano resonances. As shown in <ref> and <ref>, a broadband background exists in the reflectance spectra of both the WSe_2-PC and WS_2-PC polariton systems, arising from the FP resonances of the substrate. The height and width of this broadband background are readily adjusted by the thickness of the SiO_2 spacer layer, uncorrelated with the quality factor of the PC modes or the lifetime of the polaritons. For example, the WSe_2-PC polaritons are in the low-reflectance region of the FP bands (<ref>a), while the WS_2-PC polaritons are in the high-reflectance region (<ref>b). In contrast, in conventional FP cavities, a high cavity quality factor dictates that the polariton modes lie inside a broad high-reflectance stop-band, making it difficult to excite or probe the polariton systems at wavelengths within the stop-band. The adjustability of the reflectance in PC-polariton systems will allow much more flexible access to the polariton modes and facilitate the realization of polariton lasers, switches and other polariton nonlinear devices.

Another feature is the asymmetric Fano line shape of the PC and PC-polariton modes in the reflectance spectra (<ref>). The Fano resonance arises from coupling between the sharp, discrete PC or PC-polariton modes and the continuum of free-space radiation modes intrinsic to the 2D-slab structure <cit.>. Such Fano line shapes are readily tuned by varying the phase difference between the discrete mode and the continuum band. For example, the PC and WSe_2-PC polariton modes located at the valley of the FP band (<ref>a) have a nearly-symmetric, Lorentzian-like line shape, while the PC and WS_2-PC polariton modes at the peak of the FP band feature a very sharp, asymmetric Fano line shape (<ref>b). This is because of the π phase difference between the peak and valley of the FP bands. We compare the measured spectra with the standard Fano line shape described by:

R = R_F [ (ϵ + q)² / (ϵ² + 1) − 1 ] + R_FP + I_b.

The first term describes the Fano resonance, where R_F is the amplitude coefficient, q is the asymmetry factor, ϵ = ħ(ω − ω_0)/γ_0 is the reduced energy, and ħω_0 and γ_0 are the resonant energy and half linewidth of the discrete mode. R_FP(ω) and I_b are the FP background reflectance and a constant ambient background, respectively. We use the transfer matrix method to calculate R_FP, then fit our data to <ref> to determine the Fano parameters. For the WSe_2-PC spectrum, we obtain q_cav = 5.0, q_LP = 3.5, q_UP = 4.1 for the PC, LP and UP modes, respectively. The large values of q indicate small degrees of asymmetry and line shapes close to Lorentzian, as seen in <ref>a. For the WS_2 device, we obtain q_cav = 1.16, q_LP = 0.92, and q_UP = 1.37, which are close to 1. This corresponds to a much more asymmetric line shape with a sharp Fano feature, as seen in <ref>b. We note that, despite the striking Fano resonance in reflectance, strong coupling takes place only between the exciton and the sharp, discrete, tightly confined PC modes. This is evident from the symmetric line shape of the WS_2-polariton PL spectra (<ref>d). Fano resonances with polariton states as the discrete modes will enable control of the Fano line shape by angle and detuning <cit.>.
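A short sketch of Eq. (3) contrasts the two regimes of the asymmetry factor q quoted above. All names are ours, and the amplitudes R_F, R_FP are held as arbitrary constants here, whereas in the actual fits R_FP comes from the transfer matrix method.

def fano_reflectance(E, E0, gamma0, q, R_F=0.02, R_FP=0.4, I_b=0.0):
    # Eq. (3): Fano term on top of a (here constant) FP background.
    eps = (E - E0) / gamma0          # reduced energy
    return R_F * ((eps + q)**2 / (eps**2 + 1.0) - 1.0) + R_FP + I_b

for label, q in (("WSe2 PC mode (q = 5.00):", 5.0), ("WS2 PC mode (q = 1.16):", 1.16)):
    shape = [round(fano_reflectance(E, 0.0, 1.0, q), 3) for E in (-3, -1, 0, 1, 3)]
    print(label, shape)   # near-Lorentzian peak vs sharp dip-peak asymmetry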
§ DISCUSSION In short, we demonstrate integration of two of the most compact and versatile systems – atomically thin vdWCs as the active media and PCs of deep sub-wavelength thickness as the optical structure – to form an ultra-compact and designable polariton system. TMD-PC polaritons were observed in monolayer WS_2 at room temperature and in WSe_2 up to 110 K, which are the highest temperatures reported for unambiguous determination of strong-coupling for each type of TMD, respectively. The TMD-PC polaritons feature highly anisotropic energy-momentum dispersions, adjustable reflectance with sharp Fano resonances, and strong suppression of non-radiative loss to dark excitons. These features will facilitate the control and optimization of polariton dynamics for nonlinear polariton phenomena and applications, such as polariton amplifiers <cit.>, lasers <cit.>, switches <cit.> and sensors <cit.>.

The demonstrated quasi-2D TMD-PC polariton system is readily extended to 0D, 1D and coupled arrays of polaritons <cit.>. The 1D PC already has many design parameters for mode-engineering; it can be extended to 2D PCs for even greater flexibility, such as different polarization selectivity <cit.> for controlling the spin-valley degree of freedom <cit.>. The TMDs can be substituted by and integrated with other types of atomically-thin crystals, including black phosphorus for wide band-gap tunability <cit.>, graphene for electrical control <cit.>, and hexagonal boron-nitride for field enhancement. PCs feature unmatched flexibility in optical-mode engineering, while vdWCs allow unprecedented flexibility in integration with other materials, structures, and electrical controls <cit.>. Combining the two in the strong coupling regime opens a door to novel polariton quantum many-body phenomena and device applications <cit.>.

§ METHODS Sample fabrication. The devices shown in <ref> were made from a SiN layer grown by low pressure chemical vapor deposition on a SiO_2-capped Si substrate. The SiN layer was partially etched to form a 1D grating, which together with the remaining SiN slab supports the desired PC modes. The grating was created via electron beam lithography followed by plasma dry etching. Monolayer TMDs were prepared by mechanical exfoliation from bulk crystals from 2D Semiconductors and transferred to the grating using polydimethylsiloxane (PDMS). For the WSe_2 device, the grating parameters are: Λ = 468 nm, η = 0.88, t = 113 nm, h = 60 nm, d = 1475 nm. For the WS_2 device, the grating parameters are: Λ = 413 nm, η = 0.83, t = 78 nm, h = 40 nm, d = 2000 nm. Optical measurements. Reflection and PL measurements were carried out by real-space and Fourier-space imaging of the device. An objective lens with numerical aperture (N.A.) of 0.55 was used for both focusing and collection. For reflection, white light from a tungsten halogen lamp was focused on the sample to a beam size of 15 μm in diameter.
For PL, a HeNe laser (633 nm) and a continuous-wave solid-state laser (532 nm) were used to excite the monolayer WSe_2 and WS_2, respectively, both with 1.5 mW power and a 2 μm focused beam size. The collected signals were polarization resolved by a linear polarizer and then detected by a Princeton Instruments spectrometer with a cooled charge-coupled camera. RCWA simulation. Simulations were carried out using an open-source implementation of RCWA developed by Pavel Kwiecien to calculate the electric-field distribution of the PC modes, as well as the reflection and absorption spectra of the device as a function of momentum and energy. The indices of refraction of SiO_2 and SiN were obtained from ellipsometry measurements as n_SiO2 = 1.45 + 0.0053/λ² and n_SiN = 2.0 + 0.013/λ², where λ is the wavelength in units of μm. The WSe_2 and WS_2 monolayers were modelled with a thickness of 0.7 nm, and the in-plane permittivities were given by a Lorentz oscillator model:

ϵ(E) = ϵ_B + f / (E_x² − E² − iΓE).

For WSe_2, we used an oscillator strength f_WSe_2 = 0.7 eV² to reproduce the Rabi splitting observed in experiments, exciton resonance E_WSe_2 = 1.742 eV and full linewidth Γ_WSe_2 = 11.4 meV based on TM exciton PL, and background permittivity ϵ_B,WSe_2 = 25 <cit.>. Likewise, for WS_2, we used f_WS_2 = 1.85 eV², E_WS_2 = 2.013 eV and Γ_WS_2 = 22 meV measured from a bare monolayer, and ϵ_B,WS_2 = 16 <cit.>.

Modeling the temperature dependence of the WSe_2 exciton energy and linewidth. The exciton resonance energies redshift with increasing temperature, as shown in <ref>a. This is described by the standard temperature dependence of semiconductor bandgaps <cit.> as follows:

E_g(T) = E_g(0) − S ħω [ coth(ħω/2kT) − 1 ].

Here E_g(0) is the exciton resonance energy at T = 0 K, S is a dimensionless coupling constant, and ħω is the average phonon energy, which is about 15 meV in monolayer TMDs <cit.>. The fitted parameters are: E_g(0) = 1.741 eV and S = 2.2, which agree with reported results <cit.>. The exciton linewidth γ_exc as a function of temperature can be described by the following model <cit.>:

γ_exc = γ_0 + c_1 T + c_2 / (e^{ħω/kT} − 1).

Here γ_0 is the linewidth at 0 K, the term linear in T depicts the intravalley scattering by acoustic phonons, and the third term describes the intervalley scattering and relaxation to the dark state through optical and acoustic phonons <cit.>. The average phonon energy is ħω = 15 meV. The fitted parameters are: γ_0 = 11.6 meV, c_2 = 25.52 meV, and c_1 is negligibly small in our case <cit.>.

§ ACKNOWLEDGMENT LZ, RG and HD acknowledge the support by the Army Research Office under Award W911NF-17-1-0312 and the Air Force Office of Scientific Research under Award FA9550-15-1-0240. WB and ET acknowledge the support by National Science Foundation Grant EECS-1610008. The fabrication of the PC was performed in the Lurie Nanofabrication Facility (LNF) at Michigan, which is part of the NSF NNIN network.
http://arxiv.org/abs/1706.08464v3
{ "authors": [ "Long Zhang", "Rahul Gogna", "Will Burg", "Emanuel Tutuc", "Hui Deng" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170626163534", "title": "Photonic-Crystal Exciton-Polaritons in Monolayer Semiconductors" }
Department of Physics, Technion – Israel Institute of Technology, Haifa 32000, Israel; [email protected]; [email protected]

By following the evolution of several observed exoplanetary systems, we show that by lowering the mass loss rate of single solar-like stars during their two giant branches, these stars will swallow their planets at the tip of their asymptotic giant branch (AGB) phase. This will most likely lead the stars to form elliptical planetary nebulae (PNe). Under the traditional mass loss rate these stars will hardly form observable PNe. Stars with a lower mass loss rate as we propose, about 15 per cent of the traditional mass loss rate of single stars, leave the AGB with much higher luminosities than what traditional evolution produces. Hence, the assumed lower mass loss rate might also account for the presence of bright PNe in old stellar populations. We present the evolution of four exoplanetary systems that represent stellar masses in the range of 0.9-1.3 M_⊙. The justification for this low mass loss rate is our assumption that the stellar samples that were used to derive the traditional average single-star mass loss rate were contaminated by stars that suffer binary interaction. stars: mass loss – stars: AGB and post-AGB – binaries: close – (stars:) planetary systems

§ INTRODUCTION §.§ Planetary Nebulae Observations and their interpretation strongly support the notion that most planetary nebulae (PNe) are shaped by a stellar companion that strongly interacts with the asymptotic giant branch (AGB) progenitor (e.g., limiting the list to some recent papers). PN catalogs (e.g., ) show that most PNe are not spherical, but rather bipolar or elliptical, suggesting that most AGB stars that formed PNe bright enough to be observed have interacted with a companion. For more than 20 years theoretical studies have been suggesting that planets can also shape many PNe (e.g., ). Yet there is a problem for planets to shape PNe. For a planet to influence the mass loss from an AGB star, the envelope mass cannot be too large. In addition, the planet should interact with the AGB star on the upper AGB. These imply an envelope mass on the upper AGB of at most several × 0.1 M_⊙. If the star mass is not too low, with a zero age main sequence (ZAMS) mass of M_1,ZAMS ≳ 1.4 M_⊙, the orbit of the planet substantially increases by the time the star loses about a third of its mass, ≈ 0.5 M_⊙. Tidal interaction can bring it closer to the star, but one should make sure the planet is not engulfed during the red giant branch (RGB) phase (e.g., ). If the star is a low-mass star, M_1,ZAMS ≲ 1.4 M_⊙, its radius on the RGB is not much smaller than that on the AGB, and if the planet is not engulfed on the RGB, it will also not be engulfed on the AGB. The maximum radius from which tidal forces cause a companion to spiral in to the envelope of the giant star is called the tidal maximum capture radius. <cit.> conduct a thorough study of the tidal maximum capture radius of RGB and AGB stars with initial masses of 0.8 M_⊙ < M_1,ZAMS ≲ 4 M_⊙. We will not repeat their study, but rather point out the consequences of a much lower mass loss rate of RGB and AGB low mass primary stars.
We will concentrate on the majority of PNe that are formed by post-AGB evolution, because one of our goals is to explain the brightest PNe in old populations. These PNe must come from post-AGB evolution. However, we do note the possibility of post-RGB PNe, e.g., the Boomerang Nebula <cit.>. <cit.> study possible post-RGB PNe, and list five candidates. They note that though the fraction of post-RGB PNe is estimated to be very low among all PNe, it might possibly be higher than present estimates.

We are most interested in planets shaping PNe with an initial progenitor mass in the range of M_1,ZAMS ≃ 0.9-1.3 M_⊙. For these stars the companion, whether a star, a brown dwarf, or a planet, also has the role of increasing the mass loss rate and expediting the evolution on the final AGB and the post-AGB phases. This enhanced mass loss rate is required for the PN to be bright enough to be observed <cit.>. But with the commonly used mass loss rate these stars reach an RGB radius that is not much below, and sometimes even larger than, their maximum radius on the AGB. If, on the other hand, the low-mass star does not interact with a companion on the RGB, it will most likely not interact on the AGB. This scenario will vary if the mass loss rate on the RGB and AGB is much lower than what is usually used in stellar evolution codes, since the star then reaches a much larger radius on the AGB. Our goal is to explore the implications of a much lower mass loss rate on the giant branches of low mass stars on such systems.

§.§ Mass loss

A stellar companion at a close orbit, or inside the envelope, deposits angular momentum and energy to the envelope of a giant star and by that increases the mass loss rate. Low mass RGB and AGB stars are influenced not only by stellar, but also by sub-stellar (i.e., brown dwarfs and planets) companions that can deposit a substantial amount of angular momentum to the envelope when they are swallowed by the expanding giant star (e.g., <cit.>). In those cases deposition of angular momentum is more significant than deposition of energy, e.g., for the operation of a dynamo in the envelope of the giant star (e.g., <cit.>). <cit.> summarized the processes by which a planet can influence the mass loss rate and geometry from RGB and AGB stars. The main process is enhanced dust formation on the surface, that in turn facilitates the usage of the stellar luminosity to remove mass. Since most stars with ZAMS mass of M_ZAMS ≳ 1 M_⊙ have close companions, stellar or sub-stellar (e.g., <cit.>), most stars suffer strong binary interaction before they turn into a white dwarf (WD). In many cases, in particular when the companion is of low mass and is swallowed by the star, the companion will not survive the evolution, and will leave a single WD. We therefore conclude that most giant stars for which the mass loss rate has been determined, whether directly or statistically from the relation between the ZAMS mass and the WD mass, suffered strong binary interaction. In other words, the mass loss rate formulae of RGB and AGB stars include a substantial component of binary interaction.

We here take the view that the mass loss rate on the RGB and AGB of stars that did not suffer any binary interaction is much lower than what traditional fitting formulae suggest. We are aware that this is a speculative view, but nonetheless, because of the far reaching implications of this assumption, we explore one of its consequences. We further discuss the reduced mass loss rate in section <ref>.
<cit.> study the influence of the mass loss rate on the fate of close planets. We differ from them in three significant aspects. Firstly, they do not follow the system to the AGB phase. Secondly, they lower the mass loss rate to 40 per cent of the regular mass loss rate assumed (η=0.2). We, based on our claim that the determination of the mass loss rate on the RGB and AGB is contaminated with many stars that suffer binary interaction with a stellar or a substellar object, explore the possibility that the mass loss rate might be as low as around 15 per cent of the commonly used value. Thirdly, we concentrate on lower mass stars, M_ZAMS ≃ 0.9-1.3 M_⊙, while <cit.> study the range of M_ZAMS = 1.5-2 M_⊙. These differences are significant as we find qualitatively very different results for the two mass loss rates (commonly used mass loss rate and lowered mass loss rate).

§.§ The fate of known exoplanets

The interaction of planets with stars evolving off the main sequence has been the focus of many studies for over two decades (e.g., <cit.>, to list a small sample of all relevant papers). <cit.> are different from most studies in that they target about 300 known exoplanets. Their figure 4 very nicely summarizes their results. They take the commonly used mass loss rate. Eventually, a follow-up study of our present preliminary study will have to repeat the calculations of <cit.> but using a lower mass loss rate. We do note that if there are two planets and the inner one is swallowed by the giant, then the mass loss rate increases, hence increasing the survivability of the planets farther out. We will not repeat the calculations that have been performed in those papers, but use some of the results. In particular, because of several uncertainties in the tidal capturing process, and our claim here for uncertainties in the mass loss rate, there is no need in the present preliminary suggestion of the low mass loss rate to perform an accurate integration of the equations of motion during the tidal capture process.

There are uncertainties in the tidal efficiency itself and there is much debate on the exact mechanism and resulting capture radius. <cit.> integrate the tidal equations of <cit.> and find that low mass AGB stars with an initial mass in the range of 0.8-4 M_⊙ capture companions that orbit at between 1 and 4 times the maximum giant radius. <cit.> study the planet orbit along the RGB phase of stars in the mass range of 1-5 M_⊙ with a <cit.> mass loss efficiency parameter η=0.6, and find that under their assumptions a 1 M_J planet companion, where M_J is the mass of Jupiter, will be engulfed by a 1 M_⊙ star at a distance of a < 3 AU. <cit.> follow the thermal pulses along the AGB phase of low mass stars and their effect on a planet companion. They find that for a 1 M_⊙ primary, tidal forces, calculated with the <cit.> formalism, are strong enough to pull in giant planets at an orbital separation of 3 AU.

Here, we crudely estimate from the results of <cit.> and of <cit.> that a planet will be tidally captured if the planet orbital separation when the star is on the main sequence is

a_i < a_cap ≃ 3.1 R_∗,max (m_p / 0.002 M_1)^1/8,

where R_∗,max is the maximum radius of the RGB or AGB star (not including pulsations), M_1 is the mass of the giant star, and m_p is the mass of the planet or brown dwarf. The inequality (<ref>) is basically equation (6) of <cit.>, which is calibrated for the case of a planet mass equal to 2 M_J, interacting with a ≈ 1 M_⊙ RGB or AGB star. For eccentric orbits the value of a_cap becomes even larger.
The value of the capture orbital separation a_cap also depends weakly on other parameters (see the relevant references), but these are not significant for our study.

§ REDUCED MASS LOSS RATE

In this study we will not discuss the mass loss mechanisms that are the topics of many reviews (e.g., <cit.>). We here only apply our assumption that the mass loss rate of giant stars that did not suffer any interaction with a massive planet, a brown dwarf, or a stellar companion, is very low. For that we will consider the Reimers empirical mass loss rate for red giant stars <cit.>, which can be written as

Ṁ = η × 4×10^-13 L M^-1 R,

where the stellar luminosity L, mass M, and radius R are in solar units, and η is the mass loss rate efficiency parameter that is determined from observations. <cit.> conduct a detailed and thorough study on the value of the mass loss rate on the RGB in globular clusters. They use the horizontal branch (HB) morphology to deduce the value of η, and find a median value of η=0.477. Below we will use η=0.5 to follow the evolution of stars under the commonly assumed mass loss rate (e.g., <cit.>). We study the effects of a reduced mass loss rate efficiency parameter for stars that did not suffer any interaction with a companion and present the results for a representative value of η=0.07.

We do not see a contradiction between our assumption and the results of <cit.> for the following reasons. (1) We agree that the typical value of η for solar type stars is around ≈ 0.5. Yet, we claim that this typical value includes many RGB and AGB stars that suffer interaction with a companion, be it stellar or sub-stellar. (2) Their mass loss formula with η ≃ 0.5 cannot cover the entire stellar population on the HB. In particular it cannot cover bluer HB stars. Indeed, already <cit.> noted that a range of values of η is required to produce the population of stars on the HB (unless helium abundance accounts for it). (3) There are observations that the study of <cit.> cannot account for, in particular some bright AGB stars, with luminosities up to 5000 L_⊙. It is exactly those bright AGB and post-AGB stars of low mass progenitors that our proposed low mass loss rate intends to explain, as well as the shaping of PNe by planets.

We justify our usage of a low mass loss rate for giant stars that suffer no binary interaction by restating our basic claim as follows. The samples of giant, or post-giant, stars that have been used to deduce the semi-empirical mass loss rate formulae in different studies (in open clusters, in globular clusters, in the field) are substantially contaminated with stars that did suffer interaction with stellar and sub-stellar objects. These companions enhance the mass loss rate by an appreciable factor. In most cases of claimed single-star evolution, the companion that enhanced the mass loss rate did not survive the binary evolution, e.g., it was tidally destroyed or it merged with the core of the giant star.

We start by studying stellar models of low mass stars with ZAMS mass M_1,ZAMS ≲ 1.3 M_⊙, and for each compare the evolution with the commonly used mass loss to the evolution with reduced mass loss. For the commonly used mass loss rate we take the typical value of the Reimers parameter for solar type stars, η ≈ 0.5, as mentioned above. For the reduced mass loss rate we present the evolution with a representative case of a much lower mass loss rate of η=0.07, both for the RGB and the AGB.
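To make the two ingredients above concrete, the following short sketch (not part of our MESA computation) evaluates the Reimers rate for the commonly used value η=0.5 and for the reduced value η=0.07, together with the approximate tidal capture radius of the previous section. The stellar values L, M, and R are illustrative placeholders for an upper AGB star, not numbers taken from our models.

```python
# A minimal numerical sketch: it evaluates the Reimers rate defined above for
# eta = 0.5 and eta = 0.07, and the approximate tidal capture condition from
# the introduction.  The stellar values L, M, R below are illustrative
# placeholders for an upper-AGB star, not numbers from our tables.

def reimers_mdot(eta, L, M, R):
    """Reimers mass loss rate in Msun/yr; L, M, R in solar units."""
    return eta * 4.0e-13 * L * R / M

def a_cap(R_max, m_p, M_1):
    """Approximate tidal capture radius: 3.1 R_max (m_p / 0.002 M_1)^(1/8).

    R_max in any length unit (output in the same unit); m_p, M_1 in Msun.
    """
    return 3.1 * R_max * (m_p / (0.002 * M_1)) ** 0.125

L, M, R = 5000.0, 0.6, 300.0   # assumed upper-AGB luminosity, mass, radius
for eta in (0.5, 0.07):
    print(f"eta = {eta}: Mdot = {reimers_mdot(eta, L, M, R):.1e} Msun/yr")

# Capture radius for a Jupiter-mass planet (m_p = 0.001 Msun) around M_1 = 1 Msun:
print(f"a_cap = {a_cap(300.0, 0.001, 1.0):.0f} Rsun")
```

Since Ṁ scales linearly with η, the reduced prescription simply gives 14 per cent of the commonly used rate for any combination of L, M, and R; the sketch only makes the absolute scale explicit.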
In section <ref> we explain the reason for presenting the case of η=0.07 among the several different values that we have studied (see full results in the Appendix). The notion of a much lower mass loss rate is not new. <cit.>, for example, find that for the old metal-rich cluster NGC 6791 the red giant mass loss rate should be lower than typically assumed, with an efficiency parameter that might be as low as η=0.1 and up to 0.3. For NGC 6819 they find that the RGB efficiency parameter could be very low, yet as it is a young cluster the constraints on η are less compelling. We differ in that we conduct a systematic comparison, and attribute the low mass loss rate to stars that suffered no interaction with a companion.

§ THE SAMPLE OF EXOPLANETS

To study the effect of a low mass loss rate of low mass stars, M_1,ZAMS ≲ 1.3 M_⊙, on the fate of planets we perform stellar evolution simulations of a sample of observed exoplanetary systems. We take the systems from The Extrasolar Planets Encyclopaedia, <http://exoplanet.eu/catalog/> <cit.>, according to the following criteria. The star mass is in a mass range of M_1,ZAMS ≃ 0.9-1.3 M_⊙ with metallicity of about solar, Z=0.02, the planet mass is in a mass range of m_p ≃ 1-10 M_J, with low eccentricity, and a semi-major axis of a_i ≃ 2-5 AU. In Table <ref> we list the four exoplanetary systems that we present here. We present these four systems because the masses of their stars span the mass range of interest, M_1,ZAMS ≃ 0.9-1.3 M_⊙, and the orbital separations of their planets demonstrate the effects we study. We point out that these four systems are not unique, and that there are other exoplanetary systems that are compatible with our criteria (e.g., HD 72659 b, HD 108874 c, HD 222155 b, and more), but they will not shed new light on the phenomena we study.

§ EVOLUTION

We use the stellar evolution code MESA (Modules for Experiments in Stellar Astrophysics), version 9575 <cit.>. For each of the stars in our sample (see section <ref>) we calculate the stellar evolution from ZAMS until the formation of a WD and study the differences between two evolutionary paths, one with the regular and one with the low mass loss rate as described in section <ref>. The Reimers mass loss prescription is taken for the RGB and the prescription of <cit.> is taken for the AGB. We are interested in determining the fate of the planet, namely whether the star will swallow the planet on the RGB or on the AGB, or not at all.

We studied the evolution with six different values of the mass loss rate parameter, η=0.5, 0.35, 0.25, 0.15, 0.07, and 0.05. We found that the processes we study here, of planets interacting with AGB stars and bright post-AGB stars, are obtained in most cases for η ≲ 0.15. For example, already for η=0.15 the maximum radius on the AGB is significantly larger than that on the RGB. To clearly demonstrate these processes, however, we chose to present here the results for a representative value of η=0.07. We somewhat arbitrarily chose this value as it falls in the relevant range, 0.05 ≲ η ≲ 0.15, where the effects of a reduced mass loss rate are more pronounced than for higher values of η > 0.1. We compare the η=0.07 evolution with the commonly used η=0.5 value. In the Appendix we compare the evolution of the six different values of η and present the full results.

In Fig.
<ref> we present the stellar radius during the post-main sequence evolution, R_∗, and the ratio of the stellar radius to the orbital separation (semi-major axis) of the planet, for the four exoplanetary systems that we list in Table <ref>. The solid lines in all panels are for the evolution with the commonly used mass loss rate, η=0.5 in equation (<ref>), while the dashed lines depict the evolution with a reduced mass loss rate with a representative efficiency parameter of η=0.07. In calculating the evolution of the orbital separation a we consider only the mass loss process, and do not include tidal forces (as we explained in section <ref>).

Let us apply the approximate expression for the tidal maximum capture radius (eq. <ref>) to our results. We can express the capture condition as R_∗,max/a_i ≳ 0.32, with large uncertainties. Since the orbital separation increases only slightly before the low mass star reaches the upper AGB, this condition can be written also for the evolving orbital separation as R_∗,max/a ≳ 0.3, again with large uncertainties.

It is evident from our results that for the case of evolution with the commonly used mass loss rate, solid lines in Fig. <ref>, our sample stars will not (for M_i=0.9 M_⊙), or only marginally, if at all, engulf their planet companions during the RGB phase. Moreover, if the planet is not engulfed during the RGB phase, it will not be engulfed during the AGB phase, or barely so for the M_i=1.29 M_⊙ case. In other words, for the commonly used mass loss rate the range of orbital separations for which planets can be engulfed during the AGB phase of low mass stars is tiny or does not exist. If the planet is engulfed on the RGB it will increase the mass loss rate, and the star will not reach the AGB at all, or only the lower part of the AGB. No PN will be formed. If the planet is not engulfed at all, then it is most likely that the star will form a spherical and very faint PN (e.g., <cit.>).

The fate of the systems in the cases with the low mass loss rate can be very different. In these cases the maximum stellar radius on the AGB becomes much larger than that on the RGB. There is a relatively significant range of initial orbital separations for which the capture condition R_∗,max/a ≳ 0.3 does not hold on the RGB (or only marginally so), while it is fulfilled on the AGB. This implies that the planet will be engulfed while the star is on the upper AGB. The engulfed planet will enhance the mass loss rate and make the nebula denser and elliptical. Overall, this evolution can lead to an observed PN.

In Table <ref> we present some properties of the exoplanetary systems that we evolved. We note that the lower mass loss rate also implies that the luminosity of the central star is larger than with the commonly used mass loss rate (column 9). It is therefore possible that some of the PNe will be [OIII] bright, even though they result from stars of only M_i ≃ 1-1.2 M_⊙. This might contribute to the explanation of why some bright PNe are observed even in old stellar populations (e.g., <cit.>).

To further emphasize the difference between the commonly used and the low mass loss rates, we examine the operation of the tidal forces. The rate of spiraling-in due to tidal forces depends on several properties of the star, but for a given orbital separation it mainly depends on the stellar radius to the power of 8 <cit.>. The efficiency of the tidal forces in bringing the planet into the envelope, therefore, is approximately proportional to the integral ∫ R_∗^8 dt.
We define the f_AR ratio as the ratio of this integral on the AGB to that on the RGB, to the power of 1/8,

f_AR ≡ [ ∫_T_AGB R^8(t) dt / ∫_T_RGB R^8(t) dt ]^1/8,

where T_AGB is the total lifetime on the AGB and T_RGB is the total lifetime on the RGB. If f_AR > 1, then there is a range of initial orbital separations for which a planet can survive tidal capture during the RGB phase of its parent star, and be engulfed during the stellar AGB phase. The system will form an elliptical PN in that case. We list the values of the f_AR ratio for the different cases in Table <ref> (column 8). The value of the f_AR ratio for the low mass loss rate case of each system is much larger than that of the regular mass loss rate.

In Fig. <ref> we compare the evolution of the stellar radii on the RGB and AGB phases of each star. To facilitate such a comparison we stretch the time scale of the AGB phase by a factor of s_AGB, as given in each panel. In the upper panel of each star we present the evolution with the commonly used mass loss rate, η=0.5, and in the lower panel we present the evolution with a reduced mass loss rate with a representative efficiency parameter of η=0.07. For the tidal interaction in the cases of a low mass loss rate, the larger radii on the AGB have a larger effect than the longer duration of the RGB. This, again, shows that there is a range of orbital separations for planets to survive the RGB phase but to be engulfed during the AGB phase.

The most significant conclusion from our results, as presented in Fig. <ref>, Fig. <ref> and Table <ref>, is that for the reduced mass loss rate there is a relatively large orbital separation range for which a star can capture a planet during its AGB phase. We showed this for four observed exoplanetary systems. A planet of Jupiter mass, or any more massive companion, that is captured into the very extended and low mass envelope (M_env ≲ 0.5 M_⊙) of an upper AGB low mass star will enhance the mass loss rate and will lead to the formation of an elliptical PN. With the commonly used mass loss rate a spherical and very faint PN is expected, or no PN at all.

§ DISCUSSION AND SUMMARY

In this paper we assume that the mass loss rate of low mass giant stars, M_1,ZAMS ≃ 0.9-1.3 M_⊙, that suffer no interaction with a companion, stellar or sub-stellar, is very low. In other words, we claim that the samples of stars that have been used in different studies over the years to derive the mass loss rate on the giant branches were contaminated with stars that suffered binary interaction. Such binary interaction substantially enhances the mass loss rate. To study the effects of a reduced mass loss rate, we changed the mass loss parameter in equation (<ref>) and studied the evolution with six different values, η=0.5, 0.35, 0.25, 0.15, 0.07, and 0.05 (see Appendix). We found that the processes we study here, of planets interacting with AGB stars and bright post-AGB stars, are obtained in most cases for 0.05 ≲ η ≲ 0.15. To clearly demonstrate these processes we presented the results for a representative value of η=0.07.

We examined some consequences of our assumed low mass loss rate. We simulated the evolution of the parent stars of four exoplanetary systems (listed in Table <ref>). We followed the evolution of the stellar radii and the planetary orbital separations under the assumption of both the commonly used and the reduced mass loss rates. The results are presented in Fig.
<ref> and are summarized in Table <ref> (for the entire study of the effects of different mass loss rate parameters on low mass stars see the Appendix). Our somewhat speculative assumption has several implications, including possible solutions to some riddles.

The fraction of PNe that are shaped by planets. Previous studies argued that some fraction of PNe were shaped by brown dwarf or planet companions (e.g., <cit.>). The rest were shaped by stellar companions. Under the assumed low mass loss rate, AGB stars reach much larger radii, in particular relative to their maximum radius on the RGB. This leaves a relatively large range of initial planetary orbital separations for planets to be engulfed during the upper AGB phase of their parent stars. The much larger radius of the AGB stars not only increases the chance for planetary interaction, but also implies that the envelope is more vulnerable to the influence of the planet towards a higher final mass loss rate. A higher mass loss rate also implies a faster post-AGB evolution. Both effects make the formation of a PN much more likely. Our assumed low mass loss rate, therefore, supports the notion that a fraction of elliptical PNe were shaped by sub-stellar companions.

The planetary nebula luminosity function (PNLF). Observations show that the brightest PNe in [O III] λ 5007, i.e., the bright-end cutoff of the PNLF, does not depend on the age or metallicity of the stellar population (e.g., <cit.>). Namely, old stellar populations, for which stars of initial mass of M_1,ZAMS ≃ 1-1.2 M_⊙ are forming PNe, have their brightest [O III] λ 5007 PNe as young stellar populations have. This is still a puzzle, as the post-AGB stellar luminosity to ionize the nebula should be at least ≈ 5000 L_⊙. Our results hint at a possible solution to this puzzle. As evident from the last column of Table <ref>, in the case of a reduced mass loss rate the post-AGB stellar luminosity of stars with initial masses of M_1,ZAMS ≳ 1.05 M_⊙ is L_pAGB ≳ 5000 L_⊙. This, together with the interaction with a low mass companion on the upper AGB, will provide the necessary ingredients for a bright PN: a dense nebula of mass ≳ 0.2 M_⊙ and a bright central ionizing star. The full solution to the puzzle might include another component. <cit.> finds in a new set of simulations that the post-AGB luminosity values are higher than the values obtained in older simulations. He does not fully address the topic of the PNLF in old stellar populations, whereas we here present preliminary results on the matter. <cit.> does show a fit to the PNLF of NGC 4697 based on the <cit.> tracks, yet notes that it is not a full solution to the PNLF puzzle. It is conceivable that the new simulations of <cit.>, together with our assumed low mass loss rate with a late engulfment of a planet (or a brown dwarf or a low mass star), account for the brightest PNe in old stellar populations. We point out another difference between the two works, where in contrast to <cit.> we focus on binary-shaped (non-spherical) PNe.

The initial-final mass relation. There is an observed relation between the initial mass of stars and the mass of their descendant WDs (e.g., <cit.>). Our assumed lower mass loss rate does not change this relation much for the following reasons. (1) Because most stars are expected to interact with stellar or sub-stellar companions, the fraction of stars that suffer no binary interaction at all is small. (2) Non-interacting stars with low mass loss rates are supposed to account only for the more massive WD masses for each initial mass.
The middle part of the final mass distribution for each initial mass, to which most stars belong, does not change.

The fate of the Earth. Because of sensitivity to unknown tidal interaction parameters, and even to external planets <cit.>, studies have reached contradictory conclusions about the fate of the Earth, i.e., whether the Earth will survive engulfment (e.g., <cit.>), or whether the Sun will engulf the Earth (possibly already during the RGB peak of the Sun; <cit.>). Our assumption of a much lower mass loss rate on the giant branches, if it holds for the Sun, implies that the Sun will swallow the Earth.

§ APPENDIX: EVOLUTION WITH DIFFERENT REDUCED MASS LOSS RATES

We calculate the stellar evolution of low mass stars in an initial mass range of M_1,ZAMS ≃ 0.9-1.3 M_⊙ from ZAMS until the formation of a WD with different mass loss rate efficiency parameters: η=0.5, 0.35, 0.25, 0.15, 0.07, and 0.05. We study the effects of the different mass loss rates on the same four exoplanetary systems that are used in the main text (section <ref>) and follow the same routine as in section <ref>. In Fig. <ref> we present the evolution of the radius and the radius over the orbital separation for the exoplanetary system HD 159868 b, which has a 1.09 M_⊙ star, a 2.1 M_J mass planet, and an initial semi-major axis of a_i = 2.25 AU. It can be seen that the maximum radius on the AGB is larger than that on the RGB even for a value of η as high as η=0.15. In Table <ref> we show the results for the six different mass loss rates.

As seen in our results, for an efficiency parameter of η ≲ 0.15 the maximum value of the ratio of the stellar radius to the orbital separation (semi-major axis) of the planet is larger when the star is on the AGB compared to that on the RGB (columns 6, 7). Moreover, the capture condition, R_∗,max/a ≳ 0.3, holds on the AGB rather than on the RGB, though this is marginal for the case of the M_1,ZAMS=0.9 M_⊙ star in HD 293027 A b. In addition, the f_AR ratio (eq. <ref>) is larger than unity (column 8). These results imply that in each case the planet will survive the RGB and will be engulfed on the AGB for a range of initial semi-major axes, and the system will form an elliptical PN. Overall, a mass loss rate efficiency parameter of η ≲ 0.15 is sufficient to account for our claims in explaining the processes of planets interacting with AGB stars and the shaping of non-spherical PNe. In the main text of the paper we take a representative mass loss rate parameter of η=0.07, rather than η > 0.1, since for this value the effects we study are more pronounced and can be better explained and better compared with the effects of the evolution with the commonly used mass loss rate parameter of η=0.5.

When examining the post-AGB luminosities (column 9 in Table <ref>) we find that in order to explain the high luminosities in old stellar populations, L_pAGB ≳ 5000 L_⊙, our results are marginal for the lower end of our mass range. Furthermore, an efficiency parameter of η=0.15 cannot fully produce the high luminosities needed for old stellar populations and the PNLF. We raise the possibility that our work on reduced mass loss rates combined with the work of <cit.> on stellar evolution might hold the answer to the PNLF puzzle by taking an efficiency parameter of η ≲ 0.1.

§ ACKNOWLEDGMENTS

We thank an anonymous referee for very helpful and detailed comments that improved the manuscript. This research was supported by the Israel Science Foundation, by the E. and J. Bishop Research Fund at the Technion, and by the Prof. A.
Pazy Research Foundation.[Aguilera-Gómez et al.(2016)]AguileraGomezetal2016 Aguilera-Gómez, C., Chanamé, J., Pinsonneault, M. H., & Carlberg, J. K. 2016, , 829, 127 [Balick(1987)]Balick1987 Balick, B. 1987, , 94, 671[Bowler et al.(2010)]Bowleretal2010 Bowler, B. P., Johnson, J. A., Marcy, G. W., et al. 2010, , 709, 396[Bloecker(1995)]Bloecker1995 Bloecker, T. 1995, , 297, 727 [Carlberg et al.(2009)]Carlbergetal2009 Carlberg, J. K., Majewski, S. R., & Arras, P. 2009, , 700, 832[Chen et al.(2017)]Chenetal2017 Chen, Z., Frank, A., Blackman, E. G., Nordhaus, J., & Carroll-Nellenback, J. 2017, , 468, 4465 [Chiotellis et al.(2016)]Chiotellisetal2016 Chiotellis, A., Boumis, P., Nanouris, N., Meaburn, J., & Dimitriadis, G. 2016, , 457, 9[Chu et al.(1987)]Chuetal1987 Chu, Y.-H., Jacoby, G. H., & Arendt, R. 1987, , 64, 529[Ciardullo(2010)]Ciardullo2010 Ciardullo, R. 2010, , 27, 149[Corradi & Schwarz(1995)]CorradiSchwarz1995 Corradi, R. L. M., & Schwarz, H. E. 1995, , 293, 871[D'Cruz et al.(1996)]DCruz1996 D'Cruz, N. L., Dorman, B., Rood, R. T., & O'Connell, R. W. 1996, , 466, 359[De Marco(2009)]DeMarco2009 De Marco, O. 2009, , 121, 316 [De Marco & Moe(2005)]DeMarcoMoe2005 De Marco, O., & Moe, M. 2005, Planetary Nebulae as Astronomical Tools, 804, 169[De Marco et al.(2016)]DeMarcoetal2016 De Marco, O., Reichardt, T., Iaconi, R., et al. 2016, arXiv:1612.03515[De Marco & Soker(2011)]DeMarcoSoker2011 De Marco, O., & Soker, N. 2011, , 123, 402[Fuhrmann et al.(1997)]Fuhrmannetal1997 Fuhrmann, K., Pfeiffer, M. J., & Bernkopf, J. 1997, , 326, 1081 [García-Rojas et al.(2016)]GarciaRojasetal2016 García-Rojas, J., Corradi, R. L. M., Monteiro, H., Jones, D., Rodríguez-Gil, P., & Cabrera-Lavers, A. 2016, , 824, L27[García-Segura et al.(2014)]GarciaSeguraetal2014 García-Segura, G., Villaver, E., Langer, N., Yoon, S.-C., & Manchado, A. 2014, , 783, 74[Guo et al.(2017)]Guoetal2017 Guo, J., Lin, L., Bai, C., & Liu, J. 2017, , 362, 15 [Harvey et al.(2016)]Harveyetal2016 Harvey, E., Redman, M. P., Boumis, P., & Akras, S. 2016, , 595, A64[Heo et al.(2016)]Heoetal2016 Heo, J.-E., Angeloni, R., Di Mille, F., Palma, T., Chang, S.-J., Hong, C.-L., & Lee, H.-W. 2016, Journal of Physics Conference Series, 728, 072014[Hillwig et al.(2016a)]Hillwigetal2016a Hillwig, T. C., Bond, H. E., Frew, D. J., Schaub, S. C., & Bodman, E. H. L. 2016a, , 152, 34[Hillwig et al.(2016b)]Hillwigetal2016bHillwig, T., Jones, D., De Marco, O., Bond, H., Margheim, S., & Frew, D. 2016b, , 832, 125 [Hillwig et al.(2017)]Hillwigetal2017 Hillwig, T. C., Frew, D. J., Reindl, N., Rotter, H., Webb, A., & Margheim, S. 2017, , 153, 24 [Jones(2016)]Jones2016 Jones, D. 2016, arXiv:1611.08454 [Jones & Boffin(2017)]JonesBoffin2017 Jones, D., & Boffin, H. M. J. 2017, Nature Astronomy, 1, 0117[Jones et al.(2016)]Jonesetal2016 Jones, D., Wesson, R., García-Rojas, J., Corradi, R. L. M., & Boffin, H. M. J. 2016, , 455, 3263[Kalirai et al.(2008)]Kaliraietal2008 Kalirai, J. S., Hansen, B. M. S., Kelson, D. D., et al. 2008, , 676, 594-609 [Lafon & Berruyer(1991)]Lafon Berruyer1991 Lafon, J.-P. J., & Berruyer, N. 1991, , 2, 249[Macdonald et al.(2017)]Macdonaldetal2017 Macdonald, D., De Marco, O., Lagadec, E., Ma, J., & Chesneau, O. 2017, arXiv:1705.00120[Madappatt et al.(2016)]Madappattetal2016 Madappatt, N., De Marco, O., & Villaver, E. 2016, , 463, 1040 [Manchado et al.(1996)]Manchadoetal1996 Manchado, A., Guerrero, M. A., Stanghellini, L., & Serra-Ricart, M. 
1996, The IAC morphological catalog of northern Galactic planetary nebulae, Publisher: La Laguna, Spain: Instituto de Astrofisica de Canarias (IAC), 1996, Foreword by Stuart R. Pottasch, ISBN: 8492180609,[McDonald & Zijlstra(2015)]McDonaldZijlstra2015 McDonald, I., & Zijlstra, A. A. 2015, , 448, 502 [Mendez(2017)]Mendez2017 Mendez, R. H. 2017, arXiv:1610.08625 [Meynet et al.(2017)]Meynetetal2017 Meynet, G., Eggenberger, P., Privitera, G., et al. 2017, , 602, L7 [Miglio et al.(2012)]Miglioetal2012 Miglio, A., Brogaard, K., Stello, D., et al. 2012, , 419, 2077 [Miller Bertolami(2016)]Bertolami2016 Miller Bertolami, M. M. 2016, , 588, A25 [Miszalski et al.(2017)]Miszalskietal2017 Miszalski, B., Manick, R., Mikołajewska, J., Iłkiewicz, K., Kamath, D., & Van Winckel, H. 2017, arXiv:1703.10891 [Moe & De Marco(2006)]MoeDeMarco2006 Moe, M., & De Marco, O. 2006, , 650, 916 [Moutou et al.(2014)]Moutouetal2014 Moutou, C., Hébrard, G., Bouchy, F., et al. 2014, , 563, A22 [Mustill et al.(2014)]Mustilletal2014 Mustill, A. J., Veras, D., & Villaver, E. 2014, , 437, 1404 [Mustill & Villaver(2012)]MustillVillaver2012 Mustill, A. J., & Villaver, E. 2012, , 761, 121 [Naef et al.(2010)]Naefetal2010 Naef, D., Mayor, M., Lo Curto, G., et al. 2010, , 523, A15 [Nordhaus & Blackman(2006)]NordhausBlackman2006 Nordhaus, J., & Blackman, E. G. 2006, , 370, 2004 [Nordhaus & Spiegel(2013)]NordhausSpiegel2013 Nordhaus, J., & Spiegel, D. S. 2013, , 432, 500 [Nordhaus et al.(2010)]Nordhausetal2010 Nordhaus, J., Spiegel, D. S., Ibgui, L., Goodman, J., & Burrows, A. 2010, , 408, 631 [Parker et al.(2016)]Parkeretal2016 Parker, Q. A., Bojičić, I. S., & Frew, D. J. 2016, Journal of Physics Conference Series, 728, 032008 [Paxton et al.(2011)]Paxtonetal2011 Paxton, B., Bildsten, L., Dotter, A., et al. 2011, , 192, 3 [Paxton et al.(2013)]Paxtonetal2013 Paxton, B., Cantiello, M., Arras, P., et al. 2013, , 208, 4 [Paxton et al.(2015)]Paxtonetal2015 Paxton, B., Marchant, P., Schwab, J., et al. 2015, , 220, 15 [Reimers(1975)]Reimers1975 Reimers, D. 1975, Memoires of the Societe Royale des Sciences de Liege, 8, 369 [Rybicki & Denis(2001)]RybickiDenis2001 Rybicki, K. R., & Denis, C. 2001, , 151, 130 [Sahai & Trauger(1998)]SahaiTrauger1998 Sahai, R., & Trauger, J. T. 1998, , 116, 1357 [Sahai et al.(2011)]Sahaietal2011 Sahai, R., Morris, M. R., & Villar, G. G. 2011, , 141, 134 [Sahai et al.(2017)]Sahaietal2017 Sahai, R., Vlemmings, W. H. T., & Nyman, L.-Å. 2017, , 841, 110 [Sánchez Contreras et al.(2017)]SanchezContrerasetal2017 Sánchez Contreras, C., Báez-Rubio, A., Alcolea, J., Bujarrabal, V., & Martín-Pintado, J. 2017, , 603, A67 [Schneider(2011)]Schneider2011 Schneider, J. 2011, EPSC-DPS Joint Meeting 2011, 3 [Schröder & Connon Smith(2008)]SchroderConnonSmith2008 Schröder, K.-P., & Connon Smith, R. 2008, , 386, 155 [Schröder & Cuntz(2005)]SchroderCuntz2005 Schröder, K.-P., & Cuntz, M. 2005, , 630, L73 [Siess & Livio(1999a)]SiessLivio1999a Siess, L., & Livio, M. 1999, , 304, 925 [Siess & Livio(1999b)]SiessLivio1999b Siess, L., & Livio, M. 1999, , 308, 1133 [Soker(1996)]Soker1996 Soker, N. 1996, , 460, L53 [Soker(2004)]Soker2004IAUS Soker, N. 2004, Stars as Suns: Activity, Evolution and Planets, 219, 323 [Soker & Subag(2005)]SokerSubag2005 Soker, N., & Subag, E. 2005, , 130, 2717 [Sowicka et al.(2017)]Sowickaetal2017 Sowicka, P., Jones, D., Corradi, R. L. M., Wesson, R., García-Rojas, J., Santander-García, M., Boffin, H. M. J., & Rodríguez-Gil, P. 2017, arXiv:1706.08766 [Staff et al.(2016)]Staffetal2016 Staff, J.
E., De Marco, O., Wood, P., Galaviz, P., & Passy, J.-C. 2016, , 458, 832[Wittenmyer et al.(2012)]Wittenmyeretal2012 Wittenmyer, R. A., Horner, J., Tuomi, M., et al. 2012, , 753, 169 [Veras(2016)]Veras2016 Veras, D. 2016, , 463, 2958[Verbunt & Phinney(1995)]VerbuntPhinney1995 Verbunt, F., & Phinney, E. S. 1995, , 296, 709 [Villaver & Livio(2007)]VillaverLivio2007 Villaver, E., & Livio, M. 2007, , 661, 1192[Villaver & Livio(2009)]VillaverLivio2009 Villaver, E., & Livio, M. 2009, , 705, L81[Villaver et al.(2014)]Villaveretal2014 Villaver, E., Livio, M., Mustill, A. J., & Siess, L. 2014, , 794, 3[Zahn(1977)]Zahn1977 Zahn, J.-P. 1977, , 57, 383[Zahn(1989)]Zahn1989 Zahn, J.-P. 1989, , 220, 112
http://arxiv.org/abs/1706.08897v2
{ "authors": [ "Efrat Sabach", "Noam Soker" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170627145312", "title": "Accounting for planet-shaped planetary nebulae" }
http://arxiv.org/abs/1706.08288v1
{ "authors": [ "A. F. Schäffer", "H. A. Dürr", "J. Berakdar" ], "categories": [ "physics.app-ph" ], "primary_category": "physics.app-ph", "published": "20170626085941", "title": "Ultrafast imprinting of topologically protected magnetic textures via pulsed electrons" }
http://arxiv.org/abs/1706.08884v1
{ "authors": [ "El Mahdi El Mhamdi", "Rachid Guerraoui" ], "categories": [ "stat.ML", "cs.DC", "cs.NE", "q-bio.NC" ], "primary_category": "stat.ML", "published": "20170627143109", "title": "When Neurons Fail" }
§ A STORY

Mr Holt, the kindergarten teacher, gives his class these instructions:

Hello class, The Metropolitan Museum of Art has a sudden shortage of sculptures and needs several new ones to fill its shelves. Please break into groups so that each group can build a Lego tower. The director of the museum will be here in an hour to pick up the towers and put them in the museum with your names on them. Please do the best job you can; you don't want to be professionally embarrassed.

Each kindergartener wants to be in a group with her friends, but she also wants her friends to be happy in the group; she doesn't want her friends to be miserable. The graph below is a map of who is friends with whom in the small class. Notice that a would have more friends in the group {a, b, c, d, e} than {a, b, c, d}, but maybe a doesn't want e to be in the group because a knows that would make b, c, and d less happy. Strangely, a prefers {a, b, c, d} to {a, b, c, d, e}.

You can imagine that the kindergarteners might try to choose the best group in some other way. The class would split into groups one way, but then people would be unhappy and keep changing their groups. How can we model all this? How could we easily visualize all this?

< g r a p h i c s >

§ HEDONIC GAMES

Below is the original definition of a hedonic game. Hedonic games <cit.> were invented to model the formation and reformation of groups.

<cit.> A coalition formation game is a pair G = (N, (≽_i)_i ∈ N), where N is a finite set of players and for every i ∈ N, ≽_i is a reflexive, complete, and transitive binary relation on N_i = {C ∈ 2^N : i ∈ C}. If C, D ∈ N_i and C ≽_i D and D ⋡_i C, then we write C ≻_i D. <cit.> A coalition structure Γ = {C_1, …, C_k} is a partition of N. The coalition containing a player i ∈ N is denoted Γ(i). Any subset of N is called a coalition.

That's a very minimal definition, and these most general hedonic games don't have many computationally useful properties. For that reason, several subclasses of hedonic games have been invented and studied. First though, let's look at stability.

§.§ The Core

If Mr Holt were assigning groups, instead of letting the kids form their own groups, then he might want a way to predict if a given partition will stick before he actually moves people around. “Will the students stay in their groups or will they form new ones?” There are many ways you can ask the question “Is this coalition formation stable?” Seven good ways are mentioned in <cit.>. One of the most important ways to ask the question (and the focus of the survey <cit.>) is “Is this coalition formation core stable?”. In a hedonic game G with a partition Γ, if there is a nonempty set C ⊆ N where ∀ i ∈ C: C ≻_i Γ(i), then we say that C blocks Γ, or C is a blocking coalition in Γ. If Γ cannot be blocked, then it is called core stable. The set of core stable partitions for a game G is called the core of G.

§ VARIETIES OF HEDONIC GAMES

In the below paragraphs, n = |N| is the number of players, i is a player in N, and C, D ∈ N_i are coalitions which contain i.

§.§ Fractional Hedonic Games <cit.>

In fractional hedonic games, i assigns some real value v_i(j) to every player j ∈ N. It's assumed that v_i(i) = 0.[ Raising your own score is equivalent to lowering everyone else's score. Lowering your own score is equivalent to raising everyone else's score.]
We say C ≽^FH_i D if u^FH_i(C) ≥ u^FH_i(D), where

u_i^FH(C) = ∑_j ∈ C v_i(j) / |C|.

A fractional hedonic game is called simple if ∀ i,j ∈ N: v_i(j) ∈ {0,1} and is called symmetric if ∀ i,j ∈ N: v_i(j) = v_j(i). <cit.> show that even in fractional hedonic games which are both simple and symmetric, the core is sometimes empty and that checking core emptiness is Σ_2^p-complete.

§.§ Friend and Enemy Oriented Hedonic Games <cit.>

In both of these kinds of games, i splits the other players in N into a set of friends, F_i, and a set of enemies, E_i.

In friend-oriented games, i prefers coalitions with more friends and breaks ties by considering the number of enemies. In other words,

C ≽^FO_i D ⟺ |C ∩ F_i| > |D ∩ F_i| ∨ ( |C ∩ F_i| = |D ∩ F_i| ∧ |C ∩ E_i| ≤ |D ∩ E_i| ) ⟺ u_i^FO(C) ≥ u_i^FO(D),

where

u_i^FO(C) = n|C ∩ F_i| - |C ∩ E_i|.

So if C has 8 of i's friends and 600 of i's enemies and D has 7 of i's friends and 0 of i's enemies, then i would still rather be in C.

In enemy-oriented games, i tries to minimize enemies and only considers friends to break a tie. In other words,

C ≽^EO_i D ⟺ |C ∩ E_i| < |D ∩ E_i| ∨ ( |C ∩ E_i| = |D ∩ E_i| ∧ |C ∩ F_i| ≥ |D ∩ F_i| ) ⟺ u_i^EO(C) ≥ u_i^EO(D),

where

u_i^EO(C) = |C ∩ F_i| - n|C ∩ E_i|.

<cit.> show that the core is guaranteed to be non-empty in both kinds of games. However, finding a core stable partition is NP-hard in enemy-oriented games[ More precisely, if you could always find a core stable coalition structure in polynomial time, then you could also find the largest clique in any (undirected, unweighted) graph in polynomial time.] but polynomial time in friend-oriented games.

§.§ Altruistic Hedonic Games <cit.>

As in friend and enemy oriented hedonic games, i divides the other players into friends, F_i, and enemies, E_i. The idea is that a player wouldn't want to be in a coalition C where his friends were miserable, even if C had all of his friends and none of his enemies. Three levels of altruism are considered. Let avg(S) = ∑_x ∈ S x / |S| denote the average of a multiset of numbers. And, as above, the utilities u_i are defined so that C ≽_i D ⟺ u_i(C) ≥ u_i(D).

In selfish-first altruistic games, a player cares most about his own happiness and uses his friends' preferences to break ties. 'Happiness' here means the friend-oriented score. This is distinct from friend-oriented games in that a tightly connected coalition C with 6 friends and 3 enemies is preferred to a sparse coalition D with 6 friends and 3 enemies, because i's friends in C are happier than i's friends in D.

u_i^SF(C) = n^5 u_i^FO(C) + avg(u_j^FO(C) : j ∈ C ∩ F_i).

In equal-treatment altruistic games, a player takes his and all his friends' opinions into account equally when evaluating a partition:

u_i^ET(C) = avg(u_j^FO(C) : j ∈ C ∩ F_i ∪ {i}).

And in altruistic-treatment altruistic games (i.e., truly altruistic games), a player prefers coalitions where his friends are happy and breaks ties by considering his own happiness.

u_i^AT(C) = u_i^FO(C) + n^5 avg(u_j^FO(C) : j ∈ C ∩ F_i).

<cit.> show that selfish-first altruistic games always have a nonempty core. Whether equal-treatment altruistic games and truly altruistic games ever have empty cores are open questions. I suspect that the core is always nonempty in both games.

§ THE SIMULATOR

I wrote software to simulate hedonic games and put it on the internet. You can draw graphs, choose partitions, choose several different player types, and check the stability of the partition under several different measures.
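Concretely, the kind of check the simulator performs can be sketched in a few lines of Python. The snippet below implements the friend-oriented and enemy-oriented utilities defined above together with a brute-force search for blocking coalitions; the friendship graph is a hypothetical one chosen to be consistent with the kindergarten story (it is not read off the figure), and all helper names are mine rather than the simulator's.

```python
from itertools import chain, combinations

# A minimal sketch of the friend-oriented and enemy-oriented utilities above,
# plus a brute-force blocking-coalition check for core stability.  The
# friendship graph is a hypothetical one consistent with the story (e is a's
# friend but an enemy of b, c, and d); it is not read off the figure.

friends = {
    'a': {'b', 'c', 'd', 'e'},
    'b': {'a', 'c', 'd'},
    'c': {'a', 'b', 'd'},
    'd': {'a', 'b', 'c'},
    'e': {'a'},
}
N = frozenset(friends)
n = len(N)

def u_fo(i, C):
    """Friend-oriented utility: n|C ∩ F_i| - |C ∩ E_i|."""
    f = len(C & friends[i])
    return n * f - (len(C) - 1 - f)   # everyone else in C is a friend or enemy

def u_eo(i, C):
    """Enemy-oriented utility: |C ∩ F_i| - n|C ∩ E_i|."""
    f = len(C & friends[i])
    return f - n * (len(C) - 1 - f)

def blocks(C, partition, u):
    """C blocks the partition if every i in C strictly prefers C to Γ(i)."""
    return all(u(i, C) > u(i, next(S for S in partition if i in S)) for i in C)

def in_core(partition, u):
    all_coalitions = chain.from_iterable(combinations(N, k) for k in range(1, n + 1))
    return not any(blocks(frozenset(C), partition, u) for C in all_coalitions)

partition = [frozenset('abcd'), frozenset('e')]
print("core stable (friend-oriented):", in_core(partition, u_fo))
print("core stable (enemy-oriented):", in_core(partition, u_eo))
```

Enumerating all 2^n - 1 candidate coalitions is only practical for classroom-sized games, which is fine for small examples like this one.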
Hopefully this will help others and myself quickly understand different hedonic games and speed up the process of finding stable partitions.

<http://lukemiles.org/hedonic-games>

< g r a p h i c s > < g r a p h i c s > < g r a p h i c s >

The website works better on laptops than smartphones. Updates may have been made to the website since this arXiv version was uploaded.
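Continuing the sketch above (and still under the same hypothetical friendship graph), the altruistic-treatment utility reproduces the story's reversal, where a prefers {a, b, c, d} to {a, b, c, d, e} even though e is a's friend:

```python
# Continuing the earlier sketch (reuses friends, n, and u_fo from it).

def avg(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def u_at(i, C):
    """Altruistic-treatment utility: u_fo(i, C) + n^5 * avg of friends' u_fo."""
    return u_fo(i, C) + n ** 5 * avg(u_fo(j, C) for j in C & friends[i])

small, big = frozenset('abcd'), frozenset('abcde')
print(u_at('a', small) > u_at('a', big))   # True for this hypothetical graph
```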
http://arxiv.org/abs/1706.08501v2
{ "authors": [ "Luke Harold Miles" ], "categories": [ "cs.MA", "cs.AI" ], "primary_category": "cs.MA", "published": "20170626174732", "title": "A Simulator for Hedonic Games" }
Approximate Steepest Coordinate Descent (ASCD)
http://arxiv.org/abs/1706.08427v1
{ "authors": [ "Sebastian U. Stich", "Anant Raj", "Martin Jaggi" ], "categories": [ "cs.LG", "math.OC", "G.1.6" ], "primary_category": "cs.LG", "published": "20170626150702", "title": "Approximate Steepest Coordinate Descent" }
Queueing Analysis of a Chagas Disease Control Campaign

Maria T. Rieders ([email protected])
Operations and Information Management Department, University of Pennsylvania, Philadelphia, PA, USA

Patrick ([email protected])
Department of Biology, School of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA

Michael Z. Levy ([email protected])
Center for Clinical Epidemiology & Biostatistics, Department of Biostatistics & Epidemiology, University of Pennsylvania School of Medicine, Philadelphia, PA, USA

A critical component of preventing the spread of vector borne diseases such as Chagas disease is the door-to-door campaign by public health officials that implements insecticide application in order to eradicate the vector infestation of households. The success of such campaigns depends on adequate household participation during the active phase as well as on sufficient follow-up during the surveillance phase, when newly infested houses or infested houses that had not participated in the active phase will receive treatment. Queueing models, which are widely used in operations management, give us a mathematical representation of the operational efforts needed to contain the spread of infestation. By modeling the queue as consisting of all infested houses in a given locality, we capture the dynamics of the insect population due to prevalence of infestation and to the additional growth of infestation by redispersion, i.e., by the spread of infestation to previously uninfested houses during the wait time for treatment. In contrast to traditional queueing models, houses waiting for treatment are not known but must be identified through a search process by public health workers. Thus, both the arrival rate of houses to the queue as well as the removal rate from the queue depend on the current level of infestation. We incorporate these dependencies through a load dependent queueing model which allows us to estimate the long run average rate of removing houses from the queue and therefore the cost associated with a given surveillance program. The model is motivated by and applied to an ongoing Chagas disease control campaign in Arequipa, Peru.

§ INTRODUCTION

Control of Triatoma infestans, an insect vector of Trypanosoma cruzi, which causes Chagas disease, is one of the greatest public health achievements of the past couple of decades. Residual insecticide application has greatly reduced the prevalence of the insect in areas with strong control programs. The insect, however, possesses the potential to rebound from these concerted efforts. It has, on at least one occasion, developed resistance to the commonly applied class of insecticides (the pyrethroids). The insect has also encroached into urban habitats.

The size and complexity of cities pose numerous challenges to vector control. Insects can move easily between habitats in densely populated cities. Participation in control efforts is always partial, and non-participating households can serve as foci for the return and re-dispersion of vectors following control efforts. The city of Arequipa, Peru, with a population of nearly one million, is probably the largest city battling T. infestans. Insecticide application in the city began in 2003; to date over 80,000 households of the city have been treated. All but two of the affected districts of the city have passed from the attack phase of the campaign - during which insecticide is applied twice at a six month interval - into the surveillance phase.
The surveillance phase relies on a combination of community reporting of returning insects and active search by trained entomological surveyors.

In a traditional queueing model with one or more servers, customers arrive at a service facility according to some stochastic arrival process. If all servers are busy at time of arrival, the customer will join the end of the queue, waiting for his/her turn in service. Service may be given on a first come, first served basis, a last come, first served basis, by randomly selecting customers from the queue, or by some other selection policy. Classic queueing models include the M/G/1 queue, where jobs arrive according to a Poisson process and are served by a single server, with the sequence of service times being independent and identically distributed (iid) random variables. When service times are required to be iid exponential, such a system is called an M/M/1 queue. In these standard systems, we assume that jobs joining the queue will not renege, and that all underlying distributions remain the same throughout time and are independent of each other. These basic systems are well understood in terms of performance measures such as waiting time of a job, throughput analysis or queue length distribution. For results on basic queueing systems, the reader may refer to Kleinrock <cit.> or Cooper <cit.>.

In the context of our public health campaign, customers requiring service are houses that have been infested with Triatoma infestans, the disease carrying insect. Treatment of such a customer (house) consists of identifying an infested house and having it sprayed by exterminators with a pesticide that will eradicate the insect population on the premises. Thus, we imagine infested houses to be part of an imaginary queue where they will remain until they have been identified as hosting the Triatoma infestans and been treated by insecticides. We may also think of the houses in queue as invisible houses, recognizing the challenge for the public health workers to make them visible and then remove them from the queue. Our queueing model tries to capture the resources needed to deal with infested houses during the surveillance phase.

For describing the arrivals to our imaginary queue, we note that houses currently in the queue may have become infested - and thus joined the queue - in two different ways: The insect vector may have been brought into a household from outside the neighborhood. Furthermore, any of the infested houses in the queue is capable of infesting other houses by spreading the vector into previously uninfested houses; the rate of this secondary infestation depends on how long a house has been infested. Thus, the arrival rate of houses is composed of an external rate plus a redispersion rate that depends on how long houses remain in the queue. In order to remove a house from the queue, public health officials need to identify that a particular house is infested. This will be accomplished by a multi-armed bandit algorithm that searches for infested houses based on historical data and continuously updated GIS maps; see <cit.>. The total service time, i.e., the time to remove a house from the queue, consists of the time until an infested house has been identified plus the time required for actual treatment with pesticides.

The connection between queueing models and epidemic models has been addressed in the literature by Trapman and Bootsma <cit.> and Hernández-Suarez et al.
<cit.>. <cit.> uses results about a classic queueing system (the M/G/1 queue with processor sharing) to estimate the distribution of the number of infectives at the moment of first detection in a stochastic epidemic model of SIR type, where SIR stands for Susceptible-Infectious-Removed/Recovered individuals. <cit.> study SIS (Susceptible-Infected-Susceptible) and SEIS (Susceptible-Exposed-Infected-Susceptible) epidemic models and find the distribution of the disease while it is in the endemic state by using the classic M/G/N queueing system. Kaplan et al. <cit.> built a trace-vaccination queue to model operational requirements for dealing with a smallpox epidemic. Lee et al. <cit.> incorporate a disease propagation model with a vaccine queueing model in order to develop prioritization rules for the use of limited vaccines available for containing a pandemic. Both <cit.> and <cit.> use systems of ordinary differential equations for analyzing the trajectory of a pandemic outbreak over a finite time horizon. In contrast, our work is concerned with analyzing the continuing surveillance of infestation over a long period of time. In particular, our model will enable public health officials to estimate long run operational efforts and therefore the cost of maintaining a public health campaign which is charged with containing a prevalent infestation.

In Section 2, we will present the queueing model in detail, including how to estimate the redispersion rate as a function of the cumulative prevalence of infestation in the area under consideration. We will introduce the concept of total workload of a queueing system and formulate both the arrival rate as well as the removal rate of houses as functions of the workload. Section 3 presents the long run analysis of such a load dependent queueing system, including an algorithm for calculating the operational throughput and the average cost per time period for treating infested houses during the surveillance phase. Several special cases that allow for closed form or simplified algorithmic calculation are considered. Computational results are based on field work in Arequipa, Peru and illustrate the implementation and data requirements for our analysis. In Section 4, we summarize our contributions and describe future research directions.

§ THE QUEUEING MODEL

In the following, we describe the basic mechanics and assumptions of our queueing model. Note that the classic operations management literature uses the words customers or jobs for entities flowing through a queueing system. In our context, customers are infested houses; the arrival of a customer corresponds to a house becoming infested and a service time completion is equivalent to identification and treatment of an infested house. In other words, our model envisions all currently infested houses as having joined an imaginary queue at the time of their initial infestation. A house will remain in queue until its infestation has been discovered and verified by a public health inspector and the house has been subsequently treated with an insecticide that will eliminate infestation and prevent further spread of infestation from this house. A schematic representation of this model is given in Figure <ref>. The next subsection will state the technical assumptions that we impose on the system.

Figure <ref> shows a typical sample path of a queueing system where jobs arrive at times τ_1, τ_2, … to the system. The i^th arriving job may have to wait an amount of time W_i before receiving a service time S_i.
Upon completion of the service time, job i will then leave the system.

§.§ Assumptions of the Queueing Model

Infestation of households with the Triatoma infestans typically happens either by the introduction of the insect vector from the outside or from insects spreading from infested properties to noninfested households. We assume that arrivals from the outside occur according to a Poisson process with a flat rate α > 0. In addition to imported infestations from outside our geographic area, we note that the insects living in an infested house will spread their colonies to neighboring properties, thus adding previously uninfested houses to the queue. We call the rate of this spread of infestation the redispersion rate β and note that β depends on how many households are currently in the queue and how long each of them has been waiting for treatment. Hence, the total arrival rate is given by λ(x) = α + β(x), where x stands for the current occupancy state of the system. In a very detailed model, x would include information on the number of infested houses as well as the duration of infestation for each one of these houses, since the geographic spread of insects is a function of time. We will propose to use an aggregate measure instead, namely the virtual load; see Subsection 2.1 below.

Houses are removed from the queue through a process consisting of search, identification, and treatment. When only a few houses are infested, i.e., when the level of infestation is low, the search to identify infested houses will take longer than during times of more severe infestation. We therefore assume that the removal rate r(x), at which houses are identified, treated, and thereby removed from the queue, is also dependent on the current state x of the system (analogous to the redispersion rate function β(x)). Note that the exogenous arrival process with rate α is independent of the redispersion arrival process with rate β(x). Thus, houses join the queue at a composite rate of λ = α + β.

§.§ The Redispersion Rate Function

We estimate the redispersion rate function based on empirical work on the dynamics of the insect population growth as a function of time. Rabinovich <cit.> performs a statistical evaluation of the population dynamics of Triatoma infestans, the insect responsible for carrying the Chagas disease parasite. Based on Rabinovich's work, lab experiments, and the empirical data collected in Arequipa, Barbu et al. <cit.>, <cit.>, and <cit.> conceptualized the dispersal of the insect through the city as a series of "hops", "skips" and "jumps". Using simulated data and longitudinal data from sequential surveys of numerous localities across Arequipa they were able to make inferences on how the bug moves through the city. They found that, on average, it takes 2 years for an infested household to successfully infest an additional household. This slower than expected rate is good news as it means that public health inspectors have some time to detect and eliminate new infestations, and partially explains why such an imperfect control campaign has been extremely successful. Based on their work, we can estimate the additional number of houses that will be infested by one untreated house to follow a logistic curve as a function of time since infestation; see Figure <ref>. Let us denote the infectivity function of Figure <ref> by γ(t).
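As a concrete stand-in for this curve, the following minimal Python sketch encodes a logistic γ(t); its parameters are illustrative placeholders rather than the values fitted to the Arequipa field data, with the midpoint loosely echoing the roughly two years it takes one infested house to infest another.

```python
import math

# A minimal stand-in for the infectivity function gamma(t): a logistic curve
# in the time since infestation.  The parameters below (saturation level K,
# growth rate rho, midpoint t0) are illustrative placeholders, not the values
# fitted to the Arequipa field data.

def gamma(t, K=0.5, rho=1.5, t0=2.0):
    """Rate at which a house infested t years ago seeds new infestations."""
    return K / (1.0 + math.exp(-rho * (t - t0)))

for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"gamma({t}) = {gamma(t):.3f}")
```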
Note that our queueing model requires us to estimate the redispersion function β as a function of the overall state of the system. For instance, if at time t, there are n houses in queue, with respective arrival epochs τ_1,τ_2,…, τ_n, then each of these houses will reinfest other houses at rate γ(t-τ_i), i=1,…,n. Assuming no overlap in the additional infestations, this would result in a cumulative redispersion rate β(x)≈∑_i=1^n γ(t-τ_i). Note, however, that the analytical model keeps track of neither the actual arrival times of the houses currently in the queue nor the current queue length. In the following, we therefore describe a way to approximate β as a function of the total work in the system. In queueing systems, the total work, also called the workload V(t), refers to the sum of all service times of the jobs currently waiting in queue plus the remaining service time of jobs currently in service. In a single server queueing system, the total work is also referred to as the virtual wait. We propose to use the workload as a surrogate for the state x of the system. In particular, we express the redispersion rate β as a function of the current workload. That is, we assume that we know the virtual load x, but we do not know the current number n of infested houses. From Little's law (see e.g., <cit.>) we know that the average number of jobs in a queue equals the average arrival rate times the average time spent in queue. Hence, we approximate the number of houses in the system n byn≈λ x.Since the time t-τ_i refers to the age of the infestation of house i and we consider the queueing system in the long run, we will treat it like the long run expected age of a renewal process with renewal intervals of size x, that is x^2/(2x) = x/2, obtaining the following approximation for the redispersion rate β:β(x)≈λ x γ(x/2).We note that the factor λ is the long run average arrival rate of houses to the queue, a quantity that we do not know a priori as it depends on the stationary flow through the system through the redispersion feedback loop. We know that the approximation (<ref>) will tend to overestimate the new infestations since currently infested houses may be clustered together, thus reducing the number of most likely candidates for reinfestation that are surrounding an existing insect colony. On the other hand, the rate λ itself is bounded from below by α. Thus, as a first approximation of β, we propose to useβ(x)≈α x γ(x/2)and for the overall arrival rateλ(x)≈α+α x γ(x/2).The suitability of these approximations is currently being investigated through simulation experiments. §.§ The Treatment Rate Successful treatment of an infested household, and therefore its removal from our queue, requires identification of the infested house by trained inspectors. The public health campaign in Arequipa has collected data over the years that allow the ranking of houses based on risk factors for reinfestation. Billig <cit.> has developed a Bayesian algorithm that updates these risk maps continuously using new data from field workers as they embark on daily searches. Thus, field workers have access to ranked lists of most likely infested houses and may choose to visit homes based on the given rankings. If the search is successful in identifying an infested house, treatment is scheduled for that property and the house is thus removed from the queue. Currently, these search algorithms are being simplified and made available on mobile phones through an app.
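To make the approximations above concrete, the following minimal R sketch evaluates the composite arrival rate λ(x) ≈ α + α x γ(x/2). The logistic infectivity curve gamma_fun and its parameters (rate_max, steepness, midpoint) are hypothetical placeholders standing in for the fitted curve of Figure <ref>; only the midpoint of roughly 2 years is motivated by the empirical findings cited above.

# Hypothetical logistic infectivity curve gamma(t): expected rate
# (houses/year) at which a house infested t years ago infests others;
# the parameter values are illustrative placeholders only.
gamma_fun <- function(t, rate_max = 0.5, steepness = 2, midpoint = 2) {
  rate_max / (1 + exp(-steepness * (t - midpoint)))
}

# First-order approximation beta(x) ~ alpha * x * gamma(x/2), with the
# workload x (in house-years) as a proxy for the state of the system.
beta_fun <- function(x, alpha) alpha * x * gamma_fun(x / 2)

# Composite arrival rate lambda(x) = alpha + beta(x).
lambda_fun <- function(x, alpha) alpha + beta_fun(x, alpha)

lambda_fun(3, alpha = 1)  # e.g., composite arrival rate at workload x = 3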
Regardless of which particular algorithm is employed, it is clear that, in general, a successful search for and treatment of an infested house will depend on the prevalence of infestation. A geographic area which contains only a few infested households is likely to require a longer search for vector colonies while a heavily infested area will typically require less effort in locating infestations. Therefore, we assume in our queueing model that the removal rate r(x) will depend on the level of infestation. Just as with the redispersion rate, we take the current workload x as a proxy for the level of infestation. Concrete forms for the function r(x) will have to be determined based on the specific search algorithm employed. In our analysis below, we focus on a few basic functional forms that give us some insight into the long run effort required for containing the spread of infestation.§.§ A Load Dependent Queueing ModelQueues with workload dependent removal rates were originally modeled like the storage of water in dams or reservoirs; see for instance Asmussen <cit.>. Water flows into a reservoir from one or several water sources and is released at a rate r(x) depending on the present content of water x in the reservoir. Hence, if r(x)=1, the workload decreases at unit rate as long as work is in the system. If the workload at time t_0 is x and the next arrival is at time t_1>t_0, the workload process during the interval (t_0,t_1) can be characterized as V(t_0+t)=x-∫_t_0^t_0+t r(V(s))ds.In the following, we will use an M/G/1 type queueing model with workload dependent arrival and service rates as analyzed by Bekker <cit.>. When the current workload of the system is x, then customers arrive according to a Poisson process with rate λ(x); i.e., the time until the next arrival A after current time t_0 is distributed according to P[A>t]=e^-∫_0^t λ(V(s))ds. The function λ(·) is assumed to be nonnegative and left continuous. Each arrival to the system adds a service time S to the existing workload. We assume that the sequence of service times S_1, S_2, S_3, … are independent and identically distributed (iid) random variables with cumulative distribution function F(·) and mean E[S]. While service times are iid, the rate r(x) at which service is delivered is assumed to be dependent on the current workload in the system. That is, between arrivals we have dV(t)/dt=-r(V(t)). We assume that r(0)=0 and that r(·) is strictly positive and left-continuous on (0,∞). We may think of the sequence {S_1, S_2, S_3, …} as nominal service times that would be required to serve each customer (house) under some normal condition. The effect of longer search times under sparser or less severe infestation is modeled by a service rate that is slower than in situations where infestation is denser or more severe. For this reason we assume that the function r(x) is increasing in x. § ANALYSIS OF THE LOAD DEPENDENT QUEUEING MODELOur analysis of the load dependent queueing system presented in the previous section draws heavily on the results derived in Bekker <cit.> and references therein. We first present these analytical results in a general setting and then empirical findings in the context of our Chagas disease public health campaign.§.§ General Analytical ResultsSince our interest lies in the stationary analysis of the queueing system, we start by stating conditions on arrival and removal rate functions that guarantee stability.
The results on stationary workload are based on level crossing arguments and can be found in <cit.>.The queueing system is stationary if lim sup_x→∞ E[S] λ(x)/r(x)<1. We now assume stationarity and consider the steady-state random variables V denoting the workload and W denoting the workload immediately before an arrival epoch. Let v(·) and w(·) be the probability density functions of V and W, respectively, and V(·) and W(·) their cumulative distribution functions. The workload density v(·) exists and satisfies the equation r(x)v(x)=λ(0)V(0)(1-F(x))+∫_{0^+}^{x} (1-F(x-y))λ(y)v(y)dy,x>0.Note that knowledge of the workload density allows us to calculate the long run average arrival rate λ̅ to the system, i.e., the average number of houses infested and treated per year.The law of total probability gives us the following result:λ̅=∫_{0^+}^{∞}λ(x) v(x) dx +λ(0)V(0).For a few select cases, the expressions in <ref> can be simplified as follows. (i) Consider a load dependent M/G/1 queue with arrival rate proportional to the service rate, i.e.,λ(x)=Cr(x). In this case, r(x)v(x) equals the workload density ṽ(x) of an M/G/1 queueing system with arrival rate C and service speed 1. (ii) If we assume a load dependent M/M/1 queue with general arrival rate and service rate functions, i.e., if both interarrival times and service times are exponential, albeit with rates that depend on the current workload x, then we can solve the differential equation in Theorem <ref> and obtain the density function v(x) as follows:v(x)=λ(0)V(0)/r(x) exp{∫_0^x ( λ(y)/r(y)-μ)dy}. (iii) A further simplification of the system in part (ii) to an M/M/1 queue with load dependent arrival rate function λ(x) and constant service rate μ results in the queueing model studied by Brill in <cit.>. Brill's paper offers a simple computational algorithm for calculating the average throughput λ̅ based on level crossing arguments.§.§ Computational ResultsWe are currently implementing a simulation study to test the appropriateness of the redispersion function as estimated by Equation (<ref>). Subsequently, we will conduct a suite of computational experiments demonstrating how the yearly cost of the public health campaign can be investigated. In particular, we propose to investigate the following questions. * How many houses will require treatment per year when the campaign applies a certain search strategy?* Could additional investment into search procedures result in lower annual cost for the campaign due to fewer redispersions?* What is the minimum effort required (in terms of search success rates) in order to contain the spread of infestation at a manageable level? § CONCLUDING REMARKSApplying queueing theory to gain operational insights is a novel approach for studying the effectiveness and cost of a public health campaign that is focused on the long term surveillance of a geographic area. Through the rich data collection that the team in Arequipa, Peru has acquired over the years, we have gained a fairly sophisticated understanding of the dynamics of the spread of the insect vector Triatoma infestans. This work is a first attempt to combine these dynamics with the operational effects of a public health team charged with searching for and eradicating residual infestations.
In future work on this project, we plan to focus on better models for the removal rate r(x) as we gain more insights into the actual performance of various search strategies in the field.

Asmu2000 Asmussen, S., Applied Probability and Queues, 2nd Edition, Springer, 2000.
Barbu2013 Barbu, C.M., A. Hong, J.M. Manne, D.S. Small, J.E. Quintanilla Calderón, K. Sethuraman, et al., The Effects of City Streets on an Urban Disease Vector, PLoS Computational Biology, 9(1), e1002801, 2013.
Barbu2014 Barbu, C.M., A.M. Buttenheim, M.L. Hancco Pumahuanca, J.E. Quintanilla Calderón, R. Salazar, M. Carrión, et al., Residual Infestation and Recolonization During Urban Triatoma infestans Bug Control Campaign, Peru, Emerging Infectious Diseases, 20(12), 2014, 2055-2063.
Barbu2015 Castillo Neyra, R., C.M. Barbu, R. Salazar, K. Borrini, C. Naquira, M.Z. Levy, Host-seeking Behavior and Dispersal of Triatoma infestans, a Vector of Chagas Disease, under Semi-field Conditions, PLOS Neglected Tropical Diseases, 9(1), e3433, 2015.
Bekker2004 Bekker, R., S.C. Borst and O.J. Boxma, Queues with Workload-Dependent Arrival and Service Rates, Queueing Systems, 46, 2004, 537-556.
Billig2017 Billig, E., Dissertation, University of Pennsylvania, 2017.
Brill1988 Brill, P.H., Single-Server Queues with Delay-Dependent Arrival Streams, Probability in the Engineering and Informational Sciences, 2, 1988, 231-247.
Cooper1981 Cooper, R.B., Introduction to Queueing Theory, 2nd Edition, North-Holland (Elsevier), 1981.
Gutfraind2015 Gutfraind, S. and M.Z. Levy, Applying Multi-armed Bandit Strategies in Entomological Search, Working Paper, 2015.
Hagen2013 Hagen, M.S., J.K. Jopling, T.G. Buchman, E.K. Lee, Priority Queuing Models for Hospital Intensive Care Units and Impacts to Severe Case Patients, AMIA Annual Symposium Proceedings, 2013, 841-850.
Hernandez-Suarez2010 Hernández-Suárez, C., C. Castillo-Chavez, O. Montesinos López and K. Hernández-Cuevas, An Application of Queueing Theory to SIS and SEIS Epidemic Models, Mathematical Biosciences and Engineering, 7, 2010, 809-823.
Kaplan2003 Kaplan, E.H., D.L. Craft, and L.M. Wein, Analyzing Bioterror Response Logistics: The Case of Smallpox, Mathematical Biosciences, 185, 2003, 33-72.
KleinrockI1975 Kleinrock, L., Queueing Systems, Volume I: Theory, Wiley, 1975.
LeeInterfaces2015 Lee, E.K., F. Yuan, F.H. Pietz, B.A. Benecke, G. Burel, Vaccine Prioritization for Effective Pandemic Response, Interfaces, 45(5), 2015, 425-443.
Levy2010 Levy, M.Z., F. Malaga, J.G. Cornejo del Carpio, D. Vilhena, F.E. McKenzie, J.B. Plotkin, Rational Spatio-temporal Strategies for Controlling a Chagas Disease Vector in Urban Environments, Journal of the Royal Society Interface, 7(48), 2010, 1061-1070. http://rsif.royalsocietypublishing.org/content/7/48/1061.long
Rabinovich1972 Rabinovich, J.E., Vital Statistics of Triatominae (Hemiptera: Reduviidae) Under Laboratory Conditions, J. Med. Ent., 9, 1972, 351-370.
Trapman2009 Trapman, P., M.C.J. Bootsma, A Useful Relationship Between Epidemiology and Queueing Theory: The Distribution of the Number of Infectives at the Moment of the First Detection, Mathematical Biosciences, 219, 2009, 15-22.
Wolff1989 Wolff, R.W., Stochastic Modelling and the Theory of Queues, Prentice Hall, 1989.

Acknowledgement
The authors acknowledge programming assistance by Karthik Sethuraman (former student at the Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA) for initial work on this project.
We also thank the following organizations for their part in organizing and conducting the Chagas Disease control campaign in Arequipa: Ministerio de Salud del Perú (MINSA), the Dirección General de Salud de las Personas (DGSP), the Estrategia Sanitaria Nacional de Prevención y Control de Enfermedades Metaxénicas y Otras Transmitidas por Vectores (ESNPCEMOTVS), the Dirección General de Salud Ambiental (DIGESA), the Gobierno Regional de Arequipa, the Gerencia Regional de Salud de Arequipa (GRSA), the Pan American Health Organization (PAHO/OPS) and the Canadian International Development Agency (CIDA). This work was supported by National Institutes of Health grants NIH-NIAID R01AI101229 and R01HD075869.
Intrinsic data depth for Hermitian positive definite matrices

Joris Chau[ Corresponding author, [email protected], Institute of Statistics, Biostatistics, and Actuarial Sciences, Université catholique de Louvain, Voie du Roman Pays 20, B-1348, Louvain-la-Neuve, Belgium.], Hernando Ombao[ Department of Statistics, University of California at Irvine, Bren Hall 2206, Irvine, CA, 92697, United States. Department of Applied Mathematics and Computational Science, King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia.] and Rainer von Sachs[ Institute of Statistics, Biostatistics, and Actuarial Sciences, Université catholique de Louvain, Voie du Roman Pays 20, B-1348, Louvain-la-Neuve, Belgium.]

Nondegenerate covariance, correlation and spectral density matrices are necessarily symmetric or Hermitian and positive definite. The main contribution of this paper is the development of statistical data depths for collections of Hermitian positive definite matrices by exploiting the geometric structure of the space as a Riemannian manifold. The depth functions allow one to naturally characterize most central or outlying matrices, but also provide a practical framework for inference in the context of samples of positive definite matrices. First, the desired properties of an intrinsic data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally fast pointwise and integrated data depth functions that satisfy each of these requirements and investigate several robustness and efficiency aspects. As an application, we construct depth-based confidence regions for the intrinsic mean of a sample of positive definite matrices, which is applied to the exploratory analysis of a collection of covariance matrices associated with a multicenter research trial.

Keywords: Data depth, Hermitian positive definite matrices, Riemannian manifold, Confidence regions, Affine-invariant metric, Covariance matrices.

§ INTRODUCTION

In numerous applications in multivariate statistics, we are interested not only in the first-order behavior (mean) of a sample of random vectors, but also in the second-order behavior or variability of the sample. In fact, our primary interest is often precisely the analysis of covariance or correlation structures between components of the random vectors.
In many areas of statistical research, such as neuroscience, biomedical science, environmental science, demographics or finance, it is increasingly common to encounter covariance or correlation matrices across a large number of temporal or spatial locations, or across a large number of replicated subjects or trials in an experiment. In this work, our aim is to develop data exploration and inference tools for large collections or samples of such matrices.The data objects of interest, nondegenerate covariance or correlation matrices, are necessarily elements of the space of Hermitian positive definite (HPD) matrices, which includes the space of symmetric positive definite (SPD) matrices in the real-valued case. The space of HPD matrices, although very well-structured, is inherently non-Euclidean and standard Euclidean-based statistical procedures (e.g., regression, clustering or inference procedures) may be unstable or break down due to the geometric constraints of the space. For this reason, it is necessary to generalize statistical procedures for data in the space of symmetric or Hermitian PD matrices, taking into account the non-Euclidean geometry of the space. Several recent works addressing this issue include: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> <cit.>, <cit.> and <cit.> among others. The main contribution of this paper is the generalization of notions of data depth for samples of HPD matrices to provide a center-to-outward ordering of positive definite matrix-valued objects. Data depth is a useful tool for data exploration to identify most central or outlying data observations (as in <cit.> or <cit.> in a Euclidean context); or as a means of inference, by way of rank-based hypothesis testing (as in <cit.>, <cit.>, or <cit.>), classification (see e.g., <cit.>) or the construction of confidence regions (see e.g., <cit.>) among other applications. Although many different depth functions have been proposed and studied in the literature over the years, most data depth functions are constructed in the first place for vector-valued observations in the Euclidean space ℝ^d. Exceptions include <cit.>, where the authors consider depth functions for directional data on circles or spheres; <cit.>, on projection depth for tensor objects; or the recent work, <cit.>, on halfspace depths for scatter, concentration and shape matrices. For an overview of various Euclidean data depth functions and their specific properties, we refer the reader to e.g., <cit.>, <cit.>, or <cit.>. The space of (d × d)-dimensional Hermitian (not necessarily PD) matrices (ℍ_d × d, +, ·_S) together with matrix addition and matrix scalar multiplication is a real vector space, and each Hermitian matrix bijectively maps to a vector in ℝ^d^2 by expanding the matrix with respect to some basis. To calculate data depth values for a sample of Hermitian matrices, it suffices to apply any ordinary Euclidean data depth function to the basis component vectors of the Hermitian matrices, provided that the computed depth values do not depend on the chosen basis. In contrast, due to the nonlinear positive definite constraints, the space of HPD matrices (ℙ_d × d, +, ·_S) is not a vector space. Moreover, the cone of HPD matrices embedded in a Euclidean space endowed with the Euclidean metric is not a complete metric space. As a consequence, Euclidean data depth applied to a sample of HPD matrices violates the basic properties of a proper depth function.
To illustrate, according to <cit.>, a proper depth function should be monotonically non-increasing moving outwards from a well-defined center. Moving away from a central point along a straight line is not always well-defined in the cone of HPD matrices, as the boundary of the space lies at a finite distance. Also, pointwise or uniform continuity properties of the data depth functions fail to hold due to the incompleteness of the metric space. Instead of embedding the space of HPD matrices in an ambient Euclidean space, we exploit the geometric structure of the space of HPD matrices as a curved Riemannian manifold equipped with the affine-invariant (<cit.>) –also natural invariant (<cit.>), canonical (<cit.>), trace (<cit.>), Rao-Fisher (<cit.>)– Riemannian metric, or simply the Riemannian metric (<cit.>, <cit.>). The affine-invariant metric plays an important role in estimation problems in the space of symmetric or Hermitian PD matrices for several reasons: (i) the space of HPD matrices equipped with the affine-invariant metric is a complete metric space, (ii) the affine-invariant metric is invariant under congruence transformation by any invertible matrix, see Section <ref>, and (iii) there is no swelling effect as with the Euclidean metric, where interpolating two HPD matrices may yield a matrix with a determinant larger than either of the original matrices, which may lead to computational instability (<cit.>). The first property allows us to construct proper data depth functions in the space of HPD matrices satisfying all of the intrinsic versions of the axiomatic properties in <cit.>. The second property is important to ensure that the depth functions are general linear congruence invariant, which in practice means that the depth values do not non-trivially depend on the chosen coordinate system of the data. In <cit.>, the authors list several additional metrics for estimation problems in the space of HPD matrices, such as the Log-Euclidean metric, also studied in <cit.>. The Log-Euclidean metric transforms the space of HPD matrices into a complete metric space and is invariant under congruence transformations by the unitary group, but not by the general linear group, as is true for the affine-invariant metric.In the preliminary Section <ref>, we introduce the necessary geometric tools to develop data depths acting directly on the space of HPD matrices as a geodesically complete manifold. In Section <ref>, we present the desired properties an intrinsic depth function should satisfy, and we propose two data depth functions that satisfy each of these requirements. In addition, we consider integrated depth functions that act on curves of HPD matrices, such as spectral density matrices. In Section <ref>, we compare the two depth functions in terms of robustness and efficiency aspects. In Section <ref>, as an application of the depth functions, we construct depth-based confidence regions for the intrinsic mean of a sample of HPD matrices, and in Section <ref> we apply the intrinsic depth functions to explore a collection of covariance matrices from a multicenter clinical trial. The technical proofs and additional figures can be found in the supplementary material. The accompanying R-code, containing the necessary tools to compute the intrinsic data depths and to perform rank-based hypothesis testing for samples of HPD matrices, is publicly available as an R-package on CRAN (<cit.>).
§ PRELIMINARIES§.§ Geometry of HPD matricesIn order to develop data depths for observations in the space of HPD matrices, we study the space as a Riemannian manifold as in <cit.>, <cit.>, or <cit.> among others. Denote ℳ := ℙ_d × d for the space of (d × d) HPD matrices. ℳ is an open subset of the space of (d × d) Hermitian matrices ℋ := ℍ_d × d, and as such a smooth manifold. The tangent space T_p(ℳ) at a point (i.e., a matrix) p ∈ℳ can be identified with the Hermitian space ℋ, and the Frobenius inner product on ℋ induces the affine-invariant Riemannian metric g_R on the manifold ℳ given by the smooth family of inner products:⟨ h_1, h_2 ⟩_p = Tr((p^-1/2∗ h_1)(p^-1/2∗ h_2)), ∀ p ∈ℳ,with h_1,h_2 ∈ T_p(ℳ). Here and throughout this paper, y^1/2 always denotes the Hermitian square root matrix of y ∈ℳ, and we write y ∗ x := y^* x y for matrix congruence transformation, where ^* denotes the conjugate transpose of a matrix. The Riemannian distance δ_R on ℳ derived from the affine-invariant Riemannian metric is given by:δ_R(p_1,p_2) = ‖ Log(p_1^-1/2∗ p_2) ‖_F,where ‖·‖_F denotes the matrix Frobenius norm and Log(·) is the matrix logarithm. Denote the general linear group by GL(d, ℂ) := { a ∈ℂ^d × d : det(a) ≠ 0 }. The mapping x ↦ a ∗ x is an isometry for each invertible matrix a ∈ GL(d,ℂ), i.e., it is distance-preserving:δ_R(p_1, p_2) = δ_R(a ∗ p_1, a ∗ p_2), ∀ a ∈ GL(d, ℂ).By <cit.>, the Riemannian manifold (ℳ, g_R) is geodesically complete. By the Hopf-Rinow Theorem this implies that there exists a unique geodesic segment joining any two points p_1,p_2 ∈ℳ and every geodesic can be extended indefinitely. The Hopf-Rinow Theorem also implies that for every p ∈ℳ the exponential map Exp_p and the logarithmic (i.e., inverse exponential) map Log_p are global diffeomorphisms with domains T_p(ℳ) and ℳ respectively. By (<cit.>), the exponential map Exp_p: T_p(ℳ) →ℳ and logarithmic map Log_p : ℳ→ T_p(ℳ) are given by,Exp_p(h) = p^1/2∗ Exp(p^-1/2∗ h), Log_p(q) = p^1/2∗ Log(p^-1/2∗ q),where Exp(·) denotes the matrix exponential. The Riemannian distance may now also be expressed in terms of the logarithmic map as:δ_R(p_1, p_2) = ‖ Log_p_1(p_2) ‖_p_1 = ‖ Log_p_2(p_1) ‖_p_2, ∀ p_1, p_2 ∈ℳ,where throughout this paper ‖ h ‖_p := ⟨ h, h ⟩_p^1/2 denotes the norm of h ∈ T_p(ℳ) induced by the affine-invariant metric.As there exists a unique geodesic curve connecting any two points p_1, p_2 ∈ℳ, geodesically convex sets are well-defined. A subset 𝒦⊆ℳ is said to be convex or geodesically convex if for each pair of points p_1,p_2 ∈𝒦, the geodesic segment [p_1, p_2] is contained entirely in 𝒦. If 𝒮⊆ℳ, then the convex hull of 𝒮, denoted by conv(𝒮), is the smallest convex set containing 𝒮. This set is conveniently expressed as,conv(𝒮) := { p ∈ℳ : p = Exp_p(∫_𝒮 Log_p(x) w(x) λ(dx) ), w: 𝒮→ [0,1], ∫_𝒮 w(x) λ(dx) = 1 }, where λ is the Lebesgue measure on the finite-dimensional metric space (ℳ, δ_R) and w is a measurable function. For more details on the construction of (approximate) convex hulls on the manifold ℳ, we refer to <cit.>. §.§ Probability distributions and random variablesA random variable X: Ω→ℳ on the Riemannian manifold (ℳ, g_R) is a measurable function from some probability space (Ω, 𝒜, ν) to the measurable space (ℳ, ℬ(ℳ)), where ℬ(ℳ) is the Borel algebra, i.e., the smallest σ-algebra containing all open sets in (ℳ, g_R). In the following, we always work directly with the induced probability measure on ℳ, ν(B) = ν({ω∈Ω: X(ω) ∈ B }).
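For concreteness, the following minimal R sketch implements the geometric primitives of the previous subsection (affine-invariant distance, exponential and logarithmic maps) through the eigendecomposition of Hermitian matrices. It is an illustration only, under the notation above, and not the optimized implementation of the accompanying R-package.

# Apply a scalar function f to the eigenvalues of a Hermitian matrix p,
# i.e., compute f(p) = U f(D) U^* from the eigendecomposition p = U D U^*.
h_fun <- function(p, f) {
  e <- eigen(p, symmetric = TRUE)
  e$vectors %*% diag(f(e$values)) %*% Conj(t(e$vectors))
}

# Affine-invariant Riemannian distance: || Log(p1^(-1/2) p2 p1^(-1/2)) ||_F.
delta_R <- function(p1, p2) {
  isqrt <- h_fun(p1, function(v) 1 / sqrt(v))
  sqrt(sum(Mod(h_fun(isqrt %*% p2 %*% isqrt, log))^2))
}

# Exponential map Exp_p(h) = p^(1/2) Exp(p^(-1/2) h p^(-1/2)) p^(1/2).
exp_p <- function(p, h) {
  s <- h_fun(p, sqrt); isqrt <- h_fun(p, function(v) 1 / sqrt(v))
  s %*% h_fun(isqrt %*% h %*% isqrt, exp) %*% s
}

# Logarithmic map Log_p(q) = p^(1/2) Log(p^(-1/2) q p^(-1/2)) p^(1/2).
log_p <- function(p, q) {
  s <- h_fun(p, sqrt); isqrt <- h_fun(p, function(v) 1 / sqrt(v))
  s %*% h_fun(isqrt %*% q %*% isqrt, log) %*% s
}

delta_R(diag(2), matrix(c(2, 1, 1, 2), 2))  # distance between two SPD matrices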
By P(ℳ), we denote the set of all probability measures on (ℳ, ℬ(ℳ)) and P_p(ℳ) denotes the subset of probability measures in P(ℳ) that have finite moments of order p with respect to the Riemannian distance, i.e., the L^p-Wasserstein space <cit.>:P_p(ℳ) := { ν∈ P(ℳ) : ∃ y_0 ∈ℳ, s.t. ∫_ℳδ_R(y_0, x)^p ν(dx) < ∞}.Note that if ∫_ℳδ_R(y_0, x)^p ν(dx) < ∞ for some y_0 ∈ℳ and 1 ≤ p < ∞, this is true for any y ∈ℳ. This follows by the triangle inequality and the fact that δ_R(p_1, p_2) < ∞ for any p_1,p_2 ∈ℳ, as ∫_ℳδ_R(y, x)^p ν(dx) ≤ 2^p ( δ_R(y, y_0)^p + ∫_ℳδ_R(y_0, x)^p ν(dx) ) < ∞. For a sequence of probability measures (ν_n)_n ∈ℕ in P(ℳ), ν_n w→ν denotes weak convergence to the probability measure ν in the usual sense, i.e., ∫_ℳϕ(x) ν_n(dx) →∫_ℳϕ(x) ν(dx) for every continuous and bounded function ϕ: ℳ→ℝ, and a sequence (ν_n)_n ∈ℕ is said to be uniformly integrable if lim_K →∞sup_n ∈ℕ∫_ℳδ_R(y_0, x) 1_{δ_R(y_0, x) > K } ν_n(dx) = 0 for some y_0 ∈ℳ. Note that if (ν_n)_n ∈ℕ is uniformly integrable for some y_0 ∈ℳ, then the sequence is uniformly integrable for any y ∈ℳ. Finally, we use the notation conv(ν) := conv(supp(ν)) for the convex hull of the support of the measure ν on ℳ, and rint(conv(ν)) and r∂(conv(ν)) for its relative interior and relative boundary. §.§ Measures of centrality Intrinsic mean. To characterize the center of a random variable X with probability measure ν, one important measure of centrality is the Karcher or Fréchet mean, which is also referred to as the intrinsic mean as it is intrinsic to the Riemannian distance measure on the manifold. The intrinsic mean turns out to be the point of maximum depth in the intrinsic zonoid depth introduced in Section <ref>. The set of intrinsic means consists of the points that minimize the second moment with respect to the Riemannian distance, μ = 𝔼_ν[X] := argmin_y ∈supp(ν)∫_ℳδ_R(y,x)^2 ν(dx).If ν∈ P_2(ℳ), then at least one intrinsic mean exists as the above expectation is finite for y ∈ℳ. Moreover, since the manifold ℳ is a geodesically complete manifold of non-positive curvature (see <cit.> or <cit.>), by <cit.> the intrinsic mean μ is unique for any distribution ν∈ P_2(ℳ). By <cit.>, the intrinsic mean is also represented by the point μ∈ℳ that satisfies,E_ν[Log_μ(X)] = 0,where 0 is the zero matrix. The sample intrinsic mean of a set of manifold-valued observations minimizes a sum of squared Riemannian distances and can be computed efficiently through a gradient descent algorithm as in <cit.>.Intrinsic median. A second measure of centrality of primary interest is the intrinsic median as in <cit.>, which is the point of maximum depth in the geodesic distance depth defined in Section <ref>. The set of intrinsic medians minimizes the first moment with respect to the Riemannian distance,m = GM_ν(X) := argmin_y ∈supp(ν)∫_ℳδ_R(y, x) dν(x).On (ℳ, δ_R), a geodesically complete manifold with non-positive curvature, the intrinsic median exists and is unique for any distribution ν∈ P_1(ℳ). This follows by the proof of <cit.> combined with an application of Leibniz's integral rule. Furthermore, the intrinsic median is uniquely characterized by the point m ∈ℳ that satisfies, E_ν[ Log_m(X)/δ_R(m, X) ] = 0. If the distribution ν of a random variable X is centrally symmetric around μ∈ℳ in the sense that Log_μ(X) d= -Log_μ(X), then the intrinsic mean and median coincide and are equal to μ. Here, equality in distribution (d=) is read as equality in terms of the joint distribution of all matrix components.
The claim for the intrinsic mean follows by the fact that E_ν[Log_μ(X)] = 0, which implies that μ is the intrinsic mean of the random variable X. For the intrinsic median, if X is centrally symmetric around μ, then X is also angularly symmetric around μ in the sense that Log_μ(X)/‖Log_μ(X)‖_μ d= -Log_μ(X)/‖Log_μ(X)‖_μ. Substituting ‖Log_μ(X)‖_μ = δ_R(μ, X), we observe that E_ν[Log_μ(X)/δ_R(μ, X)] = 0, which implies that μ is also the intrinsic median of the random variable X. § DATA DEPTH FOR HPD MATRICES Before introducing the manifold data depth functions, we present the desired properties a proper intrinsic data depth function –acting directly on the space of HPD matrices– should satisfy. These requirements are the natural intrinsic generalizations of the properties in <cit.> for depth functions acting on vectors in a Euclidean space ℝ^d. We also consider integrated analogs for depth functions acting on curves of HPD matrices y(t) ∈ℳ with t ∈ℐ⊂ℝ, such as spectral density matrices in the Fourier domain.§.§ Depth properties Below, we denote D(ν, y) for the depth of a matrix y ∈ℳ with respect to a distribution ν∈ P(ℳ); or iD(ν, y) for the integrated depth of a matrix curve y := (y(t))_t ∈ℐ with respect to a curve of marginal measures ν := (ν(t))_t ∈ℐ, such that ν(t) ∈ P(ℳ) for each t ∈ℐ. If a nonnegative bounded function D(·, ·) or iD(·, ·) satisfies the pointwise (resp. integrated) properties P.1 to P.4, we say that it is a proper data depth function on the Riemannian manifold (ℳ, g_R).P.1 (Congruence invariance) The depth function should be invariant under matrix congruence transformation of the form x ↦ a ∗ x, with a ∈ GL(d, ℂ). That is, for each a ∈ GL(d, ℂ),D(ν, y) = D(ν_a, a ∗ y), ∀ y ∈ℳ, where ν_a is the distribution of the transformed random variable a ∗ X, such that X is distributed according to ν. Generalizing this property for an integrated depth function iD(ν, y), we require that the same property holds pointwise for each t ∈ℐ. In this case, a := (a(t))_t ∈ℐ is a curve of invertible matrices, with a(t) ∈ GL(d, ℂ) for each t ∈ℐ.In a standard Euclidean context, for a depth function acting on vectors in the Euclidean space ℝ^d, it is desirable that the depth is affine-invariant D(ν, y) = D(ν_a,b, ay + b) for each y ∈ℝ^d, where ν_a,b is the distribution of the random vector aX + b, with a ∈ GL(d, ℝ), b ∈ℝ^d and X distributed according to ν. In the current setup, we are concerned with covariance or correlation matrices, corresponding to the second-order behavior of a random vector. For a random vector X with covariance matrix Σ, the covariance matrix of the affine transformation aX + b is given by a^T ∗Σ = a Σ a^T. A natural requirement for the depth functions acting on symmetric or Hermitian PD matrices is therefore invariance under congruence transformations of the data. Another way to view this is that a depth function acting on the covariance matrix of a data vector X should be invariant under a change of basis in the data space of X. P.2 (Maximality at center) The depth function should attain its maximum value, i.e., deepest point, at a well-defined unique center of the distribution, such as the intrinsic mean or median, which are characterized as the points of central and angular symmetry respectively.
Let μ∈ℳ be a unique central point of the distribution ν, then,D(ν, μ) = sup_y ∈ℳ D(ν, y).Similarly, for an integrated depth function, the maximum value should be attained at a well-defined unique central curve μ(t) with t ∈ℐ, such as the curve of intrinsic means or medians.P.3 (Monotonicity relative to center) As y ∈ℳ moves away from the deepest point μ along a geodesic curve emanating from μ, the depth of the point y with respect to the distribution ν should be monotonically non-increasing. Let Exp_μ(th), t ≥ 0, be the geodesic emanating from μ with unit tangent vector h. Then,D(ν, Exp_μ(t_1 h)) ≥ D(ν, Exp_μ(t_2 h)), ∀ 0 ≤ t_1 ≤ t_2.For an integrated depth function, let s_1(t), s_2(t) be real-valued curves over ℐ, such that 0 ≤ s_1(t) ≤ s_2(t) for each t ∈ℐ. Denote y_1(t) := Exp_μ(t)(s_1(t) h(t)) and y_2(t) := Exp_μ(t)(s_2(t) h(t)), where h(t) ∈ T_μ(t)(ℳ) is a curve of unit tangent vectors. Then, iD(ν, y_1) ≥ iD(ν, y_2).P.4 (Vanishing at infinity) The depth of a point y ∈ℳ should approach zero as the point y converges to a singular matrix, i.e., a matrix with zero or infinite eigenvalues, lim_M →∞ sup_‖Log(y)‖_F ≥ M D(ν, y) = 0.Similarly, for an integrated depth function, if the curve y(t) converges to a curve of singular matrices for each t ∈ℐ, then the integrated depth should approach zero. Below, we give two additional continuity properties, which, although not strictly required, are nonetheless useful to derive asymptotic results in subsequent applications, such as rank-based hypothesis testing or the construction of depth-based confidence sets as in Section <ref>. P.5 (Continuity in y) Let (y_n)_n ∈ℕ be a convergent sequence with y_n ∈ℳ for each n ∈ℕ, such that δ_R(y_n, y) → 0. Then the depth function is continuous in y in the sense that,lim_n →∞ D(ν, y_n) = D(ν, y).P.6 (Uniform continuity in ν) The depth function is uniformly continuous in terms of the probability measure ν in the sense that, if (ν_n)_n ∈ℕ is a uniformly integrable sequence of probability measures such that ν_n w→ν, then, sup_y ∈ℳ |D(ν_n, y) - D(ν, y)| → 0, as n →∞. §.§ Intrinsic zonoid depth As geodesic convex hulls are well-defined on the Riemannian manifold (ℳ, g_R), there exist natural manifold generalizations of the simplicial depth or convex hull peeling depth (<cit.>) for Euclidean vectors. However, the simplicial depth requires the computation of possibly many convex hulls, which quickly becomes computationally infeasible, especially for higher-dimensional matrices. Instead, we propose a straightforward manifold generalization of another depth measure based on trimmed convex depth regions, the zonoid depth (e.g., <cit.>). The intrinsic manifold zonoid depth can be computed with the same tools as the standard zonoid depth for Euclidean vectors and its computation remains efficient, also for higher-dimensional HPD matrices.In a Euclidean context, let ζ be a probability measure on (ℝ^d, ℬ^d) with finite first moment, then the zonoid α-trimmed region, with 0 < α≤ 1, is defined as the set,D_α(ζ) := { ∫_ℝ^d x w(x) dζ(x) | w:ℝ^d →[0, 1/α] measurable, s.t. ∫_ℝ^d w(x) dζ(x) = 1 }.If α = 0, we set D_0(ζ) = ℝ^d. By <cit.>, D_α(ζ) is convex and monotone decreasing in α, creating a nested sequence of convex sets for decreasing values α_1 ≥…≥α_n. If α = 1, D_α(ζ) consists of the single point E_ζ[X], the Euclidean mean of the distribution ζ.
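For an empirical measure ζ_n on sample points x_1,…,x_n ∈ℝ^d, the largest α with y ∈ D_α(ζ_n) can be found by linear programming: writing λ_i = w(x_i)/n, membership requires ∑_i λ_i = 1, ∑_i λ_i x_i = y and λ_i ≤ 1/(nα), so that the depth below equals 1/(n t_*), with t_* the minimal attainable max_i λ_i. The following minimal R sketch implements this linear program, assuming the CRAN package lpSolve is installed; the helper zonoid_depth and its interface are illustrative and not part of the paper's software.

# Euclidean zonoid depth of y w.r.t. the empirical measure of the rows
# of X, via the LP: minimize t s.t. sum_i lambda_i x_i = y,
# sum_i lambda_i = 1, 0 <= lambda_i <= t; depth = 1 / (n * t_opt).
zonoid_depth <- function(X, y) {
  n <- nrow(X); d <- ncol(X)
  obj <- c(rep(0, n), 1)                  # variables (lambda_1..lambda_n, t)
  A <- rbind(cbind(t(X), 0),              # d rows: sum_i lambda_i x_i = y
             c(rep(1, n), 0),             # sum_i lambda_i = 1
             cbind(diag(n), -1))          # lambda_i - t <= 0
  dir <- c(rep("=", d + 1), rep("<=", n))
  rhs <- c(y, 1, rep(0, n))
  sol <- lpSolve::lp("min", obj, A, dir, rhs)
  if (sol$status != 0) return(0)          # y outside the convex hull
  1 / (n * sol$solution[n + 1])
}

The intrinsic zonoid depth introduced below then reduces to applying this routine to the d^2-dimensional basis component vectors of Log_y(x_i), with y itself mapped to the origin.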
The Euclidean zonoid depth of a point y ∈ℝ^d with respect to a distribution ζ is characterized by the smallest α-trimmed region still containing y,ZD_ℝ^d(ζ, y) := sup{ α : y ∈ D_α(ζ) }.The zonoid data depth is extended to the Riemannian manifold as follows.(Intrinsic zonoid depth)Let ν∈ P_2(ℳ) and let ζ_y be the probability measure on (ℝ^d^2, ℬ(ℝ^d^2)) of the random variable Log_y(X) ∈ T_y(ℳ) ≅ℝ^d^2 as a d^2-dimensional random real basis component vector, where X has probability measure ν. The intrinsic zonoid depth of a point y ∈ℳ with respect to the distribution ν is defined as:ZD_ℳ(ν, y) := sup{ α : 0⃗∈ D_α(ζ_y) },where 0⃗ is a d^2-dimensional zero vector, and D_α(ζ_y) is the Euclidean zonoid α-trimmed region of the distribution of the normal coordinate vector ζ_y on (ℝ^d^2, ℬ(ℝ^d^2)). Equivalently, the intrinsic zonoid depth can be written as,ZD_ℳ(ν, y) = sup{ α : y ∈ D^ℳ_α(ν) },where D^ℳ_α(ν) is the intrinsic zonoid α-trimmed region defined as,D_α^ℳ(ν) = { y ∈ℳ | y = Exp_y(∫_ℳ Log_y(x) w(x) ν(dx) ), w:ℳ→ [0, 1/α], ∫_ℳ w(x) ν(dx) = 1 }, with w a measurable function.Computation of the intrinsic zonoid depth is straightforward via the definition ZD_ℳ(ν, y) = ZD_ℝ^d^2(ζ_y, 0⃗) and can be calculated directly by the Euclidean zonoid depth as in <cit.>. Note that if (e_1,…,e_d^2) is an orthonormal basis of the vector space (ℋ, ⟨·, ·⟩_F), then an orthonormal basis of (T_y(ℳ), ⟨·, ·⟩_y) is simply (y^1/2∗ e_1,…, y^1/2∗ e_d^2). In fact, the basis components of Log_y(x) ∈ T_y(ℳ) can be computed directly using only an orthonormal basis of (ℋ, ⟨·, ·⟩_F), since ⟨Log_y(x), y^1/2∗ e_i ⟩_y = ⟨Log(y^-1/2∗ x), e_i ⟩_F.The intrinsic zonoid depth is a proper data depth function in the sense of Section <ref>, satisfying properties P.1–P.4 for distributions in P_2(ℳ). The unique point of maximum depth coincides with the intrinsic mean of the distribution.In order to show that the continuity properties P.5 and P.6 also hold for the intrinsic zonoid depth, we need the following lemma.Let ν∈ P_2(ℳ). Then, ⋃_0 < α≤ 1 D_α^ℳ(ν) = conv(ν). In particular, for each y ∈conv(ν), ZD_ℳ(ν, y) > 0 by definition of the intrinsic zonoid depth.The intrinsic zonoid depth is continuous in y as in P.5 for y ∈conv(ν) and ν∈ P_2(ℳ), i.e., if δ_R(y_n, y) → 0 with y_n ∈ℳ for all n ∈ℕ, then,lim_n →∞ ZD_ℳ(ν, y_n) = ZD_ℳ(ν, y).The intrinsic zonoid depth is uniformly continuous in ν as in P.6 for y ∈rint(conv(ν)) and (ν_n)_n ∈ℕ in P_2(ℳ) uniformly integrable. If ν_n w→ν, then,sup_y ∈rint(conv(ν)) | ZD_ℳ(ν_n, y) - ZD_ℳ(ν, y)| → 0, as n →∞.In Figure <ref>, we display several 100(1-α)% central intrinsic zonoid depth regions for generated i.i.d. samples of (2 × 2)-dimensional SPD matrices x_1,…,x_500 from a distribution ν_μ∈ P_2(ℳ) with intrinsic mean μ. Denoting ν_500 for the empirical distribution of x_1,…,x_500, the 100(1-α)% central depth-region DR_1-α is given by the set of SPD matrices: DR_1-α = { y ∈Re(ℙ_2 × 2) : D(ν_500, y) ≥β_*, β_* := min_β∈ (0,1)[ 1/500∑_i=1^500 1_{ D(ν_500, x_i) ≥β}≥ 1-α] },In the left-hand image, data matrices are sampled from a Riemannian log-normal distribution ν_μ as in e.g., <cit.>, with intrinsic mean μ equal to the identity matrix. That is, X_i d= Exp(∑_k Z_ki e^k), with (Z_ki)_k iid∼ N(0, 1/2), where (e^1,…, e^4) ∈ℍ_2 × 2^4 is an orthonormal basis of (ℍ_2 × 2, ⟨·, ·⟩_F).
In the right-hand image, ν_μ is a rescaled Wishart distribution with intrinsic mean μ = ([0.5 0.25; 0.25 0.5 ]), such that X_i d= e^-c(2,8) W, with W ∼ W_2^c(8,μ/8) a complex Wishart distribution with 8 degrees of freedom and c(d,B) = -log(B) + 1/d ∑_i=1^d ψ(B - (d - i)) the intrinsic bias-correction in <cit.>. The (x,y,z)-axes in Figure <ref> correspond to the three independent components in the symmetric matrix ( [ x z; z y ]). §.§ Integrated intrinsic zonoid depth A straightforward generalization of the pointwise intrinsic zonoid depth in Definition <ref> to compute the depth of a curve y(t) ∈ℳ with respect to a collection of marginal measures ν(t) for t ∈ℐ⊂ℝ is to consider the integrated intrinsic zonoid depth given by,iZD_ℳ(ν, y) := ∫_ℐ ZD_ℳ(ν(t), y(t)) dt = ∫_ℐ sup{ α : 0⃗∈ D_α(ζ_y(t)) } dt,where ζ_y(t) is the probability measure of the components of the random variable Log_y(t)(X(t)) ∈ T_y(t)(ℳ) ≅ℝ^d^2, such that X(t) has probability measure ν(t). This is similar to the construction of the modified band depth (MBD) in a functional data context, where the pointwise Euclidean simplicial depths y(t) are integrated over a functional domain t ∈ℐ (<cit.> or <cit.>). The integrated versions of the properties P.1 to P.6 continue to hold for the integrated intrinsic zonoid depth and are straightforward generalizations of their pointwise analogs. The integrated intrinsic zonoid depth is a proper integrated depth function in the sense of Section <ref>, satisfying the integrated versions of properties P.1–P.4 for collections of marginal distributions ν(t) ∈ P_2(ℳ) for t ∈ℐ. The unique curve of maximum depth coincides with the curve of pointwise intrinsic means of the marginal distributions.Let y(t) ∈conv(ν(t)), ν(t) ∈ P_2(ℳ) and y_n(t) ∈ℳ for each t ∈ℐ, such that y_n(t) → y(t) uniformly in t, i.e., sup_t ∈ℐ δ_R(y_n(t), y(t)) → 0. Then the integrated manifold zonoid depth is continuous in y as in P.5 in the sense that,lim_n →∞ iZD_ℳ(ν, y_n) = iZD_ℳ(ν, y). If y(t) ∈rint(conv(ν)), (ν_n(t))_n ∈ℕ in P_2(ℳ) is a uniformly integrable sequence of measures uniform in t, and ν_n(t) w→ν(t) uniformly in t. Then, sup_y ∈rint(conv(ν)) | iZD_ℳ(ν_n, y) - iZD_ℳ(ν, y)| → 0, as n →∞.Here, y ∈rint(conv(ν)) means that y(t) ∈rint(conv(ν(t))) for each t ∈ℐ, and the uniform weak convergence ν_n(t) w→ν(t) is read as sup_t ∈ℐ |𝔼_ν_n(t)[ϕ(X)] - 𝔼_ν(t)[ϕ(X)]| → 0 for every continuous and bounded function ϕ: ℳ→ℝ.§.§ Geodesic distance depth As a second notion of data depth on the geodesically complete manifold (ℳ, g_R), we consider the geodesic distance depth, the natural analog on the metric space (ℳ, δ_R) of the arc distance depth in <cit.> for data observations on circles and spheres. The geodesic distance depth is straightforward to calculate, also for high-dimensional matrices, as the only required operation is the computation of Riemannian distances between HPD matrices. (Geodesic distance depth)Let ν∈ P_1(ℳ), then the geodesic distance depth of a point y ∈ℳ with respect to the distribution ν is defined as:GDD(ν, y) = exp( -∫_ℳδ_R(y, x) ν(dx) ). The geodesic distance depth is a proper data depth function in the sense of Section <ref>, satisfying P.1–P.4 for distributions in P_1(ℳ). The unique point of maximum depth coincides with the intrinsic median of the distribution.The geodesic distance depth is continuous in y as in P.5 for y ∈cl(ℳ), the closure of ℳ, and ν∈ P_1(ℳ).
That is, if δ_R(y_n, y) → 0 with y_n ∈ℳ for all n ∈ℕ, then,lim_n →∞ GDD(ν, y_n) = GDD(ν, y).The geodesic distance depth is uniformly continuous in ν as in P.6 for y ∈ℳ and (ν_n)_n ∈ℕ uniformly integrable. If ν_n w→ν, then, sup_y ∈ℳ | GDD(ν_n, y) - GDD(ν, y)| → 0, as n →∞. In order to compute the empirical depth GDD(ν_n, y) of each observation in a sample y ∈{x_1,…,x_n} with respect to the empirical distribution ν_n of the sample {x_1,…,x_n}, it suffices to compute the (n × n)-dimensional distance matrix with (i,j)-th entry δ_R(x_i, x_j). This matrix is fully determined by n(n-1)/2 components, as the diagonal entries are zero and δ_R(x_i, x_j) = δ_R(x_j, x_i). In particular, in online applications where the depths need to be updated each time a new observation enters the database, we simply add one extra column and row to the distance matrix and update the depth values. A third notion of data depth on the Riemannian manifold (ℳ, g_R), closely related to the geodesic distance depth, is the intrinsic spatial depth. This is the natural manifold generalization of the spatial depth in <cit.> or <cit.>. For a distribution ν∈ P_1(ℳ) and a point y ∈ℳ, the intrinsic spatial depth is given by:SD(ν, y) = 1 - ‖∫_ℳ Log_y(x)/δ_R(y,x) ν(dx) ‖_y = 1 - ‖∫_ℳ Log(y^-1/2∗ x)/δ_R(y,x) ν(dx) ‖_F.The intrinsic spatial depth attains its maximum value SD(ν, m) = 1 at the intrinsic median, since E_ν[Log_m(X)/δ_R(m,X)] = 0 by definition of the intrinsic median, and the depth is lower bounded by zero, which is a direct consequence of the triangle inequality combined with the fact that ‖Log_y(x)‖_y = δ_R(y, x). The intrinsic spatial depth is closely associated with the geodesic distance depth in the sense that it is based on the gradient of the distance function, i.e., the gradient of f_x(y) = δ_R(y,x) for fixed x is given by grad f_x(y) = Log_y(x)/δ_R(y,x), see <cit.>. §.§ Integrated geodesic distance depthIn order to generalize the pointwise geodesic distance depth to the depth of a curve y(t) ∈ℳ, with respect to a collection of marginal measures ν_t = ν(t) for t ∈ℐ⊂ℝ, we replace the pointwise expected distance in Definition <ref> by an integrated expected distance as:iGDD(ν, y) = exp( -∫_ℐ∫_ℳδ_R(y(t), x) ν_t(dx) dt ).The integrated versions of the properties P.1 to P.6 continue to hold for the integrated geodesic distance depth and are straightforward generalizations of their pointwise analogs as in the case of the integrated intrinsic zonoid depth.The integrated geodesic distance depth is a proper integrated depth function in the sense of Section <ref>, satisfying the integrated versions of properties P.1–P.4 for collections of marginal distributions ν(t) ∈ P_1(ℳ) for t ∈ℐ. The unique curve of maximum depth coincides with the curve of pointwise intrinsic medians of the marginal distributions.Let y(t) ∈cl(ℳ) and ν(t) ∈ P_1(ℳ) for each t ∈ℐ, such that y_n(t) → y(t) uniformly in t, i.e., sup_t ∈ℐ δ_R(y_n(t), y(t)) → 0. Then the integrated geodesic distance depth is continuous in y as in P.5 in the sense that, lim_n →∞ iGDD(ν, y_n) = iGDD(ν, y).If y(t) ∈ℳ, (ν_n(t))_n ∈ℕ in P_1(ℳ) is a uniformly integrable sequence of measures uniform in t, and ν_n(t) w→ν(t) uniformly in t. Then,sup_y ∈ℳ | iGDD(ν_n, y) - iGDD(ν, y)| → 0, as n →∞,where y ∈ℳ is read as y(t) ∈ℳ for each t ∈ℐ. § ASPECTS OF ROBUSTNESS AND EFFICIENCY Depth-median breakdown. An intuitive measure of robustness of the intrinsic depth functions is given by their breakdown points according to <cit.>.
In order to assess the robustness of the depth functions, a first step is to compute the breakdown point of the location estimator that maximizes the depth, i.e., the depth-median, as in <cit.> or <cit.>, which we explain as follows. Let X^(n) = {x_1,…,x_n}∈ℳ^n be an initial set of HPD observations and let Y^(m) = { y_1,…,y_m}∈ℳ^m be a set of contaminating HPD observations. Denote Z^(n,m) = X^(n)∪ Y^(m) and consider the –not necessarily in-sample– depth-median T_D(Z^(n,m)) = argmax_y ∈ℳ D(ν_n,m, y), with ν_n,m the empirical distribution of Z^(n,m). The breakdown point of the depth-median is the smallest fraction of arbitrarily large contaminating observations that breaks down the estimator:ϵ_1(X) = min{ m/(m + n) : sup_Y^(m) ‖Log(T_D(Z^(n,m)))‖_F = ∞ }.Note that ‖Log(x)‖_F = δ_R(x, Id), such that ‖Log(x)‖_F < ∞ for all x ∈ℳ, and ‖Log(x)‖_F = ∞ if x is a singular matrix lying on the boundary of the metric space (ℳ, δ_R). The breakdown point of the depth-median for the intrinsic zonoid depth is ϵ_1(X) = 1/(n+1) as the depth-median coincides with the sample intrinsic mean and it requires only a single large contaminating observation to make the sample intrinsic mean arbitrarily large. The intrinsic zonoid depth-median is therefore not robust against outlying observations in terms of the depth-median breakdown point, analogous to the Euclidean case, as discussed in <cit.>. For the geodesic distance depth, the depth-median coincides with the intrinsic median and the intrinsic median in a geodesically complete manifold is known to have maximum breakdown point ϵ_1(X) = 1/2, as shown in <cit.>. Simultaneous depth-rank breakdown. The above definition of the breakdown point gives us an intuitive measure of robustness for the depth-median. However, it does not tell us how robust the depth function is with respect to the depth-ranked observations in the sample itself. As a more informative robustness measure, we study the breakdown point simultaneously over all the depth-ranked observations in an initial sample of size n. Let us write z^(n,m)_[i] for the i-th center-to-outward order statistic (or i-th depth-ranked observation) with respect to a given depth measure. The simultaneous breakdown point is the smallest fraction of arbitrarily large contaminating observations that breaks down at least one of the first n depth-ranked observations:ϵ_2(X) = min{ m/(n + m) : max_i=1,…,n sup_Y^(m) ‖Log(z_[i]^(n,m))‖_F = ∞ }.For the intrinsic zonoid depth, if we break ties by assigning the same rank to each observation with equal depth, the simultaneous breakdown point is ϵ_2(X) = 1/(n+1). If we break ties by assigning increasing ranks based on increasing Frobenius norms ‖Log(z_i^(n,m))‖_F, then the simultaneous breakdown point is ϵ_2(X) = 2/(n+2). This is illustrated as follows. Let y_1 be a first contaminating observation with ‖Log(y_1)‖_F > N_M, such that ‖Log(Z̄^(n,1))‖_F > M, where Z̄^(n,1) denotes the intrinsic mean of the contaminated sample Z^(n,1). Assuming without loss of generality that ‖Log(x_i)‖_F ≪ N_M for each i=1,…,n, the contaminating observation y_1 will be assigned depth-rank n+1 and the first n depth-ranked observations do not break down. Let y_2 = Z̄^(n,1) be a second contaminating observation, then y_2 has maximum depth by Theorem <ref>, and thus z^(n,2)_[1] = y_2.
Since we can choose N_M > 0, such that ‖Log(y_2)‖_F > M for any M > 0, it follows that ϵ_2(X) = 2/(n+2).For the geodesic distance depth, the depth-ranked observations have maximum simultaneous breakdown point ϵ_2(X) = 1/2, equal to the median breakdown point ϵ_1(X).The above proposition asserts that if we observe a number of (large) contaminating observations m smaller than the initial sample size n, the geodesic distance depth will assign the contaminating observations to the ranks n+1,…,n+m. The depth-rankings with respect to the geodesic distance depth are therefore highly robust against arbitrarily large contaminating observations, in contrast to the intrinsic zonoid depth-rankings, as also illustrated in Figure <ref>. The above depth measures share the same robustness properties in terms of their depth-median and simultaneous depth-rank breakdown points. In general, this does not have to be the case. For instance, consider the simplicial or convex hull peeling depth on the real line (e.g., <cit.>), which are highly robust in terms of their depth-median breakdown point ϵ_1(X) = 1/2, as argued in <cit.> for the simplicial depth. In contrast, both data depths have simultaneous breakdown points ϵ_2(X) ≤ 2/(n+2), as two well-placed large contaminating observations y_1,y_2 ∈ℝ can ensure that ‖ z_[n]^(n,m)‖ > M for any M > 0.The definitions of the depth-median and simultaneous breakdown points for the integrated depth functions are straightforward generalizations of the pointwise definitions above and it is easily verified that the breakdown points for the integrated depth functions coincide with their pointwise analogs.Depth-median efficiency. The robustness of the depth functions may result in a loss of efficiency of the depth-median as an intrinsic location estimator on the Riemannian manifold (ℳ, g_R). Figure <ref> displays the relative efficiency of the geodesic distance depth-median μ̂_GDD (i.e., the intrinsic median), relative to the intrinsic zonoid depth-median μ̂_ZD (i.e., the intrinsic mean), in terms of the Riemannian mean squared error. That is,RE(μ̂_ZD, μ̂_GDD) = E_ν[δ_R(μ̂_GDD(X), μ)^2]/E_ν[δ_R(μ̂_ZD(X), μ)^2].The depth-medians are computed from simulated samples X = X_1,…,X_n iid∼ν^p_Id, where ν^p_Id∈ P_2(ℳ) is a centrally symmetric distribution, such that the intrinsic mean and median coincide and are equal to the identity matrix. In particular, X_i d= Exp( ∑_k Z_k e^k ), where (e^1,…, e^d^2) ∈ℍ_d × d^d^2 is an orthonormal basis of (ℍ_d × d, ⟨·, ·⟩_F), and (Z_k)_k are i.i.d. random variables from a p-generalized normal distribution (<cit.>), with mean zero and standard deviation σ_p = p^1/p√(Γ(3/p)/Γ(1/p)), such that σ_2 = 1. The family of p-generalized normal distributions (p-GNDs) allows us to generate tail behavior that is either heavier (p < 2) or lighter (p > 2) than that of the normal distribution. For p=2, the p-GND coincides with the normal distribution.
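For illustration, the following R sketch draws such a sample, restricted for brevity to real symmetric (2 × 2) matrices. The p-GND coordinates with density proportional to exp(-|x|^p/p), and hence standard deviation σ_p, are generated via a standard gamma representation; the helper names rgnd and rspd are ours and hypothetical.

# p-generalized normal draws with density proportional to exp(-|x|^p / p):
# |X|^p / p ~ Gamma(1/p, 1), with an independent random sign.
rgnd <- function(n, p) {
  s <- sample(c(-1, 1), n, replace = TRUE)
  s * (p * rgamma(n, shape = 1 / p))^(1 / p)
}

# Sample n SPD matrices X = Exp(sum_k Z_k e^k) with Z_k iid p-GND, using
# an orthonormal (Frobenius) basis of the 2x2 real symmetric matrices.
rspd <- function(n, p = 2) {
  e <- list(matrix(c(1, 0, 0, 0), 2), matrix(c(0, 0, 0, 1), 2),
            matrix(c(0, 1, 1, 0), 2) / sqrt(2))
  replicate(n, {
    z <- rgnd(3, p)
    h <- z[1] * e[[1]] + z[2] * e[[2]] + z[3] * e[[3]]
    ev <- eigen(h, symmetric = TRUE)      # matrix exponential Exp(h)
    ev$vectors %*% diag(exp(ev$values)) %*% t(ev$vectors)
  }, simplify = FALSE)
}

x <- rspd(500, p = 1.5)  # heavier-tailed sample centered at the identity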
As shown in Figure <ref>, for random variables generated from a light-tailed p-GND (p = 5 and p = 2, and in particular for small dimensions d), the intrinsic zonoid depth regions are better centered around the population mean of the generating distributions than the geodesic distance depth regions; for a heavier-tailed p-GND (p = 1.5), the efficiency gain of the intrinsic zonoid depth-median relative to the geodesic distance depth-median diminishes.§.§ Computational effortTo demonstrate the computational effort of the depth calculations in practice, Figure <ref> displays median computation times in milliseconds (single-core Intel Xeon E5-2650, 2.40GHz) of the intrinsic depths of a single (d × d)-dimensional HPD matrix with respect to a sample of n HPD matrices, calculated with the corresponding function in the accompanying R-package (including the intrinsic spatial depth computation times). On the left, the sample size is fixed at n = 500, and on the right the matrix dimension is fixed at d = 6. The displayed times are the median computation times of 100 depth calculations for 100 simulated samples, i.e., a total of 10^4 depth calculations per scenario. The intrinsic zonoid depth requires that d^2 < n and for this reason there are several missing values in the left-hand image. Changing the default affine-invariant metric in the intrinsic depth computations to e.g., the Log-Euclidean, Cholesky, root-Euclidean or Euclidean metric –all of which are available in the same function– the depth computation times are either similar or faster than the times displayed in Figure <ref>.§ APPLICATION: CONFIDENCE SETS FOR HPD MATRICES As an illustrative application of the intrinsic depth functions, we construct intrinsic matrix confidence regions in the space of HPD or SPD matrices, such as confidence regions for estimated covariance or spectral density matrices. In the context of spectral density matrix estimation, a common approach is to construct asymptotic or bootstrapped confidence regions individually for each element of the spectral matrix, as demonstrated in <cit.> or <cit.> among others. Although this is a suitable approach to assess the variability of the estimator in each of the individual matrix components, it does not allow for the construction of simultaneous confidence regions across matrix elements, as the combined elementwise confidence intervals do not take the positive definite constraints of the full matrix object into account. In contrast, the intrinsic depth regions provide a natural way to construct joint matrix confidence regions taking into account the geometric constraints of the target space. This is illustrated by the construction of depth-based confidence regions for the intrinsic mean of a sample of i.i.d. HPD random matrices.Consider X_1,…,X_n iid∼ν_μ, with ν_μ∈ P_2(ℳ) centered around a population intrinsic mean μ∈ℳ. Denote m̅ for the sample intrinsic mean, i.e., m̅ := argmin_y ∑_i=1^n δ_R(y, X_i)^2, then the intrinsic central limit theorem in <cit.> tells us that,√(n) Log_μ(m̅) d→ Z, as n →∞,where Z is a random Hermitian matrix, such that Z d= ∑_i z_i e^i, with (z_1,…,z_d^2)' ∼ N_d^2(0, Λ) and (e^1,…,e^d^2) an orthonormal basis of T_μ(ℳ) equipped with the associated metric ⟨·, ·⟩_μ.To cast this into a standard Euclidean framework, the Euclidean logarithmic map is given by Log_μ(m̅) = m̅ - μ.
If √(n)(m̅ - μ) = Z for some fixed matrices m̅, μ, Z, then μ = m̅ - 1/√(n) Z, and in the random setting the construction of asymptotic confidence sets for μ is straightforward based on an estimate m̅ and knowledge of the limiting distribution of Z. In a curved Riemannian manifold, if √(n) Log_μ(m̅) = Z, with m̅, μ, Z fixed, then in general μ ≠ Exp_m̅(-1/√(n) Z). Instead, μ = Exp_m̅(-1/√(n) Z_μ), where Z_μ is the parallel transport of the matrix Z from the tangent space T_μ(ℳ) at μ to the tangent space T_m̅(ℳ) at m̅. In the Euclidean setting Z_μ = Z, as the parallel transport in a Euclidean or flat space equals the identity map, but on the Riemannian manifold (ℳ, g_R) the parallel transport is nontrivial due to the nonzero curvature of the space, and it depends on the unknown population mean μ. One working solution is to approximate the parallel transport using a plug-in estimator for μ, such as m̅, in which case the parallel transport is approximated by the identity map. Another approach, considered here, is to construct approximate confidence sets for the intrinsic mean through resampling, which does not require knowledge of the population mean μ. That is, (i) generate bootstrap intrinsic sample means m̅^*_1,…, m̅^*_B by resampling with replacement from X_1,…,X_n, and (ii) define a percentile 100(1-α)% confidence region for μ in the same fashion as <cit.> or <cit.> through the trimmed depth-region:

CR_1-α(X) = {θ ∈ ℳ : D(θ, ν̅^*_B) ≥ β_*, β_* := min_β ∈ (0,1)[ 1/B ∑_b=1^B 1{ D(m̅^*_b, ν̅^*_B) ≥ β } ≥ 1-α ]},

where ν̅^*_B is the empirical distribution of m̅_1^*,…,m̅^*_B. First-order convergence of the percentile confidence regions to the asymptotically correct confidence regions, as n and B tend to infinity, follows in the same fashion as in <cit.>. The proof relies on the uniform continuity property P.6, satisfied by both the intrinsic zonoid and geodesic distance depth. Note that the depth-based confidence regions are equivariant under matrix congruence transformations of the sample a ∗ X = {a ∗ X_1,…, a ∗ X_n }, with a ∈ GL(d, ℂ), in the sense that CR_1-α(a ∗ X) = { a ∗ x : x ∈ CR_1-α(X) }. This is an immediate consequence of property P.1 and the fact that the intrinsic mean is general linear congruence equivariant, i.e., 𝔼_ν[a ∗ X] = a ∗ 𝔼_ν[X]. Table <ref> displays the empirical coverage of the percentile bootstrap confidence regions for simulated samples X_1,…,X_n iid∼ ν^p_Id, with ν^p_Id ∈ P_2(ℳ) a centrally symmetric distribution around the identity matrix simulated from a p-generalized normal distribution (p-GND), equivalent to the data generating processes in Figure <ref>. The column Ave.-β_* displays the average lower depth confidence bounds, using the notation for β_* as in eq.(<ref>). The column Ave.-Size displays the distance of the center of the confidence ball to the furthest boundary, i.e., max_{i : D(m̅_i^*, ν̅_B^*) ≥ β_* } δ_R(m̅, m̅_i^*), averaged across simulations, and the coverage is the proportion of times the identity matrix has a depth value larger than or equal to the lower depth bound β_*.

§ ANALYSIS OF MULTICENTER CLINICAL TRIAL DATA

The intrinsic data depth functions provide a fast and intuitive procedure to explore samples of covariance matrices by identifying the most central or most outlying covariance matrices, based on the Riemannian geometry of the space. This is illustrated by the exploratory analysis of a collection of sample covariance matrices obtained from 246 clinical centers, which have been anonymized for reasons of confidentiality.
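Both the depth-rankings used in this section and the percentile region CR_1-α above reduce to a few lines of R. The following sketch is a minimal prototype under stated assumptions: the helper names delta_R, gdd and boot_region are ours, mean_fun stands for any intrinsic (Karcher) mean routine supplied by the user, and the lower bound β_* is approximated by the α-quantile of the bootstrap depths.

# Affine-invariant Riemannian distance between SPD matrices x and y:
# delta_R(x, y) = || Log(x^{-1/2} y x^{-1/2}) ||_F.
delta_R <- function(x, y) {
  s <- eigen(x, symmetric = TRUE)
  xish <- s$vectors %*% diag(1 / sqrt(s$values)) %*% t(s$vectors)
  m <- xish %*% y %*% xish
  sqrt(sum(log(eigen(m, symmetric = TRUE)$values)^2))
}

# Geodesic distance depth of y with respect to a list of matrices X.
gdd <- function(y, X) exp(-mean(sapply(X, delta_R, y = y)))

# Percentile 100(1-alpha)% region: bootstrap the intrinsic sample mean,
# then retain the (1-alpha) fraction of deepest bootstrap means; beta_*
# is approximated here by the alpha-quantile of the bootstrap depths.
boot_region <- function(X, mean_fun, B = 500, alpha = 0.05) {
  mstar <- replicate(B, mean_fun(sample(X, length(X), replace = TRUE)),
                     simplify = FALSE)
  depths <- sapply(mstar, gdd, X = mstar)
  list(means = mstar, beta = unname(quantile(depths, alpha)))
}

# A candidate y lies in CR_{1-alpha} iff gdd(y, region$means) >= region$beta.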
For each clinical center, medical analysts have recorded the height, weight, systolic blood pressure and diastolic blood pressure for a number of clinical center patients. As part of a broader analysis, we explore the variability among clinical centers in terms of the second-order behavior, i.e., the variance-covariance structure, of the measured variables. On the one hand, we wish to identify outlying clinical centers to be flagged for further inspection or removal in subsequent data analysis. On the other hand, we are interested in the average or mean behavior of the sample covariance matrices across clinical centers. Addressing the first objective, the left image in Figure <ref> displays the 15 most central depth-ranked clinical centers (from left to right, starting with the most central clinic) based on the geodesic distance depth applied to the collection of 246 (4 × 4)-dimensional symmetric positive definite covariance matrices. The bottom rows display the six symmetric cross-correlations between the measured variables. In addition, the top rows display the four variances, providing information about the scale of the covariance matrices. The right image in Figure <ref> displays, in the same fashion, the 15 most outlying depth-ranked clinical centers (from right to left, starting with the most outlying clinic) based on the geodesic distance depth. The center-to-outward orderings obtained via the intrinsic zonoid depth are comparable and can be found in the supplementary material. We point out that the data depth functions capture clinical centers that are outlying primarily in terms of the correlation or covariance structure, primarily in terms of the variance structure, or both. Regarding the second objective, to assess the average behavior across covariance matrices, we display in Figure <ref> the intrinsic sample mean of the set of 246 sample covariance matrices across clinical centers, including a 95% intrinsic geodesic distance depth percentile bootstrap confidence region. Here, the left-hand image displays the four variances and the right-hand image displays the six cross-correlations, analogous to the decomposition in Figure <ref>. The grey confidence region displays the bootstrapped sample means contained in the confidence region CR_0.95(X). In particular, a covariance matrix y ∈ ℙ_4 × 4 is included in the confidence region CR_0.95(X) if and only if GDD(y, ν̅^*_B) ≥ β_*, where ν̅^*_B is the empirical distribution of the bootstrapped sample means and β_* denotes the lower depth-bound as in Section <ref>.

§ CONCLUDING REMARKS

In this paper, we studied intrinsic data depth measures acting on the Riemannian manifold of symmetric or Hermitian PD matrices. The primary focus of this work is on the Riemannian manifold equipped with the affine-invariant metric, as this is the only metric that is invariant under congruence transformation of the data, as described in property P.1 in Section <ref>. However, the construction of the depth functions does not fundamentally rely on the affine-invariant metric, and the equivalent notions of properties P.2 to P.6 are expected to hold for any Riemannian metric that constitutes a geodesically complete manifold, such as the Log-Euclidean metric discussed in <cit.> among others.
For each of the proposed intrinsic depth functions (including the intrinsic spatial depth), the sample data depth values are straightforward to compute and remain computationally efficient also for relatively high-dimensional matrices, with implementations directly available in the accompanying R-package <cit.>. As such, the data depths serve as an easy-to-use data exploration tool, but they also provide a practical framework for inference in the context of random samples of HPD matrices, as illustrated in Section <ref> through the construction of depth-based confidence regions. Additional material available in the package includes implementations of several intrinsic rank-based hypothesis tests, replacing the ordinary ranks by the depth-induced ranks analogous to <cit.>, <cit.>, or <cit.> for samples of Euclidean vectors. Another interesting application of the intrinsic data depth is depth-based classification or clustering for groups or samples of covariance matrices, analogous to, e.g., <cit.>. To conclude, Hermitian or symmetric positive definite matrices play an important role in many different fields of statistical research, see <cit.>, and it is of interest to apply the intrinsic data depths in contexts other than those demonstrated in this paper. For instance, applied to diffusion tensor imaging, the depth functions show potential for fast detection of anomalies or artifacts in large collections of SPD diffusion tensors.

§ ACKNOWLEDGEMENTS

The authors gratefully acknowledge the financial support from the following agencies and projects: the Belgian Fund for Scientific Research FRIA/FRS-FNRS (J. Chau), the contract ‘Projet d’Actions de Recherche Concertées’ (ARC) No. 12/17-045 of the ‘Communauté française de Belgique’ (J. Chau and R. von Sachs), IAP research network P7/06 of the Belgian government (R. von Sachs), the US National Science Foundation and KAUST (H. Ombao). We thank Lieven Desmet and the SMCS/UCL for providing access to the anonymized clinical trial data. Computational resources have been provided by the supercomputing facilities of the CISM/UCL and the CÉCI funded by the FRS-FNRS under convention 2.5020.11.

§ APPENDIX I: PROOFS

§.§ Proof of Theorem <ref>

P.1 This is a direct consequence of the claim that the following two events are equivalent:

{0_d × d ∈ D_α(ζ_y) } ⇔ {0_d × d ∈ D_α(ζ_a, y) }, 0 ≤ α ≤ 1,

with ζ_y the probability measure of Log_y(X) and ζ_a,y the probability measure of Log_a ∗ y(a ∗ X), where X has probability measure ν. Here, the Euclidean zonoid trimmed region D_α(ζ_y) is represented as a set of (d × d)-dimensional Hermitian matrices, instead of an equivalent set of d^2-dimensional real basis component vectors, as in Section <ref>, and 0_d × d is the zero matrix. For α = 0, the equivalence in eq.(<ref>) is true by definition, since D_0(ζ_y) = D_0(ζ_a,y) = ℝ^d × d. Suppose that 0_d × d ∈ D_α(ζ_y) for some 0 < α ≤ 1. Noting that T_y(ℳ) can be identified with the real vector space of Hermitian matrices ℋ for each y ∈ ℳ, by definition of the zonoid α-trimmed region there exists a measurable function g̃ : ℋ → [0, 1/α], such that

∫_ℋ g̃(z) ζ_y(dz) = 1, ∫_ℋ z g̃(z) ζ_y(dz) = 0_d × d.

It is straightforward to verify that for each a ∈ GL(d, ℂ) and x, y ∈ ℳ, Log_a ∗ y(a ∗ x) = a ∗ Log_y(x). Define g(z) = g̃(a^-1 ∗ z); then g : ℋ → [0, 1/α] is a measurable function such that

∫_ℋ g(z) ζ_a,y(dz) = ∫_ℋ g(a ∗ z) ζ_y(dz) = ∫_ℋ g̃(z) ζ_y(dz) = 1,

and

∫_ℋ z g(z) ζ_a,y(dz) = ∫_ℋ (a ∗ z) g(a ∗ z) ζ_y(dz) = ∫_ℋ (a ∗ z) g̃(z) ζ_y(dz) = a ∗( ∫_ℋ z g̃(z) ζ_y(dz) ) = a ∗ 0_d × d = 0_d × d.

Therefore 0_d × d ∈ D_α(ζ_a,y).
The other direction follows by a similar argument, using the invertibility of a ∈ GL(d, ℂ).

P.2 The zonoid trimmed region D_1(ζ_y) contains the single point E_ν[Log_y(X)] by construction. The deepest point y ∈ ℳ is therefore characterized by the point that satisfies E_ν[Log_y(X)] = 0_d × d. By eq.(<ref>) in the main document, on the Riemannian manifold ℳ with ν ∈ P_2(ℳ), this point is the uniquely existing geometric expectation of the distribution ν.

P.3 Using the equivalent definition ZD_ℳ(ν, y) = sup{α : y ∈ D_α^ℳ(ν) }, by construction D_α^ℳ(ν) is a geodesically convex set that contains the geometric mean μ := 𝔼_ν[X] for each α ∈ [0,1]. Also, D_α_1^ℳ(ν) ⊆ D_α_2^ℳ(ν) for each 1 ≥ α_1 ≥ α_2 ≥ 0. Combining the above arguments, it follows that a geodesic curve Exp_μ(th), with t ≥ 0 increasing, has monotone non-increasing depth as it moves further away from the center μ.

P.4 With the same notation as above, for α ∈ (0,1] we claim that the sets D_α^ℳ(ν) are closed and bounded, and therefore also compact by the Hopf-Rinow theorem. The fact that the sets are closed follows directly from the definition of D_α^ℳ(ν). The fact that they are bounded is seen as follows: for α > 0, by construction D_α^ℳ(ν) ⊂ ℳ. Therefore, if y ∈ D_α^ℳ(ν), necessarily δ_R(Id, y) < ∞, which follows by the fact that both Id and y are elements of ℳ, combined with <cit.>. Let (y_n)_n ∈ ℕ be an unbounded sequence, such that ‖ Log(y_n) ‖_F → ∞ as n → ∞. The divergence ‖ Log(y_n) ‖_F → ∞ implies in particular also that δ_R(Id, y_n) → ∞, which violates the boundedness (or compactness) of D_α^ℳ(ν) for α ∈ (0,1], and therefore we must have lim_n → ∞ ZD_ℳ(ν, y_n) = lim_n → ∞ sup{α : y_n ∈ D^ℳ_α(ν) } = 0.

§.§ Proof of Lemma <ref>

By definition of the intrinsic zonoid trimmed regions, D_α^ℳ(ν) = { y ∈ ℳ : 0_d × d ∈ D_α(ζ_y) } with D_α(ζ_y) as in eq.(<ref>). The distribution ζ_y has finite first moment with respect to the Riemannian metric in T_y(ℳ), since

∫_T_y(ℳ) ‖ z ‖_y ζ_y(dz) = ∫_ℳ ‖ Log_y(x) ‖_y ν(dx) = ∫_ℳ δ_R(y, x) ν(dx) < ∞,

using eq.(<ref>) in the main document and the fact that ν ∈ P_2(ℳ) ⊂ P_1(ℳ). By <cit.>, for a probability measure ζ_y defined on T_y(ℳ) ≅ ℝ^d^2 with finite first moments,

⋃_α > 0 D_α(ζ_y) = conv_T_y(ℳ)(ζ_y),

where conv_T_y(ℳ)(ζ_y) denotes the convex hull of the support of ζ_y in T_y(ℳ) ≅ ℝ^d^2, based on the Riemannian metric on T_y(ℳ), i.e., a rescaled Euclidean metric. Using the above result, we write out:

⋃_α > 0 D_α^ℳ(ν) = ⋃_α > 0 {y ∈ ℳ : 0_d × d ∈ D_α(ζ_y) } = { y ∈ ℳ : 0_d × d ∈ ∪_α > 0 D_α(ζ_y) } = { y ∈ ℳ : 0_d × d ∈ conv_T_y(ℳ)(ζ_y) } = { y ∈ ℳ : ∃ g : supp(ν) → [0,1] measurable, s.t. ∫_supp(ν) Log_y(x) g(x) λ(dx) = 0_d × d and ∫_supp(ν) g(x) λ(dx) = 1} = conv(ν),

where the last step follows by the definition of conv(ν) as the geodesic convex hull of the support of ν on the manifold.

§.§ Proof of Theorem <ref>

§.§.§ Continuity in y (P.5)

We argue that the map y ↦ ZD_ℳ(ν, y) is both upper- and lower-semicontinuous for y ∈ conv(ν). Upper-semicontinuity: the map is upper-semicontinuous if and only if for each α ∈ [0,1] the sets { y ∈ conv(ν) : ZD_ℳ(ν, y) < α } are open in conv(ν), or equivalently the sets { y ∈ conv(ν) : ZD_ℳ(ν, y) ≥ α } are closed in conv(ν). If α = 0, { y ∈ conv(ν) : ZD_ℳ(ν, y) ≥ α } = conv(ν), and conv(ν) is closed in itself. If α > 0, note that we can rewrite {y ∈ conv(ν) : ZD_ℳ(ν, y) ≥ α } = { y ∈ conv(ν) : y ∈ D^ℳ_α(ν) }, since on the one hand, if y ∈ D^ℳ_α(ν), then ZD_ℳ(ν, y) = sup{β : y ∈ D^ℳ_β(ν) } ≥ α, and on the other hand, if ZD_ℳ(ν, y) = β ≥ α, then y ∈ D^ℳ_β(ν) ⊆ D^ℳ_α(ν) by nestedness of the intrinsic zonoid trimmed regions.
For each α > 0, by construction D_α^ℳ(ν) is closed, therefore { y ∈ conv(ν) : ZD_ℳ(ν, y) ≥ α } is also closed. Lower-semicontinuity: the map is lower-semicontinuous if and only if for each α ∈ [0,1] the sets { y ∈ conv(ν) : ZD_ℳ(ν, y) ≤ α } are closed in conv(ν) or equivalently the sets { y ∈ conv(ν) : ZD_ℳ(ν, y) > α } are open in conv(ν). If α = 1, { y ∈ conv(ν) : ZD_ℳ(ν, y) > α } = ∅, and the empty set is open in conv(ν). If α = 0, { y ∈ conv(ν) : ZD_ℳ(ν, y) > α } = conv(ν) by Lemma <ref>, and conv(ν) is open in itself. If 0 < α < 1, note that we can rewrite {y ∈ conv(ν) : ZD_ℳ(ν, y) > α } = {y ∈ conv(ν) : y ∈ D^ℳ_α+(ν) }, where

D_α+^ℳ(ν) := { y ∈ ℳ : y = Exp_y( ∫_ℳ Log_y(x) g(x) ν(dx) ), g : ℳ → [0, 1/α), ∫_ℳ g(x) ν(dx) = 1 },

with g measurable. To see that the set-equivalence is true: on the one hand, if y ∈ D^ℳ_α+(ν), then ZD_ℳ(ν, y) = sup{β : y ∈ D_β^ℳ(ν) } > α, since [0, 1/α) ⊂ [0, 1/α]. On the other hand, if ZD_ℳ(ν, y) = β > α, take ϵ > 0 sufficiently small such that β > β - ϵ > α; then [0, 1/β] ⊂ [0, 1/(β - ϵ)) ⊂ [0, 1/α). As a consequence, y ∈ D^ℳ_β(ν) ⊆ D^ℳ_α+(ν) by nestedness of the intrinsic zonoid trimmed regions. For 0 < α < 1, distinguish between two cases: (i) D^ℳ_α+(ν) = conv(ν), then the set is open as conv(ν) is open in itself; (ii) D^ℳ_α+(ν) ⊂ conv(ν). In this case, writing r∂ D^ℳ_α+(ν) for the relative boundary of the geodesic convex set D_α+^ℳ(ν) in conv(ν), we note that r∂ D^ℳ_α+(ν) = r∂ D^ℳ_α(ν). Here, the relative boundary of D_α^ℳ(ν) is characterized by those points in D_α^ℳ(ν) for which the weighting function g attains the maximum value 1/α. Since D^ℳ_α+(ν) ∩ r∂ D^ℳ_α+(ν) = D^ℳ_α+(ν) ∩ r∂ D^ℳ_α(ν) = ∅, it follows that D^ℳ_α+(ν) is open. By combining the above arguments, we conclude that { y ∈ conv(ν) : ZD_ℳ(ν, y) > α } is open for each α ∈ [0,1]. Since the map y ↦ ZD_ℳ(ν, y) is both upper- and lower-semicontinuous on conv(ν), it is also continuous on conv(ν).

§.§.§ Uniform continuity in ν (P.6)

Pointwise convergence of depths: first, we show pointwise convergence of ZD_ℳ(ν_n, y) to ZD_ℳ(ν, y) for each y ∈ rint(conv(ν)), where rint(conv(ν)) denotes the relative interior of the geodesic convex set conv(ν). We note that y ∈ rint(conv(ν)) if and only if 0_d × d ∈ rint(conv_T_y(ℳ)(ζ_y)), where conv_T_y(ℳ)(ζ_y) is the convex hull of the support of ζ_y in T_y(ℳ) as in the proof of Lemma <ref>. This is seen as follows: by Lemma <ref>, y ∈ conv(ν) if and only if ∃ α > 0, such that y ∈ D_α^ℳ(ν), but this is equivalent to 0_d × d ∈ D_α(ζ_y), which holds if and only if 0_d × d ∈ conv_T_y(ℳ)(ζ_y) by <cit.>. Since the sets { y : y ∈ conv(ν) } and { y : 0_d × d ∈ conv_T_y(ℳ)(ζ_y) } are equivalent, their relative interiors are equivalent as well. By Definition <ref>, ZD_ℳ(ν_n, y) = ZD_ℝ^d^2(ζ^n_y, 0⃗), where ζ^n_y is the distribution of Log_y(X) as a d^2-dimensional real basis component vector, with X ∼ ν_n, such that ζ_y^n w→ ζ_y. Similarly, ZD_ℳ(ν, y) = ZD_ℝ^d^2(ζ_y, 0⃗). By the same argument as in the proof of Lemma <ref>, we know that ζ^n_y, ζ_y ∈ P_1(T_y(ℳ)) for each n ∈ ℕ, where P_1(T_y(ℳ)) denotes the set of probability measures on T_y(ℳ) with finite first moment, i.e., if ζ ∈ P_1(T_y(ℳ)) then ∫_T_y(ℳ) ‖ z ‖_y ζ(dz) < ∞.
Furthermore, the sequence of measures (ζ^n_y)_n ∈ ℕ is uniformly integrable with respect to the Riemannian metric in T_y(ℳ), since for any y ∈ ℳ,

lim_K → ∞ sup_n ∈ ℕ ∫_T_y(ℳ) ‖ z ‖_y 1_{‖ z ‖_y > K } ζ^n_y(dz) = lim_K → ∞ sup_n ∈ ℕ ∫_ℳ δ_R(y,x) 1_{δ_R(y, x) > K } ν_n(dx) = 0.

By <cit.>, under these conditions, for y ∈ rint(conv(ν)), or equivalently 0_d × d ∈ rint(conv_T_y(ℳ)(ζ_y)), it follows that

ZD_ℳ(ν_n, y) = ZD_ℝ^d^2(ζ^n_y, 0⃗) → ZD_ℝ^d^2(ζ_y, 0⃗) = ZD_ℳ(ν, y), as n → ∞.

Uniform convergence of depths: uniform depth convergence now follows from the pointwise depth convergence above by a generalized version of the proof of <cit.> for the complete metric space (ℳ, δ_R), using Lemma <ref> and the fact that ZD_ℳ(ν, y) is a normed geodesically convex depth, continuous in y by the first part of Theorem <ref>. Since the proof is completely analogous to the proof of <cit.>, we omit the details here. Note that the only required modification is to replace the Euclidean metric space by the complete metric space (ℳ, δ_R). In particular, Euclidean open balls, convex sets and convergence are replaced by geodesic open balls, geodesic convex sets and convergence in the Riemannian distance function, respectively. By the generalized proof of <cit.>, the depths (ZD_ℳ(ν_n, y_0))_n ∈ ℕ are continuously convergent for y_0 ∈ rint(conv(ν)). That is, for y_n → y_0 in the sense that δ_R(y_n, y_0) → 0, also lim_n → ∞ ZD_ℳ(ν_n, y_n) = ZD_ℳ(ν, y_0). By <cit.>, since ℳ is a metric space, continuous convergence of the depths implies compact convergence, i.e., for every compact set M ⊆ rint(conv(ν)),

lim_n → ∞ sup_y ∈ M |ZD_ℳ(ν_n, y) - ZD_ℳ(ν, y)| = 0.

Consequently, by <cit.>, compact convergence implies uniform convergence, since the arguments in the proof of <cit.> continue to hold for the intrinsic zonoid depth defined on the complete metric space ℳ, where closed and bounded subsets are compact.

§.§ Proof of Theorem <ref> and Proposition <ref>

Properties P.1–P.4 follow directly by Theorem <ref>, using the definition of the depth as the integrated pointwise zonoid depth (integrated over t ∈ ℐ). For the first part (P.5) of Proposition <ref>: using that sup_t ∈ ℐ δ_R(y_n(t), y(t)) → 0, by the first part of Theorem <ref>, ZD_ℳ(ν(t), y_n(t)) → ZD_ℳ(ν(t), y(t)) uniformly over t ∈ ℐ. By definition of the integrated intrinsic zonoid depth also,

|iZD_ℳ(ν, y_n) - iZD_ℳ(ν, y)| ≤ ∫_ℐ |ZD_ℳ(ν(t), y_n(t)) - ZD_ℳ(ν(t), y(t))| dt → 0,

by the pointwise convergence and the fact that the depth function ZD_ℳ(·, ·) ∈ [0,1] is bounded. For the second part (P.6) in Proposition <ref>: under the given assumptions, by the second part of Theorem <ref>,

sup_y(t) ∈ rint(conv(ν(t))) |ZD_ℳ(ν_n(t), y(t)) - ZD_ℳ(ν(t), y(t))| → 0, uniformly for t ∈ ℐ,

and similarly as above,

sup_y ∈ rint(conv(ν)) | iZD_ℳ(ν_n, y) - iZD_ℳ(ν, y)| ≤ sup_y ∈ rint(conv(ν)) ∫_ℐ | ZD_ℳ(ν_n(t), y(t)) - ZD_ℳ(ν(t), y(t))| dt → 0,

using the pointwise convergence and the fact that the depth function ZD_ℳ(·, ·) ∈ [0,1] is bounded.

§.§ Proof of Theorem <ref>

P.1 This follows directly from the definition of the depth by the fact that the map x ↦ a ∗ x with a ∈ GL(d, ℂ) is distance preserving, i.e., δ_R(a ∗ x, a ∗ y) = δ_R(x, y) for each x, y ∈ ℳ.

P.2 Since ∫_ℳ δ_R(y, x) ν(dx) ≥ 0 and exp(-z) is strictly decreasing in z ≥ 0, the point of maximum depth is attained at y = argmin_z ∈ supp(ν) ∫_ℳ δ_R(z, x) ν(dx).
By eq.(<ref>) in the main document, on the Riemannian manifold ℳ with ν ∈ P_1(ℳ), this point is the uniquely existing geometric median of the distribution ν.

P.3 By the proof of <cit.> and an application of Leibniz's integral rule, y ↦ E_ν[δ_R(y, X)] is a (strictly) convex function, and by P.2 it attains its unique minimum at m := GM_ν(X). This implies that E_ν[δ_R(Exp_m(th), X)] is a nondecreasing function for t ≥ 0, where Exp_m(th) is a geodesic curve emanating from m with unit tangent vector h. As a consequence, GDD(ν, Exp_m(th)) = exp( - E_ν[δ_R(Exp_m(th), X)]) is monotone non-increasing for t ≥ 0.

P.4 Let (y_n)_n ∈ ℕ be an unbounded sequence such that ‖ Log(y_n) ‖_F → ∞ as n → ∞; then also δ_R(y_n, x) = ‖ Log(x^-1/2 ∗ y_n) ‖_F → ∞ for each x ∈ ℳ, and as a consequence GDD(ν, y_n) = exp(- E_ν[δ_R(y_n, X)]) → 0.

§.§ Proof of Theorem <ref>

§.§.§ Continuity in y (P.5)

First, suppose that (y_n)_n ∈ ℕ is an unbounded sequence with ‖ Log(y_n) ‖_F → ∞ as n → ∞, i.e., y_n → y, where y is a singular matrix. Since GDD(ν, y) = 0, by P.4 in Theorem <ref>, lim_n → ∞ GDD(ν, y_n) = GDD(ν, y). Second, suppose that (y_n)_n ∈ ℕ is a bounded sequence, i.e., sup_n ∈ ℕ ‖ Log(y_n) ‖_F = sup_n ∈ ℕ δ_R(y_n, Id) < ∞. Since ν ∈ P_1(ℳ), there exists a y_0 ∈ ℳ such that ∫_ℳ δ_R(y_0, x) ν(dx) < ∞. By the triangle inequality,

∫_ℳ sup_n ∈ ℕ δ_R(y_n, x) ν(dx) ≤ sup_n ∈ ℕ δ_R(y_n, Id) + δ_R(Id, y_0) + ∫_ℳ δ_R(y_0, x) ν(dx) < ∞,

using that δ_R(y_0, Id) < ∞ as both Id and y_0 are elements of ℳ (see <cit.>). We show continuity directly from the definition of the geodesic distance depth. The function z ↦ exp(-z) is continuous in z; also the function z ↦ δ_R(z, x) is continuous in z, since δ_R(z, x) = ‖ Log(x^-1/2 ∗ z) ‖_F is a composition of continuous functions. Furthermore, by the dominated convergence theorem, lim_n → ∞ ∫_ℳ δ_R(y_n, x) ν(dx) = ∫_ℳ lim_n → ∞ δ_R(y_n, x) ν(dx), since ∫_ℳ sup_n ∈ ℕ δ_R(y_n, x) ν(dx) < ∞. Combining these arguments, lim_n → ∞ GDD(ν, y_n) = GDD(ν, y).

§.§.§ Uniform continuity in ν (P.6)

We start by noting that the uniform integrability condition implies in particular that ν_n ∈ P_1(ℳ) for each n ∈ ℕ. Also, since z ↦ δ_R(y, z) is continuous in z, by the continuous mapping theorem δ_R(y, X_n) d→ δ_R(y, X), with X_n ∼ ν_n and X ∼ ν, and by Vitali's convergence theorem ∫_ℳ δ_R(y, x) ν_n(dx) → ∫_ℳ δ_R(y, x) ν(dx) for any y ∈ ℳ. Note that the convergence implies in particular also that ν ∈ P_1(ℳ). For two measures μ, ν ∈ P_1(ℳ) define their L^1-Wasserstein distance as:

W_1(μ, ν) = inf_γ ∈ Γ(μ, ν) ∫_ℳ×ℳ δ_R(y, x) γ(dy, dx),

where Γ(μ, ν) denotes the collection of all probability measures on ℳ×ℳ with marginal measures μ and ν. Substituting μ = δ_y, the point measure in y, it follows that W_1(δ_y, ν) = ∫_ℳ δ_R(y, x) ν(dx). Therefore, a sufficient condition to ensure uniform convergence in y ∈ ℳ of ∫_ℳ δ_R(y, x) ν_n(dx) to ∫_ℳ δ_R(y, x) ν(dx) is W_1(ν_n, ν) → 0, since

sup_y ∈ ℳ | ∫_ℳ δ_R(y, x) ν_n(dx) - ∫_ℳ δ_R(y, x) ν(dx) | = sup_y ∈ ℳ | W_1(δ_y, ν_n) - W_1(δ_y, ν) | ≤ W_1(ν_n, ν),

where the last step follows by the reverse triangle inequality for the L^1-Wasserstein distance. The manifold ℳ is a complete separable metric space, and therefore by <cit.> a necessary and sufficient condition for W_1(ν_n, ν) → 0 is that the sequence of probability measures ν_n converges weakly in P_1(ℳ) to ν, i.e., (i) ν_n w→ ν and (ii) ∫_ℳ δ_R(y, x) ν_n(dx) → ∫_ℳ δ_R(y, x) ν(dx) for any y ∈ ℳ. Condition (i) holds by assumption, and condition (ii) has already been shown above.
The function z ↦ exp(-z) is uniformly continuous for z ≥ 0; therefore the uniform convergence of the geodesic distance depth follows as well, since

sup_y ∈ ℳ |GDD(ν_n, y) - GDD(ν, y)| = sup_y ∈ ℳ |exp(-E_ν_n[δ_R(y, X)]) - exp(-E_ν[δ_R(y,X)])| → 0 as n → ∞.

§.§ Proof of Theorem <ref> and Proposition <ref>

Properties P.1–P.4 follow directly by the pointwise depth properties in Theorem <ref>, using the definition of the depth in terms of the integrated Riemannian distance (integrated over t ∈ ℐ). For the first part (P.5) of Proposition <ref>: using that sup_t ∈ ℐ δ_R(y_n(t), y(t)) → 0, by the first part of the proof in Theorem <ref> also

sup_t ∈ ℐ | E_ν(t)[ δ_R(y_n(t), X)] - E_ν(t)[δ_R(y(t), X)] | → 0 as n → ∞,

and as a direct consequence lim_n → ∞ ∫_ℐ E_ν(t)[δ_R(y_n(t), X)]dt = ∫_ℐ E_ν(t)[δ_R(y(t), X)]dt. Using again that z ↦ exp(-z) is continuous in z, the composition converges as well and we conclude that lim_n → ∞ iGDD(ν, y_n) = iGDD(ν, y). For the second part (P.6) of Proposition <ref>, denote by ξ_n,y(t) and ξ_y(t) respectively the distributions of δ_R(y(t), X_n(t)) and δ_R(y(t), X(t)), such that X_n(t) ∼ ν_n(t) and X(t) ∼ ν(t). Let ϕ : ℝ → ℝ be a continuous and bounded function, and write y for a curve with y(t) ∈ ℳ for each t ∈ ℐ. Then for any such curve y,

sup_t ∈ ℐ | E_ξ_n,y(t)[ϕ(X)] - E_ξ_y(t)[ϕ(X)]| = sup_t ∈ ℐ | E_ν_n(t)[ϕ(δ_R(y(t), X))] - E_ν(t)[ϕ(δ_R(y(t), X))] | → 0 as n → ∞,

where the last step follows by the fact that for each t ∈ ℐ the composition x ↦ ϕ(δ_R(y(t), x)) is again a continuous and bounded function, and the fact that ν_n(t) w→ ν(t) uniformly in t. Thus, for any curve y, the weak convergence ξ_n,y(t) w→ ξ_y(t) holds as well uniformly in t. By the uniform integrability of (ν_n(t))_n ∈ ℕ uniformly in t, combined with Vitali's convergence theorem, it follows that for each curve y,

sup_t ∈ ℐ | E_ν_n(t)[ δ_R(y(t), X) ] - E_ν(t)[ δ_R(y(t), X) ]| → 0, as n → ∞.

By the same argument as in the second part of the proof of Theorem <ref>, a sufficient condition for uniform convergence in y of ∫_ℐ E_ν_n(t)[δ_R(y(t), X)]dt to ∫_ℐ E_ν(t)[δ_R(y(t), X)]dt is the condition sup_t ∈ ℐ W_1(ν_n(t), ν(t)) → 0. Again by <cit.>, the convergence sup_t ∈ ℐ W_1(ν_n(t), ν(t)) → 0 is implied by the conditions (i) ν_n(t) w→ ν(t) uniformly in t, which holds by assumption, and (ii) the convergence in eq.(<ref>) pointwise in y. The function z ↦ exp(-z) is uniformly continuous for z ≥ 0; therefore the uniform convergence of the integrated geodesic distance depth follows as well:

sup_y |iGDD(ν_n, y) - iGDD(ν, y)| = sup_y |exp(-∫_ℐ E_ν_n(t)[δ_R(y(t), X)]dt ) - exp(- ∫_ℐ E_ν(t)[δ_R(y(t), X)]dt) | → 0 as n → ∞.

§.§ Proof of Proposition <ref>

First, we verify that ϵ_2(X) ≤ 1/2. Let Y_1 = … = Y_n = p ∈ ℳ be n contaminating observations, such that ‖ Log(p) ‖_F ≥ N for some N > 0. Denote ν_n,n for the empirical distribution of the contaminated sample Z^(n,n) = {X_1,…,X_n} ∪ {Y_1,…,Y_n}. For each x ∈ {X_1,…,X_n },

D(Y_1, ν_n,n) = exp(-∑_i=1^n δ_R(p, X_i)) ≥ exp(-∑_i=1^n δ_R(X_i, x) - ∑_i=1^n δ_R(x,p)) = D(x, ν_n,n),

using the triangle inequality δ_R(p, X_i) ≤ δ_R(p, x) + δ_R(x, X_i) for each i=1,…,n. Since Y_1 = … = Y_n, D(Y_1, ν_n,n) = … = D(Y_n, ν_n,n) ≥ D(x, ν_n,n) for each x ∈ {X_1,…,X_n}. Therefore, ‖ Log(Z^(n,n)_[1]) ‖_F = … = ‖ Log(Z^(n,n)_[n]) ‖_F = ‖ Log(p) ‖_F ≥ N, with Z_[i]^(n,n) the i-th depth-ranked observation in the sample Z^(n,n). As we can choose p ∈ ℳ such that ‖ Log(p) ‖_F ≥ N for N arbitrarily large, ‖ Log(Z^(n,n)_[i]) ‖_F with 1 ≤ i ≤ n can be made arbitrarily large by adding n contaminating observations.
This implies that ϵ_2(X) ≤ n/(2n) = 1/2. Second, we verify that ϵ_2(X) ≥ 1/2. Consider the contaminated sample Z^(n,m) = {X_1,…,X_n} ∪ {Y_1,…,Y_m}, with m < n. Suppose we can show that D(y, ν_n,m) < D(x, ν_n,m) for each y ∈ {Y_1,…,Y_m} and each x ∈ {X_1,…,X_n}. Then ∀ i ∈ {1,…,n}, ∃ j ∈ {1,…,n}, such that Z^(n,m)_[i] = X_j, and consequently max_i ‖ Log(Z_[i]^(n,m)) ‖_F ≤ M, denoting M := max_i ‖ Log(X_i) ‖_F. The latter implies that it takes at least m ≥ n contaminating observations to make ‖ Log(Z^(n,m)_[i]) ‖_F arbitrarily large for 1 ≤ i ≤ n, i.e., ϵ_2(X) ≥ 1/2. It remains to show that D(y, ν_n,m) < D(x, ν_n,m) for each y ∈ {Y_1,…,Y_m} and each x ∈ {X_1,…,X_n}. Let y ∈ {Y_1,…,Y_m} and x ∈ {X_1,…,X_n} be arbitrary; then:

D(y, ν_n,m) < D(x, ν_n,m) ⇔ ∑_i=1^m δ_R(y, Y_i) + ∑_i=1^n δ_R(y, X_i) > ∑_i=1^m δ_R(x, Y_i) + ∑_i=1^n δ_R(x, X_i).

Let us denote R := max_i δ_R(x, X_i), B := { p ∈ ℳ : δ_R(p,x) ≤ 2R } and ρ = inf_p ∈ B δ_R(p, y). First, by the triangle inequality δ_R(x, y) ≤ 2R + ρ. Therefore, by the reverse triangle inequality, ∀ i=1,…,m,

δ_R(y, Y_i) ≥ δ_R(x, Y_i) - δ_R(x, y) ≥ δ_R(x, Y_i) - (2R + ρ).

Also, by definition of R and ρ, ∀ i=1,…,n,

δ_R(y, X_i) ≥ R + ρ ≥ δ_R(x, X_i) + ρ.

Without loss of generality, assume that min_i ‖ Log(Y_i) ‖_F ≥ N, where N ≥ 2(n+1)R + M. Denoting Id for the identity matrix, it follows that

ρ = inf_p ∈ B δ_R(y, p) ≥ δ_R(y, Id) - sup_p ∈ B δ_R(p, Id) ≥ N - sup_p ∈ B (δ_R(p, x) + δ_R(x, Id)) ≥ 2(n+1)R + M - (2R + M) = 2nR.

Here, we used two triangle inequalities and the fact that ‖ Log(z) ‖_F = δ_R(z, Id) by definition of the Riemannian distance. Combining eq.(<ref>-<ref>) above yields:

∑_i=1^m δ_R(y, Y_i) + ∑_i=1^n δ_R(y, X_i) ≥ ∑_i=1^m (δ_R(x, Y_i) - (2R + ρ)) + ∑_i=1^n (δ_R(x, X_i) + ρ) ≥ -2mR + (n-m)ρ + ∑_i=1^m δ_R(x, Y_i) + ∑_i=1^n δ_R(x, X_i) > ∑_i=1^m δ_R(x, Y_i) + ∑_i=1^n δ_R(x, X_i),

where we used that -2mR + (n-m)ρ > -2nR + ρ ≥ 0 by the fact that m < n and ρ ≥ 2nR by eq.(<ref>). Returning to eq.(<ref>), it follows that D(y, ν_n,m) < D(x, ν_n,m). As this result holds for any y ∈ {Y_1,…,Y_m} and x ∈ {X_1,…,X_n}, we conclude that ϵ_2(X) ≥ 1/2. Since also ϵ_2(X) ≤ 1/2, it follows that ϵ_2(X) = 1/2, which finishes the proof.

§ APPENDIX II: ADDITIONAL FIGURES
{ "authors": [ "Joris Chau", "Hernando Ombao", "Rainer von Sachs" ], "categories": [ "stat.ME", "62G30, 62G15, 62G35, 62M15" ], "primary_category": "stat.ME", "published": "20170626090333", "title": "Intrinsic data depth for Hermitian positive definite matrices" }
Deuteron: properties and analytical forms of wave function in coordinate space

V. I. Zhaba

Uzhgorod National University, Department of Theoretical Physics, 54, Voloshyna St., Uzhgorod, UA-88000, Ukraine

(Received June 26, 2017)

Key words: Deuteron; wave function; approximation; analytic form; polarization.

PACS: 13.40.Gp, 13.88.+e, 21.45.Bc, 03.65.Nk

Abstract

Static parameters of the deuteron, obtained from the wave functions for various potential models, have been chronologically systematized. The presence or absence of knots near the origin of coordinates for the radial wave function of the deuteron has been shown. Analytical forms for the deuteron wave function in coordinate space have been reviewed. Both analytical forms and parameterizations of the deuteron wave function, which are necessary for further calculations of the characteristics of the processes involving the deuteron, have been provided. In addition, the asymptotic behaviors of the deuteron wave function near the origin of coordinates and for large values of distance have been analyzed in the paper. Minimization of the number of numerically calculated coefficients for new analytical forms, written as a product of the power function r^n and a sum of exponential terms A_i*exp(-a_i*r^3), has been carried out. The optimum is N=7-10.

1. Introduction

The deuteron is the most elementary nucleus. It consists of two strongly interacting elementary particles: a proton and a neutron. The simplicity and clarity of the deuteron's structure makes it a convenient laboratory for studying and modeling nucleon-nucleon forces. By now, the deuteron has been well investigated both experimentally and theoretically. The theoretically calculated values of the static properties of the deuteron are in very good agreement with the experimental data. However, despite that, there still are some theoretical inconsistencies and problems. For example, in recent papers one (for the OBE <cit.> and Bonn <cit.> potentials) or both (for the Soft core Reid68 <cit.>, Moscow <cit.>, renormalized OPE and TPE chiral <cit.> potentials) components of the radial wave function in coordinate space have knots near the origin of the coordinates. The existence of knots in the wave function of the basic and sole bound state of the deuteron is evidence of inconsistencies and inaccuracies in the implementation of numerical algorithms for solving such problems, or it is connected with features of the potential models used for the description of the deuteron. The way the choice of numerical algorithms influences the solution is shown in Refs. <cit.>. The knots of the wave function in the coordinate representation are analyzed in more detail in the following sections of the article. Besides, it should be noted that the deuteron wave function in momentum space is presented ambiguously in the scientific literature. In particular, in the S- component <cit.> (or in the S- and D- components <cit.>), there is an excess knot in the middle of the momentum interval. It should also be noted that such potentials of the nucleon-nucleon interaction as Bonn <cit.>, Moscow <cit.>, the Nijmegen group potentials (NijmI, NijmII, Nijm93 <cit.>), Argonne v18 <cit.>, Paris <cit.>, NLO, NNLO and N3LO <cit.>, Idaho N3LO <cit.> or the Oxford potential <cit.> have quite a complicated structure and cumbersome representation. For example, the original Reid68 potential was parameterized on the basis of the phase analysis by the Nijmegen group and is known as the updated regularized version, Reid93.
The parametrization was done for 50 parameters of the potential, with χ^2/N_data = 1.03 <cit.>. Besides, the deuteron wave function (DWF) in coordinate space can be presented as a table, i.e., through the respective arrays of values of the radial wave functions. It is sometimes quite difficult and inconvenient to operate with such arrays of numbers during numerical calculations, and the program code for numerical calculations becomes bulky, overloaded and unreadable. Therefore, it is worthwhile to obtain simpler and more convenient analytical forms of the DWF representation. On this basis it is further possible to calculate the form factors and tensor polarization characterizing the deuteron structure. DWFs in a convenient form are necessary for use in calculations of the polarization characteristics of the deuteron, as well as to evaluate the theoretical values of spin observables in dp scattering <cit.>. In addition to the introduction, the first section and conclusions, the article is composed of six more sections. The second section deals with the deuteron wave function: its main peculiarities and the scientific interest in studying it. The third section describes the basic properties of the deuteron. The numerical values of theoretical calculation results and experimental data are presented in convenient tables. The fourth and the fifth sections provide a description of the basic analytical forms of the DWF in the coordinate representation. The sixth section describes the "improved" analytical forms of the DWF. The seventh section suggests new analytical forms of the DWF used in the modern scientific literature. Coefficients for the new analytical forms of the type r^n*A_i*exp(-a_i*r^3) have been calculated. The main objectives of the research in this paper are to systematize the analytical forms of the DWF in the coordinate representation, and to calculate and analyze the coefficients for the new analytical forms.

2. Deuteron wave function

The wave function describes a quantum-mechanical system and is the basic characteristic of microobjects. Knowledge of the deuteron wave function provides the maximal information on the system and allows one to calculate theoretically the characteristics measured in experiment. The DWF is found as the solution of a system of coupled Schrodinger equations. The deuteron wave function is written as the sum of the wave functions for the ^3S_1- and ^3D_1- states <cit.>:

Ψ_d = ψ_S + ψ_D = u(r)/r Y_101^1 + w(r)/r Y_121^1,

where u(r) and w(r) are the radial deuteron wave functions for the states with the orbital momenta l=0 and 2; Y_JLS^M (θ, ϕ) are spherical harmonics, determined by the orbital momentum L, spin S, the full momentum J=L+S and its projection M on the z axis. For the deuteron: J=M=S=1. The normalization condition for the DWF Ψ_d can be written as

p_S + p_D = ∫_0^∞ ( u^2(r) + w^2(r)) dr = 1,

where p_S and p_D are the probabilities of finding the deuteron in the S- and D- states, respectively. Taking into account the spherical harmonics, it is possible to write down a system of coupled differential equations of the second order for the deuteron:

{[ d^2u/dr^2 + (- k^2 - U_1 )u = √(8) U_T w,; d^2w/dr^2 + (- k^2 - 6/r^2 - U_2 )w = √(8) U_T u.; ].

Here U_1, U_2 are the normalized potentials of the channels l=0; 2; U_T is the tensor component of the NN- interaction; U_i (r) = 2μ/ħ^2 V_i (r); k^2 = 2μ/ħ^2 E is the wave number. Near the origin of coordinates the D- state wave function w(r) is small, because the repulsive centrifugal barrier ħ^2 l(l + 1)/mr^2 prevails at small distances.
Outside the radius of action of the forces, the behaviour of w(r) is also determined by this barrier, which sets the asymptotics as <cit.>:

w(r) ∼ C exp(- γ r)[ 1 + 3/γ r + 3/( γ r)^2].

In paper <cit.> it was specified that one can divide the main models into four categories: 1) models based on quantum chromodynamics; 2) the effective field theory, another outstanding approach to the NN problem; 3) boson exchange models; 4) almost purely phenomenological NN potentials. In recent decades the second and fourth groups of potentials have been used ever more often and more intensively for the description of the properties of the deuteron and the character of its interaction with light nuclei. Fig. 1 shows the interest of researchers in the deuteron and its properties according to the literature quoted in this article. The interest is obviously not fading. It is connected first of all with studying those processes and interactions in which the deuteron is a direct participant, and knowledge of its DWF is necessary for the substantiation and explanation of the corresponding models and for the interpretation of the obtained experimental data, in particular the tensor polarization.

[Figure Graph1.JPG] Fig. 1. Interest of researchers in the deuteron

3. Deuteron properties

Based on the known DWFs one can calculate the deuteron properties: the deuteron radius r_m, r_m = 1/2{∫_0^∞ r^2[ u^2(r) + w^2(r)]dr}^1/2; the quadrupole moment Q_d, Q_d = 1/20∫_0^∞ r^2 w(r)[ √(8) u(r) - w(r)]dr; the magnetic moment μ_d, μ_d = μ_s - 3/2(μ_s - 1/2)P_D; the D- state probability P_D, P_D = ∫_0^∞ w^2(r)dr; the "D/S- state ratio" η, η = A_D / A_S; and the triplet effective range ρ. In the formula for μ_d the quantity μ_s = μ_n + μ_p is the sum of the magnetic moments of the neutron and the proton. The value of the calculated magnetic moment of the deuteron is given in nuclear magnetons μ_N. The values of these static properties of the deuteron, calculated for different potential models or wave functions of various origins, are given in Table 1. Knots of the radial DWFs u(r) and w(r) are designated as r_u and r_w.

Table 1. Deuteron properties Years Potential or DWFr_u (fm)r_w (fm)E_d (MeV)r_m (fm)Q_d (fm^2)P_D (%)ηA_S (fm^-1/2)Ref. 1940Neutral theory (zero cut-off)--0.2706.8<cit.>1940Neutral theory (straight cut-off)--0.2616.63<cit.> 1941Results of Rarita-Schwinger--2.173.9<cit.> 1954Results of Brueckner-Watson (V_T=-500 MeV)1.970.3257.60<cit.> 1954... (V_T=300 MeV)1.860.2775.10<cit.>1955Trial functions2.2270.287.11<cit.>1955Gartenhaus DWF--0.29-0.3086.8-7.0<cit.> 1956Pion-theoretical wave function0.40.40.285-80.0245<cit.>1956Variational wave function--17<cit.> 1958Hulthen type DWF-<cit.> 1959GT-Potential0.40.42.2880.2636.3<cit.> 1960Hamada--1.70.2736.70.0258<cit.> 1960Pion-theoretical DWF-0.150.267<cit.> 1961Hamada--9.90.029<cit.> 1962Hamada-Johnston--2.2260.2856.970.02656<cit.> 1963Martin's method0.1374<cit.> 1964Hulthen wave function0.15- <cit.>1964separable potential--2.2253.2 <cit.>1966Hamada-Johnston (analytic)--0.28270.0269<cit.>1966Hamada-Johnston (Hulthen)-- <cit.> 1966Hamada-Johnston-Partovi0.50.5 <cit.> 1966Soft core-2.227<cit.> 1968Relative harmonic oscillator basis--2.10.3253.6 <cit.>1968Effective nucleon-nucleon potential (A, B, F variants)1.99; 2.20; 2.130.272; 0.266; 0.2271.94; 1.97; 2.59<cit.> 1968Soft core Reid680.010.012.22460.279646.46960.026220.87758 <cit.> 1968Hard core Reid680.380.382.22460.2776.4970.02590.88034 <cit.>1969Non-static OBEP (set 1)--2.20.266.3<cit.>1969...
(set 2)2.30.255.4<cit.>1970Modified HJ v10.40.42.2260.28456.9530.02642<cit.> 1970Modified HJ v30.40.42.22560.28676.9640.02674<cit.> 1970Modified HJ v90.40.42.26800.28697.0500.02768 <cit.>1971Velocity dependent potentials from the various models: distributed mass scalar--2.2240.2754.6 <cit.>1971L^2 force--2.2240.2624.0<cit.> 1971Contact term--2.2240.2584.9<cit.> 1971Phenomenological charge dependent--2.2240.2404.1 <cit.>1972OBEP2.20.266.3<cit.> 1973Local nucleon-nucleon potential A2.2240.2624.43 <cit.>1973... B2.2240.2625.25 <cit.>1973... C2.2240.2795.45 <cit.>1973UT1010.6; 0.80.6; 0.80.279<cit.> 1973UT1020.70.70.279 <cit.>1973UT1030.6; 0.90.6; 0.90.279 <cit.>1974Boundary condition model2.22620.27745.200.026170.8858 <cit.>1974Reid hard core2.22470.27696.490.025840.8774<cit.> 1974Yale 2.19390.27576.950.025050.8804<cit.> 1974Hamada-Johnston2.27100.28377.020.026860.8921<cit.> 1974Bryan-Scott potential2.18410.25895.440.023750.8687 <cit.>1974Ueda-Green I potential1.95560.28115.470.022910.8455<cit.> 1974Ueda-Green I potential2.20520.27976.010.025670.8881 <cit.>1974Ueda-Green III potential2.53150.26054.930.028170.9349 <cit.>1974Separable potential2.2230.28870.0437<cit.> 1975Approximation for Yale potential2.18880.2766.95 <cit.>1975RSC--0.2806.47<cit.> 1975RHC0.50.50.2776.50 <cit.>1975HJ potential0.50.50.2846.95 <cit.>1975RHC+Baker transf. of u(r)-0.2766.50<cit.>1975RSC+u-w twist-1.20.2684.35 <cit.> 1975RSC+UT1010.80.80.2796.47<cit.> 1975OBEP HM2.2241.860.2845.75 <cit.>1975OBEP SCH2.9101.790.2494.85<cit.> 1975OBEP GTG2.9851.760.2524.88 <cit.>1975OBEP UNG2.5111.810.2664.40 <cit.>1975Refitted OBEP SCH'2.2241.850.2845.82 <cit.>1975Refitted OBEP GTG'2.2231.850.2966.10 <cit.>1975Refitted OBEP GTG”2.2271.820.2855.67<cit.> 1975Meson exchange model F_0F_1'--2.2276.17<cit.> 1975One-boson-exchange potential0.480.482.2246445.920.0251 <cit.>1975OBEH(R)0.4-2.2310.27476.23 <cit.>1975OBEH(NR)2.2320.27215.57 <cit.>1975OBEG(R)--2.2270.27406.14 <cit.>1975OBEG(NR)2.2050.27205.58<cit.> 1975OBEV(R)--2.2050.27455.63<cit.> 1975OBEV(NR)2.2440.26985.23<cit.> 1975Super-soft-core potential2.22450.2825.92<cit.> 1976OBEP Holinde-Machleidt model--2.2241.860.2845.75 <cit.>1976OBEP Holinde-Machleidt model2.22461.790.28644.32<cit.> 1976Exact, Kim-Vasavada's, Brysk- Michalik's DWF- <cit.> 1977Analytic wave function --0.2884<cit.> 1977RSC potential with pion Compton wavelength0.30.30.2732- 0.27984.5-6.5 <cit.>1978Analytic wave function 0.25-0.50.3-0.5 <cit.>1978KLS--2.160.0930.32<cit.> 1978Graz I--2.2250.2882.63<cit.> 1978Mongan II-1.22.2230.2751.12<cit.> 1978Low-energy nucleon-nucleon potential from Regge-pole theory--0.27755.390.02550.8015 <cit.>1979Interactions in the core region0.50.50.2795.45<cit.> 1979Super soft-core potential--0.2795.45 <cit.>1979OBE (λ=0)0.2-4.74 <cit.>1979OBE (λ=0.4)0.2-4.78<cit.> 1979OBE (λ=1.0)0.250.53.60 <cit.>1979OBEP model2.224640.2846.360.02610.797<cit.> 1980Paris potential--2.22490.2795.770.02608<cit.>1980Four-component relativistic models0.30.2-0.6<cit.> 1980S potential0.2866.70.026 <cit.>1980SF potential0.2854.00.027 <cit.>1980QT interactions0.3524.10.038 <cit.>1981YY71.7220.2837.00.029 <cit.>1981YY41.7230.2834.00.029<cit.> 1981T4D-2 1.7440.2824.0-0.004<cit.> 1981T4D-11.2010.2824.0-0.004<cit.> 1981Urbana potential2.2250.2735.20.025<cit.> 1984PEST potential2.22490.2795.770.0261<cit.> 1984FSP0.50.52.22461.95490.27276.3150.025440.8766 <cit.>1984Mehdi-Gupta parametrization (shape-1)0.1978- 0.27452-6 <cit.>1984Mehdi-Gupta parametrization (shape-2)0.2252- 0.28132-6<cit.> 1984Argonne v14 
--2.22500.2866.080.02660.845 <cit.>1984Argonne v28 --2.22500.2866.130.02650.846<cit.> 1985Realistic superdeep local NN-potential (Moscow)0.550.552.22461.96110.28606.780.02690.8814 <cit.>1986BEST potential2.225 0.28554.580.02670.8950 <cit.>1986Quark compound bag model (b=1.2 fm)5.330.026090.8945 <cit.>1986... (b=1.4 fm)4.660.026090.8757 <cit.>1986... (b=1.6 fm)4.260.026090.8884<cit.> 1986Positive short range tensor model potential-0.82.224641.97260.28600.026390.8847<cit.> 1987NN potentials with six-quark core radius b=1fm2.224621.960.2765.70.0258<cit.>1987... b=1.2fm2.224621.990.2865.30.0263 <cit.>1987Certov- Mathelitsch- Moravcsik DWFup 0.1up 0.11.959- 1.9750.2804;6;80.02610.88688<cit.> 1987Microscopic meson-quark cluster model (set A)--0.2665.23 <cit.>1987... (set B)--0.2685.33 <cit.>1987OBEP full model-0.32.22462.00160.28074.2490.026680.9046 <cit.>1987OBEPQ-0.04; 0.52.22461.96840.2744.380.02620.8862<cit.> 1987OPE0.250.2560.0262<cit.>1988Nonlocal potential (λ=5fm^ - 3)--2.224481.968800.239534.99890.021980.8861 <cit.>1988Nonlocal potential (λ=375fm^ - 3)0.50.82.224661.985470.302708.81810.025700.8856 <cit.>1988Phenomenological realistic DWF1.9530.2860.02680.8800 <cit.>1989OBEPA-0.05; 0.42.224521.96930.2744.380.02630.8867<cit.>1989OBEPB-0.022.224611.96880.2784.990.02640.8860<cit.> 1989OBEPC-0.012.224591.96740.2815.610.02660.8850<cit.> 1989Quark compound bag model QCB820.40.42.2245740.27775.340.025930.8891<cit.> 1989... QCB860.60.62.2245740.27865.470.025970.8894 <cit.>1990Quark cluster model (set A and B)--5.4; 4.9<cit.> 1990Quark compound bag model (b=1.2 fm)2.22491.97250.2795.300.02610.8874<cit.> 1990Quark compound bag model (b=1.35 fm)2.22491.97510.2784.660.02610.8889 <cit.>1991Padua potential--2.22491.97250.2795.30.02610.8874 <cit.>1992Full folded-diagram potential2.22440.27965.220.02640.8886<cit.>1992Moscow NNmodel0.65-2.22451.95920.28596.750.0269<cit.> 1993Nonlocal potential0.50.52.22421.9530.28626.5440.02870.8898<cit.> 1993Coupled-coupled folded-diagram potential2.22450.28525.580.02670.8927 <cit.>1994OPE (R=0.8906313)--1.93660.27515.8620.026530.86952 <cit.>1994Inversion potential--2.2245791.97020.28165.910.02640.8860<cit.> 1994Nijm-3--2.2245761.96720.27055.530.02520.8848 <cit.>1994Quantum inversion by Newton-Fulton (original)-1.32.2321391.850.2752.090.0180810.8269 <cit.>1994Newton-Fulton (wrong)-1.82.2321391.9350.09251.000.0180710.8753<cit.> 1994Newton-Fulton (correct)--2.2321391.9470.23106.770.0180810.8753 <cit.>1994Quark cluster model--2.22461.96574.910.02610.8765 <cit.>1994Nijm I2.2245750.27195.6640.02530.8841 <cit.>1994Nijm II2.2245750.27075.6350.02520.8845 <cit.>1994Reid 932.2245750.27035.6990.02510.8853<cit.> 1994Nijm 932.2245750.27065.7550.02520.8842 <cit.>1995Complex Kohn variational--2.22980.02634<cit.> 1995OBEPR, OBEPR(A), OBEPR(B)-0.2<cit.> 1995Argonne v18--2.224571.9670.2705.760.02500.8850<cit.>1995NijmI, NijmII, Reid93--<cit.> 1996SDA2.22461.9650.2753.59480.027150.885 <cit.>1996SDB2.22461.96490.27503.62330.027060.8850 <cit.>1996SDC2.22461.96460.27493.42020.027230.8849 <cit.>1996SDD2.22461.96570.27504.315l0.026470.8849 <cit.>1996Reid, Paris,Urbana, Argonne v18--<cit.> 1996Resonating-group method (RGM-F)2.2741.9330.27525.3910.0264 <cit.>1996FSS--2.2441.9660.28455.8790.0272 <cit.>1996RGM-H2.2241.9860.27504.9980.0251 <cit.>1996Effective chiral Lagrangian model fitted values (Λ=2.5fm^ - 1)2.150.2462.980.0229 <cit.>1996... (Λ=3.9fm^ - 1)-12.240.2492.860.0244 <cit.>1996... 
(Λ=5fm^ - 1)2.180.2372.40.023<cit.> 1998One solitary boson exchange potential (OSBEP)2.224591.95540.27286.00.02560.8788<cit.> 1998Moscow A0.50.52.22441.966.590.0267 <cit.>1998Moscow B0.50.52.22461.955.750.0258<cit.> 1998Moscow C0.50.52.22461.946.140.0262<cit.> 1999OPE0.80.82.2245891.9650.28595.860.02710.8836 <cit.>2000NLO--2.16501.9750.2663.620.02480.866 <cit.>2000NNLO1.1-2.22381.9670.2626.110.02450.884<cit.> 2000NNLO-Δ2.18491.9700.2685.000.02470.873<cit.> 2000Local NN Potential LP10.5-0.52-2.22461.9650.2715.620.02530.884 <cit.>2000... LP20.5-0.52-2.22461.9660.2745.750.02560.884 <cit.>2000... LP30.5-0.52-2.22461.9670.2796.000.02610.884<cit.> 2000... LP40.5-0.52-2.22461.9680.2856.230.02660.884 <cit.>2000... LP50.5-0.52-2.22461.9680.2906.560.02730.884<cit.> 2001Argonne V18-- <cit.>2001Bonn C--1.9680.28145.600.0266<cit.> 2001FSS2 (Isospin basis)--2.22501.95980.26965.4900.02527 <cit.>2001FSS2 (Particle basis, Coulomb off)2.22611.95990.26965.4900.02527 <cit.>2001FSS2 (Particle basis, Coulomb on)2.23091.95820.26945.4940.02531<cit.> 2001CD-Bonn-0.12.2245751.9660.2704.850.02560.8846 <cit.> 2001Separable potentials with the Laguerre form factors0.2-0.28195.7290.02520.8845 <cit.>2001Idaho-A--2.2245751.97560.2814.170.02560.8846<cit.> 2001Idaho-B--2.2245751.97580.2844.940.02550.8846 <cit.>2003Nij1 transformed0.60.4<cit.> 2003Nij2 transformed-- <cit.>2003DBS model NN0.60.52.224542.0040.2865.420.02590.9031<cit.> 2003DBS model NN + 6q2.224541.9720.2755.220.02640.8864<cit.> 2003Idaho N3LO (500)2.2245751.9780.2854.510.02560.8843<cit.> 2004Exponential potential0.5-2.22461.9600.2836.220.02650.881<cit.> 2004Modified Moscow2.224531.9560.2866.7760.02690.879<cit.> 2004ISTP v.0--2.2245751.98774.2710.02520.8845<cit.> 2004ISTP v.1--2.2245751.99975.6200.02520.8845<cit.> 2004ISTP v.2--2.2245751.96805.6960.02520.8629<cit.> 2005OPE-η (LO)1.94230.1321600.8752<cit.> 2005OPE-pert (NLO)1.64290.455500.0510.7373 <cit.>2005OPE-exactup 0.5up 0.51.93510.27627.880.026330.8681<cit.> 2005Nonrelativistic DWF--2.22452.1080.2859<cit.> 2005NLO--2.171- 2.1861.973- 1.9740.273- 0.2753.46- 4.290.0256- 0.02570.868- 0.873 <cit.>2005NNLO--2.189-2.2021.970-1.9720.271- 0.2753.53-4.930.0255-0.02560.874- 0.879 <cit.>2005N3LO0.5-2.216-2.2231.973-1.9850.264- 0.2682.73-3.630.0254-0.02550.882-0.883 <cit.>2006Moscow0.5-2.22461.96390.26740.027140.8892 <cit.>2007Moscow0.5-<cit.> 2007Renormalized OPE and TPE chiral potentialsup 0.5up 0.50.02633; 0.02564<cit.> 2007MT wave function--2.2249961.9720.27316.20.0253<cit.>2007JISP16--2.2245761.96470.29154.1360.02520.8629 <cit.>2008LO χET, NNLO χETup 0.5up 0.61.90- 2.060.276- 0.3596.98- 10.080.0251- 0.03020.845- 0.925 <cit.>2008OPE0.450.52.2245751.93510.27627.880.026340.8681 <cit.>2008HB-TPE set IV2.2245751.9670.2768Input0.884 <cit.>2008RB-TPE set IV0.1-0.60.552.2245751.85260.308722.990.031980.8226<cit.> 2008RB-TPE set η-0.5-0.82.2245751.967760.27495.590.025660.88426<cit.> 2009NNLO0.50.5<cit.> 2009LO0.2; 0.50.2; 0.51.93510.27627.310.026330.8681<cit.> 2009NLO-Δ0.1-0.60.1-0.81.9630.2745.90.884 <cit.>2009N2LO-Δ0.1-0.70.1-0.61.9800.2795.90.892<cit.> 2010Oxford potential2.22461.97670.28715.6040.02620.8918 <cit.>2011GWU PWA2.2245751.95570.28520.02560.8764 <cit.>2011Nijm PWA932.2245751.96730.28840.02560.8845<cit.> 2011Yakawa Potential-2.228 <cit.> 2012Hulthen wave function--<cit.> 2013δ shell potential1.96450.26795.620.024930.8829<cit.> 2014DWF in continuum basis--2.2106.3 <cit.>2014Coarse-grained NN potential with chiral two-pion exchange1.96890.26585.300.024730.8854<cit.>2014Statistical error analysis for 
potentials1.97440.26455.300.024480.8885<cit.> 2014Standard Wood-Saxon potential--1.95320.27696.659<cit.> 2014Generalized Wood-Saxon potential1.72690.28185.056<cit.> 2014Modified Wood-Saxon potential1.95320.28364.86<cit.> 2015Idaho N3LO (500)--2.22461.9750.2754.510.02560.8843<cit.>2015Juelich N3LO (550/600)--2.21961.9770.2663.280.02540.8820 <cit.> 2015Improved N3LO (R=0.8fm)0.50.82.22461.9700.2683.780.02550.8843 <cit.> 2015... (R=0.9fm)0.5-2.22461.9720.2714.190.02550.8845<cit.>2015... (R=1.0fm)--2.22461.9750.2754.770.02560.8845 <cit.> 2015... (R=1.1fm)--2.22461.9790.2795.210.02560.8846 <cit.> 2015... (R=1.2fm)--2.22461.9820.2835.580.02560.8846<cit.>2015FSS2--2.22061.9610.2705.520.0252 <cit.>2015Nonlocal potentials with chiral TPE including Δ resonances. Model a--2.2245751.9480.2574.940.02450.8777 <cit.>2015... Model b--2.2245741.9750.2685.290.02480.8904<cit.> 2015... Model c--2.2245751.9890.2695.550.02460.8964 <cit.>

Experimental values of the static properties of the deuteron are given in Table 2 <cit.>.

Table 2. Experimental properties of the deuteron
Properties | Values | Ref.
Spin | 1 |
Mean life | Stable |
Mass (u) | 2.01410219(11) | <cit.>
Mass (MeV) | 1875.61282(16) | <cit.>
Magnetic moment (μ_N) | 0.8574382308(72) | <cit.>
E_d (MeV) | 2.22456612(48) | <cit.>
r_m (fm) | 1.975(3) | <cit.>
Q_d (fm^2) | 0.2859(3) | <cit.>
η | 0.0256(4) | <cit.>

According to the general mathematical theorem on the number of knots of the eigenfunctions of boundary value problems <cit.>, the function describing the ground state of a particle vanishes only at the ends of the interval and has no knots inside it. In paper <cit.> S. B. Dubovichenko considered the possibility of the existence of knots in the DWF. If the deuteron is considered as a six-quark system, then in accordance with a generalized Levinson theorem <cit.> the triplet S scattering phase starts from 360^o and the singlet one from 180^o up to 220^o. In the D wave a single bound state is allowed, which, together with the S wave, determines the ground state of the deuteron. Therefore, the presence of knots is due either to the numerical calculations or to the potential model used.

4. Analytical forms of DWF in the years 1939-1969

When describing the DWF in the coordinate representation, such terms as "analytical shape (form)", its "approximation" or its "parameterization" are used. The term "analytical form" is used first of all for the obtained solution of a system of coupled equations; in later works this expression is used to refer to representations of the DWF resulting from an approximation. The analytical forms of the deuteron wave function below are given using the designations specified in the quoted literature. The work by Flugge <cit.> in 1939 was one of the first works on research of the deuteron and its quadrupole moment. For the calculations, the following deuteron functions for the S- and D- states were used:

ψ_S = (α b)^3/2/√(8π) exp(- 1/2 α br), ψ_D = (α b)^7/2/√(2176π) r^2 exp(- 1/2 α br),

where a=1.3 and α=1.34. H. A. Bethe <cit.> was one of the first to consider the deuteron as a mixture of the ^3S_1 and ^3D_1 states.
Then the complete wave function is

ψ = 1/r[ χ(r)F_10M + φ(r)F_12M],

where F_JLM are the angular functions and χ and φ are the radial wave functions of the S and D components. The radial deuteron wave functions satisfy the two coupled differential equations

[ d^2χ/dr^2 = Aχ - B√(2)φ,; d^2φ/dr^2 = ( A + B + 6/r^2) φ - B√(2)χ,; ]

where

[ A = ae^-r/3r + ε^2,; B = ae^-r( 1/r^3 + 1/r^2 + 1/3r).; ]

The potential was cut off at small distances; therefore two alternatives are considered:

1) Zero cut-off (r < r_0): A = B = 0;

2) Straight cut-off (r < r_0):

[ A = A_0 = ae^-r_0/3r_0 + ε^2;; B = B_0 = ae^-r_0( 1/r_0^3 + 1/r_0^2 + 1/3r_0 ).; ]

The outside solutions are the pairs

[ χ_1 = e^-ε r + a/6ε G(r),; φ_1 = - √(2) a/8ε^4{[ 2ε^3 + 3ε - 3/r^2 + (2ε - 1)ε^2/r]e^-(1 + ε)r + 3 - 4ε^2/2ε F(r)};; ]

[ χ_2 = - √(2) a/12{[ 1/r^3 + 1 + ε/r^2 + 2ε - 1/r]e^-(1 + ε)r + 3 - 4ε^2/12ε G(r)},; φ_2 = ( 1/r^2 + ε/r + ε^2/3)e^-ε r + a{H(r)e^-(1 + ε)r + [ 3/32ε^5 - 5/16ε^3 + 1/12ε]F(r)};; ]

where

[ G(r) = - e^-ε r Ei(- r) + e^ε r Ei(- r(1 + 2ε)),; F(r) = - e^-ε r Ei(- r)( 1/r^2 + ε/r + ε^2/3) + e^ε r Ei(- r(1 + 2ε))( 1/r^2 - ε/r + ε^2/3),; H(r) = 1/6r^3 + (- 3/16ε^4 + 3/16ε^3 + 3/8ε^2 - 1/4ε + 1/6 + 1/12ε)1/r^2 + (2ε^2 + 3)(2ε - 1)/48ε^2 r.; ]

The inside solutions are the pairs

[ χ_3 = z - 0.009z^5 - ... + log z(0.02z^5 + ...) + b(0.1667z^3 - ...) + b log z(...) + ...,; φ_3 = 0.01298z^5 + 0.01202z^7 + ... - log z(0.2828z^5 + ...) + b(0.0113z^5 + ...) + ...;; ]

[ - χ_4 = 0.07071z^5 + 0.002405z^7 + ... + b(0.004z^7 + ...),; φ_4 = z^3 + 0.00714z^5 + ... + b(0.0714z^5 + ...) + ...;; ]

where z = √(B_0) r; b = A_0/B_0 = 1/3a + r_0 ε^2 e^r_0/a( 1/3 + 1/r_0 + 1/r_0^2 ); r_0 = 0.4 fm. W. Rarita and J. Schwinger obtained the following differential equations for the ^3S_1 and ^3D_1 radial deuteron wave functions <cit.>:

[ d^2u/dr^2 + M/ħ^2[ E + J]u = - 2^3/2 γ M/ħ^2 J w,; d^2w/dr^2 - 6w/r^2 + M/ħ^2[ E + (1 - 2γ)J]w = - 2^3/2 γ M/ħ^2 J u.; ]

Outside the range of interaction these coupled equations are readily integrable.
These solutions read

[ u(r > r_0 ) = Ae^-α (r - r_0 ),; w(r > r_0 ) = Be^-α (r - r_0 )( 1 + 3/α r + 3/(α r)^2),; ]

where α = √(M| E_0 |/ħ^2), | E_0 | = - E = 2.17 MeV. At distances less than r_0 the differential equations for the ground state wave function are written in the following form:

[ ( d^2/dr^2 + κ^2)u(r) = - λ^2 w(r),; ( d^2/dr^2 - 6/r^2 + κ'^2)w(r) = - λ^2 u(r).; ]

Here the following notation is introduced:

[ κ^2 = M(V_0 - | E_0 |)/ħ^2,; κ'^2 = M((1 - 2γ)V_0 - | E_0 |)/ħ^2,; λ^2 = 2^3/2 γ M V_0/ħ^2.; ]

The procedure adopted was the expansion of the deuteron wave functions u(r) and w(r) in infinite power series:

[ u(r) = ∑_0^∞ A_n x^n + 1 + ln x ∑_0^∞ C_n x^n + 2,; w(r) = ∑_0^∞ B_n x^n + 3 + ln x ∑_0^∞ D_n x^n + 3,; x = r / r_0.; ]

The constants A_n, B_n, C_n, D_n satisfy the recursion formulas

[ (n + 1)(n + 2)A_n + 1 + (2n + 3)C_n + (κ r_0 )^2 A_n - 1 = - (λ r_0 )^2 B_n - 3,; (n + 1)(n + 2)C_n + (κ r_0 )^2 C_n - 2 = - (λ r_0 )^2 D_n - 3,; n(n + 5)B_n + (2n + 5)D_n + (κ' r_0 )^2 B_n - 2 = - (λ r_0 )^2 A_n,; n(n + 5)D_n + (κ' r_0 )^2 D_n - 2 = - (λ r_0 )^2 C_n - 1.; ]

The criterion of continuity of the logarithmic derivatives of the functions u(r) and w(r) gives two simple equations

[ ( r_0/u du/dr)_r = r_0 = - α r_0,; ( r_0/w dw/dr)_r = r_0 = - ( 2 + (α r_0 )^2(1 + α r_0 )/(α r_0 )^2 + 3α r_0 + 3),; ]

which suffice to determine B_0/A_0 and V_0 for a given choice of the parameters r_0 and γ. The constants A and B may be derived from the normalization condition ∫_0^∞ (u^2 + w^2)dr = 1, which takes the form

∫_0^r_0 (u^2 + w^2)dr + A^2/2α + B^2/2α( 1 + 6(1 + α r_0 )^2/(α r_0 )^3) = 1.

The final set of constants was calculated as V_0/| E_0 | = 6.4; γ = 0.775; r_0 = 2.8*10^-13 cm. Inside the range of interaction a general expansion for the DWF is <cit.>

[ u(r) = ∑_i A_i (κ_i r)^1/2 J_1/2(κ_i r) = ∑_i u_i (r),; w(r) = ∑_j B_j (λ_j r)^1/2 J_5/2(λ_j r) = ∑_j w_j (r),; ]

where u_i and w_j are the modes in terms of Bessel functions of order one-half or five-halves. The wave-lengths (κ_i and λ_j) of these modes are determined by the continuity of the logarithmic derivative. Also a different set of modes for the radial functions was taken, as an exponential times a power series in the interparticle distance:

[ u(r) = ∑_i A_i r^i exp(- λ r) = ∑_i u_i (r),; w(r) = ∑_j B_j r^j + 2 exp(- μ r) = ∑_j w_j (r).; ]

The parameters λ and μ for the radial DWF are practically fixed by minimizing the energy. Ref. <cit.> investigates the radial dependence of the tensor force in the deuteron. The solutions of the coupled Schrodinger equations for the DWF are found there with methods similar to the ones used by Rarita and Schwinger. The following ranges and regions are considered.

A. Range of tensor force equal to range of ordinary force: ε=1. For region I, r_0 ≥ r ≥ 0, the solutions obtained are

[ u = ∑_n (A_n + C_n ln x)x^n,; w = ∑_n (B_n + D_n ln x)x^n.; ]

For region II, ∞ ≥ r ≥ r_0, the solutions are

[ u = Aexp{- α (r - r_0 )} + Cexp{α (r - r_0 )},; w = Bexp{- α (r - r_0 )}[ 1 + 3/α r + 3/(α r)^2] + Dexp{α (r - r_0 )}[ 1 - 3/α r + 3/(α r)^2].; ]

B. Range of tensor force less than range of ordinary force: ε<1. For region I, ε r_0 ≥ r ≥ 0, the solutions are

[ u = ∑_n (A_n + C_n ln y)y^n,; w = ∑_n (B_n + D_n ln y)y^n.; ]

For region II, r_0 ≥ r ≥ ε r_0, the solutions are

[ u = A'sin(κ r) + C'cos(κ r),; w = B'[ sin(κ r) + 3/κ r cos(κ r) - 3/(κ r)^2 sin(κ r)] + D'[ cos(κ r) - 3/κ r sin(κ r) - 3/(κ r)^2 cos(κ r)].; ]

For region III, ∞ ≥ r ≥ r_0, the solutions are the same as (<ref>).

C.
C. Range of tensor force greater than range of ordinary force: ε > 1. For region I (r_0 ≥ r ≥ 0) the solutions are the same as (<ref>). For region II (ε r_0 ≥ r ≥ r_0 ) the solutions are [ u = ∑_n (A'_n + C'_n ln y)y^n,; w = ∑_n (B'_n + D'_n ln y)y^n .; ] For region III (∞ ≥ r ≥ ε r_0 ) the solutions are [ u = Aexp{ - α (r - ε r_0 )} + Cexp{α (r - ε r_0 )},; w = Bexp{ - α (r - ε r_0 )}[ 1 + 3/α r + 3/(α r)^2] + Dexp{α (r - ε r_0 )}[ 1 - 3/α r + 3/(α r)^2].; ] The pairs of equations for these regions are specified in <cit.>. The series coefficients satisfy the recurrence formulas [ (n + 1)(n + 2)A_n + 2 + (2n + 3)C_n + 2 + aA_n + cB_n = 0,; (n + 1)(n + 2)C_n + 2 + aC_n + cD_n = 0,; (n - 1)(n + 4)B_n + 2 + (2n + 3)D_n + 2 + bB_n + cA_n = 0,; (n - 1)(n + 4)D_n + 2 + bD_n + cC_n = 0.;] Here the following abbreviations are used: a = (κ r_0 )^2; a' = (α r_0 )^2; b = (κ 'r_0 )^2; b' = (α 'r_0 )^2; c = (λ r_0 )^2; [ α = √(ME_0 )/ħ; α ' = √(M(E_0 + 2γ V_0 ))/ħ;; κ = √(M(V_0 - E_0 ))/ħ; κ ' = √(M([1 - 2γ ]V_0 - E_0 ))/ħ;; λ = √(2^3 / 2γ MV_0 )/ħ; x = r/r_0 ; y = r/ε r_0 .; ] Outside the range of the NN interaction potential, u(r) and w(r) have the following form <cit.> [ u(r) = Nexp ( - r / ξ ),; w(r) = N'exp ( - r / ξ )[ 3(ξ / r)^2 + 3(ξ / r) + 1],; ] where the constant ξ is determined from the binding energy of the deuteron. The coupled equations (<ref>) have two independent solutions which satisfy the boundary conditions; they are denoted by ψ _1 = (u_1 ,w_1 ), ψ _2 = (u_2 ,w_2 ). Any solution of (<ref>) is given by ψ _1 + αψ _2 = (u_1 + α u_2 ,w_1 + α w_2 ). For the core radius r_0 [ u_1 (r_0 ) + α u_2 (r_0 ) = 0,; w_1 (r_0 ) + α w_2 (r_0 ) = 0,;] therefore r_0 is a zero of the determinant | [ u_1 (x) u_2 (x); w_1 (x) w_2 (x); ]| and α is given by α = - u_1 (r_0 )/u_2 (r_0 ). The static parameters determine α through α = [- (bX - B)±√(D)]/(aX - A), where A, B, C, a, b, c are integrals quadratic in the wave functions: a = ∫(u_2^2 + w_2^2 )dr ; b = ∫(u_1 u_2 + w_1 w_2 )dr ; c = ∫(u_1^2 + w_1^2 )dr ; D = X^2(b^2 - ac) - X(2bB - Ac - aC) + (B^2 - AC). The assumed potentials confine the physical value X to some limited region; for example, numerical results are given for V_C = -500 MeV; V_T = -500 or 300 MeV. In the method for the solution of the deuteron problem and its application to a regular potential, the following sets of trial functions were applied <cit.>: {[ u = are^ - μ r,; w = bre^ - μ r,; ]. {[ u = ar^2e^ - μ r,; w = br^3e^ - μ r,; ]. or {[ u(r) = - 0.822ψ _30 - 0.3965ψ _31 - 0.2289ψ _32 - 0.1172ψ _33 - 0.1729ψ _34 ,; w(r) = 2.25466w_0 + 13.6903ψ _30 - 9.9299ψ _31 + 0.7286ψ _32 + 0.4131ψ _33 + 0.1079ψ _34 ,;]. where w_0 = 1/√(2)ψ _10; the ψ _3i are Laguerre functions. A nucleon-nucleon potential which is a well-defined static limit of a phenomenological covariant interaction is suggested in <cit.>. For this model a variational wave function with the correct behavior at the origin and at infinity was used: [ u(r) = e^ - r - e^ - α r,; w(r) = N[ ( 3/r^2 + 3/r + 1)e^ - r - ( 3/r^2 + 3α/r + 3α ^2 - 1/2 + α r(α ^2 - 1)/2)e^ - α r],;] where α = 5 and N = 0.1 are the approximate values of the variational parameters. With the normalization ∫_0^∞( u^2(r) + w^2(r))dr = 1, the pion-theoretical deuteron function is written as the analytical expression <cit.> [ u(r) = 1.039e^ - 0.328r - 1.392e^ - 2.360r,; w(r) = 0.02624{1 + 3/0.328r + 3/(0.328r)^2}e^ - 0.328r - 1.298/r^2e^ - 0.962r.;] In <cit.> the Gartenhaus wave function from the cut-off meson theory <cit.> is approximated by an analytic expression.
Such expressions are useful in the various integrals that arise in calculating phenomena involving the deuteron. Three such approximations of varying degrees of accuracy are specified below.
Approximation 1. The best Hulthen-type wave function is defined by the form u(r) = C( e^ - α r - e^ - β r). Its parameters C and α agree with the asymptotic behavior of the Gartenhaus S function, and β is found from the normalization of the two functions according to the formulas ∫_0^∞u^2dr = 4.025; ∫_0^∞w^2dr = 0.29. The resulting values of these parameters are: C = 1.85 or 1.91; α = 0.232; β = 1.202.
Approximations 2 and 3. The next, even better approximations, using only exponential functions, have the forms u(r) = {[ C( 1 - e^ - 1.59r)( e^ - 0.232r - e^ - 1.59r),; C( 1 - e^ - 2.5r)( 1 - e^ - 1.59r)( e^ - 0.232r - e^ - 1.90r).;]. A good approximation to the D function using only exponential functions is the following: w(r) = {[ 0.658r^3 (small r),; 2.34r^3e^ - 2r (intermediate r),; 0.147e^ - 0.256r + 0.810e^ - 0.577r (large r),;]. which agrees with the Gartenhaus D-state function everywhere within 4 percent.
For the relativistic DWF (in particular for the S state) the authors of <cit.> find ψ (r) = ∑_0^∞A_q (r)G_0^2q( π/2) , where the G_0^2q are Gegenbauer polynomials at argument π/2. The radial DWFs in coordinate and momentum space are Bessel-Fourier transforms of each other: A_q (r) = ( - 1)^q/(2π )^2∫_0^∞A_q (p)J_2q + 1 (pr)/prp^3dp . In <cit.> the elastic scattering of high-energy neutrons by the deuteron is investigated, using DWFs calculated with the meson-theoretical potential:
1) the DWF with the hard core: u(r) = {[ N{exp ( - α (r - r_C )) - exp ( - β (r - r_C ))}, r ≥ r_C ,; 0, r ≤ r_C ,;]. where N^2 = αβ (α + β )/2π (α - β )^2;
2) the DWF without the hard core: u(r) = N( exp ( - α r) - exp ( - β r)).
For the deuteron state, <cit.> considered the pion-theoretical wave function given in <cit.>: ψ = 1/√(4π)[ u(r)/r - 1/√(8)S_12w(r)/r]1/√(2)( ξ _1 η _2 - η _1 ξ _2 )χ _m^l . The plane-wave approximation has the conventional form ψ _f = 1/√(2){ξ _1 η _2 exp [ikr] - ξ _2 η _1 exp [ - ikr]}χ _m^l . Here k is the relative propagation vector of the nucleons; ξ _i, η _i are the isotopic spin wave functions in the proton and neutron states; χ _m^l is the triplet spin function. For simplicity of calculation of the photodisintegration of the deuteron in the high-energy range, the following analytical form, which approximates the deuteron wave function very well in the outer region, is used <cit.>: [ u(r) = A_S [ e^ - α r - e^ - β r],; w(r) = D_1 e^ - α _1 r + D_2 e^ - α _2 r + D_3 e^ - α _3 r.; ] The parameters are chosen as [ A_S = 1.039; D_1 = 0.111; α _1 = 0.4;; α = 0.328; D_2 = 0.656; α _2 = 1;; β = 1.972; D_3 = - 0.767; α _3 = 2.; ] The wave function and its parameters reproduce the calculated deuteron parameters P_D = 7%, Q = 2.6·10^-27 cm^2.
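Since every parameter of this photodisintegration form is quoted explicitly, one can verify numerically that the set reproduces the stated D-state probability. A minimal sketch follows; the integrals are also elementary analytically (for the D wave, Σ_ij D_i D_j/(α_i + α_j)), but quadrature keeps the sketch short:

```python
import numpy as np
from scipy.integrate import quad

# S wave: u(r) = A_S [e^{-alpha r} - e^{-beta r}];  D wave: w(r) = sum_i D_i e^{-alpha_i r}
A_S, alpha, beta = 1.039, 0.328, 1.972
D = np.array([0.111, 0.656, -0.767])
a = np.array([0.4, 1.0, 2.0])

u = lambda r: A_S * (np.exp(-alpha * r) - np.exp(-beta * r))
w = lambda r: np.sum(D * np.exp(-a * r))

norm_u, _ = quad(lambda r: u(r)**2, 0, np.inf)
norm_w, _ = quad(lambda r: w(r)**2, 0, np.inf)
print(f"int u^2 dr = {norm_u:.4f},  int w^2 dr = {norm_w:.4f}")
print(f"P_D = {norm_w / (norm_u + norm_w):.3%}")   # comes out close to the quoted 7%
```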
In <cit.> the deuteron wave functions used are of the Hulthen-Sugawara type <cit.>, ψ _D (r) = N/√(4π){u_g (r)/r + S_12/√(8)w_g (r)/r}χ _m , where [ u_g (r) = cosε _g [ 1 - e^ - β (x - x_C )]e^ - x,; w_g (r) = sinε _g [ 1 - e^ - γ (x - x_C )]^2e^ - x[ 1 + 3(1 - e^ - γ x)/x + 3(1 - e^ - γ x)^2/x^2],; ] N^2 = 7.6579×10^ - 12 cm^ - 1; x = α r; x_C = α r_C; α = 0.2316 fm^ - 1; r_C is the hard-core radius. Two sets of values were selected for the D-state probability: β = 7.961; γ = 3.798; sinε _g = 0.02666 for the 4% D state, and β = 7.451; γ = 4.799; sinε _g = 0.02486 for the 6% D state. The numerical deuteron wave function for the Yale nucleon-nucleon potential has been approximated by analytic expressions <cit.> that contain only exponential functions. A first approximation consisted of a Hulthen function for the S wave of the form u_1 (r) = Ae^ - α r - Be^ - β r. The values of the parameters A and α are determined by the asymptotic behaviour of the radial wave function, B by the boundary condition at the hard core, and β by the required normalization of the S state. The result for these parameters is A = 1.04965; α = 0.331; B = 2.57955; β = 2.900. An improved approximation to u(r) is obtained with the function u_2 (r) = ( 1 + 1.039e^ - 5r - 8e^ - 10.58r)( 1.0459e^ - 0.331r - 2.5702e^ - 2.9r). To fit the Yale D-state data, a suitable approximation was obtained with a function of the form w(r) = {[ Ae^ - α r - Be^ - β r, 0.35 ≤ r ≤ 3.416;; Ce^ - γ r + De^ - δ r, 3.416 ≤ r.; ]. The values of the constants are [ A = 0.46354; α = 0.6636; B = 0.24479; β = 5.4183;; C = 0.13436; γ = 0.417; D = 0.85599; δ = 1.1703.;] For the Schrödinger equations for the deuteron radial wave functions, a solution of the following form is sought <cit.>: ( [ u(r); w(r);]) = a( [ f_1 (r); g_1 (r); ])exp [ - κ r] + b( [ f_2 (r); g_2 (r); ])exp [ - κ r]. The deuteron wave function in both the S and D states is constructed following Martin's method, which allows the analytical solutions to be written down as [ u(r) = Ae^ - κ r[ 1 + ∫_1^2 e^ - α rρ ^ + (α )dα + H∫_1^2 e^ - α rρ ^ - (α )dα],; w(r) = Ae^ - κ r[ H + ∫_1^2 e^ - α rσ ^ + (α )dα + H∫_0^2 e^ - α rσ ^ - (α )dα],;] where A = a + b; H = a - b/a + b; ρ ^± = 1/2( ρ _1 ±ρ _2 ); σ ^± = 1/2( σ _1 ±σ _2 ). In Martin's method it is considered that [ f_λ = 1 + ∫_0^∞ρ _λ (α )e^ - α rdα ,; g_λ = η _λ + ∫_0^∞σ _λ (α )e^ - α rdα ,; ] are solutions of the modified Schrödinger equations [ f_λ ^” - 2κ f_λ ^' - U_C f_λ = U_T g_λ ,; g_λ ^” - 2κ g_λ ^' - (6/r^2 + U_m)g_λ = U_T f_λ .; ] The “inner” part of the interaction is taken into account in the wave functions themselves by adding two terms, for the two-pion exchange and for the repulsive nucleon core. For the pair of functions u(r) and w(r) the solutions then take the following form (it clearly fixes the normalization of the functions): [ u(r) = e^ - κ r[ 1 + ∫_1^2 e^ - α rρ ^ + (α )dα + H∫_1^2 e^ - α rρ ^ - (α )dα + γ _1 e^ - ξ _1 r + γ _2 e^ - ξ _2 r],; w(r) = e^ - κ r[ H + ∫_1^2 e^ - α rσ ^ + (α )dα + H∫_0^2 e^ - α rσ ^ - (α )dα + γ _3 e^ - ξ _1 r + γ _4 e^ - ξ _2 r],; ] where H, γ _i, ξ _i are parameters to be fixed. This is a representation of the DWFs with tensor forces.
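Martin's representation, an exponential tail dressed by a spectral integral over masses α in [1, 2] (units of the pion mass), is easy to explore numerically. In the following sketch the weights ρ^±(α) and the constants κ, A, H are hypothetical placeholders (the text leaves them to be fixed); the point illustrated is that the bracket tends to 1 at large r, recovering the pure e^{-κr} asymptotics:

```python
import numpy as np
from scipy.integrate import quad

kappa, A, H = 0.33, 1.0, 0.025               # placeholder constants
rho_p = lambda a: (a - 1) * (2 - a)          # hypothetical smooth weight rho^+ on [1, 2]
rho_m = lambda a: np.sin(np.pi * (a - 1))    # hypothetical weight rho^-

def u(r):
    Ip, _ = quad(lambda a: np.exp(-a * r) * rho_p(a), 1, 2)
    Im, _ = quad(lambda a: np.exp(-a * r) * rho_m(a), 1, 2)
    return A * np.exp(-kappa * r) * (1 + Ip + H * Im)

for r in (1.0, 3.0, 6.0, 10.0):
    # the bracket tends to 1, so u approaches the pure e^{-kappa r} tail
    print(f"r = {r:4.1f}   u = {u(r):.6e}   u / e^(-kr) = {u(r) / np.exp(-kappa * r):.4f}")
```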
Besides these forms, there are also the following forms for the “inner” part of the DWF <cit.>: {[ u_inner = Ae^ - κ r[ γ _1 e^ - ξ _1 r + γ _2 e^ - ξ _2 r],; w_inner = Ae^ - κ r[ γ _3 + γ _4 ]e^ - ξ _2 r;;]. {[ u_inner = Ae^ - κ r[ γ _1 e^ - ξ _1 r - γ _2 e^ - ξ _2 r],; w_inner = Ae^ - κ r[ γ _3 e^ - ξ _3 r - γ _4 e^ - ξ _2 r].;]. In <cit.> the authors approximated the coordinate-space wave functions by a sum of exponentials or Hankel functions. The deuteron S state can then be viewed as an extension of the known Hulthen wave function. The wave functions in coordinate space have the form [ u(r) = N( e^ - α r + ∑_j = 1^n C_j e^ - ε _j r),; w(r) = ρ N( α rh_2 (iα r) + ∑_j = 1^n C_j' ε _j' rh_2 (iε _j' r)),;] where h_2 is the spherical Hankel function, xh_2 (ix) = e^ - x[ 1 + 3 / x + 3 / x^2]; α = √(Mε) is given by the deuteron binding energy ε. The fitted pole positions and residues are denoted by ε _j, C_j. The coefficient N is the normalization of the wave function in terms of the deuteron effective range, N^2 = 2α/1 - αρ ( - ε , - ε ). The calculated values of the parameters were α = 0.2338 fm^ - 1; N = 0.8896 fm^ - 1 / 2; ρ = 0.0269. The deuteron wave function may be expanded <cit.> in the complete set of relative oscillator functions ϕ _nl (s = 1; j = 1; l = 0 or 2): ψ = ∑_[ n = 0; l; ]^∞α _nlϕ _nl , where <cit.> ϕ _nl (r_i ,b) = √(2Γ( n + l + 3/2)/b^3n!)r_i^l exp(- r_i^2 /2b^2)/b^lΓ( l + 3/2)F(- n| l + 3/2|r_i^2 /b^2).

5. Analytical forms of DWF in the years 1970-1999

Yamaguchi's separable tensor potential generates a deuteron wave function in momentum space; a Fourier transformation produces the wave function in coordinate space <cit.>: [ u(r) = e^ - α r - e^ - β r,; w(r) = η[ ( 1 + 3/α r + 3/α ^2r^2)e^ - α r + ( (α ^2 - γ ^2)(γ r + 1)/2α ^2 - γ ^2/α ^2 - 3γ/α ^2r - 3/α ^2r^2)e^ - γ r],;] where the asymptotic ratio of the D to S wave is η = lim_r →∞[ w(r)/u(r)] = α ^2(β ^2 - α ^2)t/(γ ^2 - α ^2)^2. The function w(r) is proportional to r^2 at the origin: lim_r → 0 w(r) = η (γ ^2 - α ^2)^2/8α ^2r^2. Using the functions u(r) and w(r) it is possible to find the central potential V_C(r) and the tensor potential V_T(r). For this purpose Burnap et al. solve the coupled equations for the radial DWF. As a result, the local potentials corresponding to Yamaguchi's form factors are written down as V_C = - ħ ^2(β ^2 - α ^2)/M[- wt(γ r + 1)/2ue^ - γ r + 1 - w / √(2)/ue^ - β r]( u - w/√(2) - w^2/u)^ - 1, V_T = - √(8)ħ ^2/M[ (γ ^2 - α ^2)^2/2α ^2η (γ r + 1)e^ - γ r - w(β ^2 - α ^2)/ue^ - β r]( u - w/√(2) - w^2/u)^ - 1. The parameters β, γ, t are defined in <cit.>; α = 0.2316 fm^ - 1. Humberston and Wallace offered a series of analytic approximations <cit.> to the deuteron wave function for the Hamada-Johnston potential. The solutions of the coupled equations for the radial components of the DWF must satisfy the boundary conditions [ u(x_0 ) = 0, u(x) ≈ e^ - κ r,; w(x_0 ) = 0, w(x) ≈ e^ - κ r( 1 + 3/κ r + 3/(κ r)^2),;] where x_0 = 0.343 fm is the hard-core radius. The equations for the radial components of the S- and D-state wave functions were then transformed to [ {d^2/dy^2 + 2/yd/dy - κ ^2/y^4 - A(y)}u̅(y) - B(y)w̅(y) = 0,; {d^2/dy^2 + 2/yd/dy - 6/y^2 - κ ^2/y^4 - C(y)}w̅(y) - B(y)u̅(y) = 0,;] where y = 1 / r; u̅(y) = u(r); [ A(y) = U_C (r) / y^4; B(y) = 2√(2) U_T (y) / y^4;; C(y) = [ U_C (r) - 2U_T (r) - 3U_LS (r) - 3U_LL (r)] / y^4.; ] Here the U_j(r) are the components of the nucleon-nucleon potential. Forms of analytic approximations to the solutions of the coupled equations were obtained for the modified and unmodified Hamada-Johnston potentials.
The Rayleigh-Ritz variational method was applied to the deuteron binding energy, with the trial function for the deuteron {[ u(r) = e^ - α r( 1 - e^ - δ (r - r_0 ))∑_i = 1^L c_i e^ - (i - 1)μ r = ∑_i = 1^L c_i ϕ _i,; w(r) = e^ - α r( 1 - e^ - ρ (r - r_0 ))( 1 + 3/α r + 3/α ^2r^2)∑_i = 1^N d_i e^ - (i - 1)μ r = ∑_i = 1^N c_M + iϕ _M + i .;]. Here ħ ^2α ^2 / M = - E_α and δ, ρ, c_i (i = 1,…,L), d_i (i = 1,…,N) are variational parameters. DWFs for the Reid soft-core potential are selected according to <cit.>:
(a) particular Haftel-Tabakin cases <cit.>: [ u(r) = C_0 e^ - α _0 r(1 - β _0 r),; w(r) = C_2 re^ - α _2 r(1 - β _2 r);;]
(b) "fixed-range" cases: g(r) = α (1 - p)p^a[ 1 - bp^c + (b - 1)p^d]; 0 ≤ r ≤ e, where p = 1 - r / e.
The appropriate parameters and properties of the unitary transformations are presented as UT8, 13, 18, 22, 23 for case (a) and UT101, 102, 103 for case (b). The resulting form of the separable potentials <cit.> is υ = ∑_n | υ _n ⟩λ _n ⟨υ _n | ; ⟨ l,p|υ _n ⟩ = ∑_m b_n,l;m u_l,m (p) ; where the DWF in momentum space is [ u(p) = 1/(p^2 + α _m^2 )^2,; w(p) = p^2/(p^2 + α _m^2 )^3.; ] The Fourier transforms of the DWF to coordinate space are [ u(r) = - √(π/8)exp ( - α _m r)/α _m ,; w(r) = - 1/8√(π/2) rexp ( - α _m r).; ] To determine the unitary pole approximations for a concrete potential model, the two-nucleon bound-state wave functions are calculated in momentum space in <cit.>. The partial-wave Schrödinger equation appropriate to the S and D states is written down as λ̄( d^2/dr^2 - 6/r^2δ _l2 - k_d^2 )u_l (r) = ∑_L = 0,2V_lL (r)u_L (r) , where λ̄ = ħ^2/M, k_d^2 = E_d / λ̄, and E_d is the deuteron binding energy. To solve the coupled equations for the components of the radial deuteron wave function u_l(r), the expressions [ u_l (r) = 0 for r < r_c ,; u_l (r) = - √(π/2)λ̄∑_j = 1^N α _l^j ϕ _l^j (r) for r ≥ r_c ,; ] are used, where r_c is the hard-core radius and the α _l^j are the expansion coefficients. The effect of the hard core is incorporated by the modification [ ϕ _0^j (r) = exp ( - k_d r) - η _0^j exp ( - a_j r),; ϕ _2^j (r) = 2k_d^2 A_5 / 2 (k_d r) - η _2^j [ 2a_j^2 A_5 / 2 (a_j r) - (k_d^2 - a_j^2 )a_j rA_3 / 2 (a_j r)],;] where [ A_5 / 2 (μ r) = ( 1 + 3/μ r + 3/(μ r)^2)e^ - μ r,; A_3 / 2 (μ r) = ( 1 + 1/μ r)e^ - μ r.; ] Here the a_j (j = 1,…,N) are predetermined ranges chosen between 0.7 and 20.0 fm. This approximation was applied to groups of potentials of different types: hard core (Reid hard core <cit.>, Hamada-Johnston <cit.>, Yale <cit.>), soft core (Reid soft core and Alternate Reid soft core <cit.>), super soft core (Tourreil-Sprung A, B and C <cit.>) and velocity dependent (Bryan-Scott, Bryan-Gersten, Stagat <cit.>, Riewe, and Green, Ueda-Green II). In <cit.> the Baker transformation is presented: ũ(r) = √(( dR/dr)) u(R(r)), where R = r + a + 2βln[ 1 + √(1 + ρexp ( - r / β ))/1 + √(1 + S)], a is the hard-core radius, and S is determined by the asymptotic condition lim_r →∞[ R(r) - r] = 0. In addition, <cit.> specifies exotic shapes of the DWF, UT101 <cit.>: two DWFs obtained from the RSC wave functions by a unitary transformation designed to lower the D-state probability, [ ũ(r) = C(r)u(r) + S(r)w(r),; w̃(r) = - S(r)u(r) + C(r)w(r),;] where S(r) = A_trtanh (r / γ )exp ( - (r - ρ ) / τ )/1 + exp ( - (r - ρ ) / τ ), C(r) = √(1 - S^2(r)) . The parameters chosen are A_tr = 0.4472; γ = 0.02 fm; τ = 0.02 fm; ρ = 0.8 or 1.9 fm.
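Because (ũ, w̃) is a pointwise rotation of (u, w), with mixing functions satisfying C² + S² = 1, the combination u² + w², and hence the overall normalization, is preserved exactly while probability is shifted between the S and D waves. A minimal check, using hypothetical stand-in profiles for the RSC u and w (which are tabulated numerically, not given in closed form):

```python
import numpy as np

A_tr, gamma, tau, rho = 0.4472, 0.02, 0.02, 0.8   # rho = 0.8 fm (1.9 fm is the alternative)

def S(r):
    return A_tr * np.tanh(r / gamma) * np.exp(-(r - rho) / tau) / (1 + np.exp(-(r - rho) / tau))

def C(r):
    return np.sqrt(1 - S(r)**2)

# Stand-in u, w profiles (placeholders, not the actual RSC wave functions)
r = np.linspace(0.05, 10, 400)
u = np.exp(-0.2316 * r) - np.exp(-1.5 * r)
w = 0.03 * r**2 * np.exp(-0.8 * r)

ut = C(r) * u + S(r) * w
wt = -S(r) * u + C(r) * w

# Pointwise rotation => u^2 + w^2 is preserved, hence the norm is too
print("max |(u~^2 + w~^2) - (u^2 + w^2)| =", np.max(np.abs(ut**2 + wt**2 - (u**2 + w**2))))
```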
According to <cit.>, the Hulthen wave function for the S-state DWF is u(r) = N( e^ - γ r - e^ - β r), β > > γ , where γ = √(Mε) = 0.2316 fm^ - 1; β is determined from the triplet effective-range parameter, with the value r_0 = 1.75 fm, approximately as β = 3 - γ r_0 + √(γ ^2r_0^2 - 10γ r_0 + 9)/2r_0 = 5.98γ . The normalization constant N is given in terms of the effective range as N^2 = 2γ/1 - γ r_0 = 0.783. The D-state wave function is chosen explicitly as w(r) = η N( 1 - e^ - τ r)^5e^ - γ r( 1 + 3/γ r + 3/γ ^2r^2), i.e. the asymptotic form multiplied by an interpolating factor. The formulas for the D-state percentage and for the quadrupole moment are, respectively, P_D = η ^2N^2∑_n = 1^4 [ a_n /(n - 1)!∑_q = 0^10( [ 10;q;])( - 1)^n - q(2γ + qτ )^n - 1ln (2γ + qτ )] + η ^2N^2a_0 ∑_q = 0^10( [ 10;q;])( - 1)^q/2γ + qτ and [ Q = η N^2/√(50)∑_n = 0^5 b_n ∑_q = 0^5 ( [ 5; q; ])( - 1)^qn![ 1/(qτ + 2γ )^n + 1 - 1/(qτ + γ + β )^n + 1] - η ^2N^2/20{∑_n = 0^2 c_n ∑_q = 0^10( [ 10;q;])( - 1)^qn!/(2γ + qτ )^n + 1 + ∑_n = 0^2 c_n ∑_q = 0^10( [ 10;q;])( - 1)^n - q/(n - 1)!ln (2γ + qτ )/(2γ + qτ )^1 - n},;] where a_n = ( 1,6/γ,15/γ ^2,18/γ ^3,9/γ ^4); b_n = ( 3/γ ^2,3/γ,1); c_n = ( 15/γ ^2,6/γ,1); d_n = ( 18/γ ^3,9/γ ^4). The calculated values of the parameters are: τ = 1.09 fm^ - 1; η = 0.025 for P = 7%, and τ = 0.83 fm^ - 1; η = 0.029 for P = 4%. In <cit.> the DWF is modelled on that of the Reid soft-core potential (RSCP) outside 1.5λ _π: ψ _L (r) = {[ ∑_i = 1^8 a_Li r^i - 1 , r < 1.5λ _π ,; ψ _L(RSCP) (r), r ≥ 1.5λ _π ,; ]. where λ _π is the pion Compton wavelength. In the radial wave functions five of the coefficients a_Li are determined by: 1) continuity of the DWF together with its first and second derivatives with the RSCP at 1.5λ _π; 2) u(0) = 0, w(0) = 0; 3) adjusting the D-state percentage (4.5-6.5%) and the overall normalization to 1. In Refs. <cit.> and <cit.> the electron-deuteron tensor polarization and the short-range behavior of the deuteron wave function are considered. Interactions of twelve classes varying in the core region are obtained using the form factor of the unitary transformation g(r) = {[ C(R - r)^α (1 - β r), r ≤ R,; 0, r > R,; ]. where R = 0.7 fm; α = 2.1. The constant C is determined by the normalization condition. For the choice α > 2 the transformed DWF and its first and second derivatives are continuous at R. The calculations are compared for the super-soft-core (SSC) potential <cit.>. The tensor polarization of the recoil deuterons in ed scattering is calculated as P_e = 2G_0 G_2 + G_2^2 / √(2)/G_0^2 + G_2^2 ; its values lie in the range 0.625-0.668. The Lomon-Feshbach, Holinde-Machleidt and four-component relativistic models were used to study elastic electron-deuteron scattering at high energy <cit.>. In coordinate space the expansion in Hulthen functions of different ranges is presented as u(r)/r = √(π/2)∑_i c_i exp ( - β _i r)/r . If the moments of the coefficients, M_n = ∑_i c_i β _i^n, vanish for the lower orders, then the reduced wave function u(r) goes like r^n at the origin.
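Reading this statement as "if the lower moments M_0, …, M_{n-1} vanish, then u(r) ~ r^n", it follows directly from the Taylor expansion u(r) = Σ_k (-r)^k M_k / k!. A small numerical illustration with hypothetical ranges β_i, choosing the c_i so that M_0 = M_1 = 0:

```python
import numpy as np

# Hulthen-type expansion u(r) = sum_i c_i exp(-beta_i r).
beta = np.array([1.0, 2.0, 3.0])             # hypothetical ranges
c3 = 1.0
# Solve M_0 = 0 and M_1 = 0 for c1, c2 given c3:
A = np.array([[1.0, 1.0], [beta[0], beta[1]]])
b = -c3 * np.array([1.0, beta[2]])
c = np.append(np.linalg.solve(A, b), c3)

u = lambda r: np.sum(c * np.exp(-beta * r))
for r in (0.1, 0.05, 0.025):
    print(f"r = {r:5.3f}   u/r^2 = {u(r) / r**2:+.6f}")   # tends to sum(c*beta^2)/2
print("sum(c*beta^2)/2 =", 0.5 * np.sum(c * beta**2))
```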
The normalized solutions of the Schrödinger equation are selected in <cit.> as [ u(r) = N[ u_1 (r) + η u_2 (r)],; w(r) = N[ w_1 (r) + η w_2 (r)].; ] The experimental values of the deuteron observables severely restrict the values of η. To place upper and lower bounds on η, Schwarz's inequality is used: U_2 W_2 ≥ X^2 + √(1/2) XW_2 + 1/8W_2^2 . The condition for the existence of a solution is Δ (R,η ) = Y^2 - 4X^2 - √(2) XY ≥ 0, where X = X(R,η ) = √(50) Q - ∫_R^∞r^2(uw - √(1/8) w^2)dr = V_2 - √(1/8) W_2 ; Y = Y(R,η ) = 4⟨r^2⟩ - ∫_R^∞r^2(u^2 + w^2)dr = U_2 + W_2 ; U_n = ∫_0^R r^nu^2dr ; V_n = ∫_0^R r^nuwdr ; W_n = ∫_0^R r^nw^2dr . The value p_D is determined by the condition p_D = ∫_0^∞w^2dr = W_0 + Z; p_D > Z + X^2(1 + sgnX)/2U_4 ; where Z = Z(R,η ) = ∫_R^∞w^2dr. In <cit.> DWFs from Yamaguchi-type form factors with 4% or 7% deuteron D-state probability were presented (designated YY4 and YY7). A new set T4D-1 (T4D-2) is also obtained, which has the rank-1 (rank-2) separable potential with the first (second) form factor of T4D. It should be noted that the most popular, quoted and used parametrizations of the DWF are the analytical forms offered by the Paris group. The known numerical values of the radial DWF in the coordinate representation for the Paris potential can be approximated by means of convenient decompositions <cit.> of the form {[ u( r ) = ∑_j = 1^N C_j exp(- m_j r),; w( r ) = ∑_j = 1^N D_j exp(- m_j r)[ 1 + 3/m_j r + 3/( m_j r)^2],; ]. where N = 13; m_j = β + (j - 1)m_0; β = √(ME_d ); m_0 = 0.9 fm^ - 1. M is the nucleon mass and E_d is the binding energy of the deuteron. The boundary conditions as r → 0 are u( r ) → r, w( r ) → r^3. The asymptotic behavior of the deuteron wave functions for large r →∞ is [ u(r) = A_S exp ( - β r),; w(r) = A_D exp ( - β r)[ 1 + 3/β r + 3/(β r)^2].;] The last coefficients of the analytical form were determined by the formulas {[ C_n = - ∑_j = 1^n - 1C_j ,; D_n - 2 = m_n - 2^2 /( m_n^2 - m_n - 2^2 )( m_n - 1^2 - m_n - 2^2 )[- m_n - 1^2 m_n^2 ∑_j = 1^n - 3D_j /m_j^2 + ( m_n - 1^2 + m_n^2 )∑_j = 1^n - 3D_j - ∑_j = 1^n - 3D_j m_j^2 ],; ]. taking into account the conditions ∑_j = 1^N C_j = 0; ∑_j = 1^N D_j = ∑_j = 1^N D_j m_j^2 = ∑_j = 1^N D_j /m_j^2 = 0. The accuracy of the parametrization is characterized by the values I_S = ( ∫_0^∞[ u(r) - u_approx (r)]^2dr)^1 / 2, I_D = ( ∫_0^∞[ w(r) - w_approx (r)]^2dr)^1 / 2.
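The role of these constraints is easy to see numerically: with Σ_j C_j = 0 enforced, the leading constant terms of the exponentials cancel and u(r) → r near the origin. A minimal sketch with random placeholder C_j (the genuine coefficients are tabulated in the Paris-group papers); the analogous three conditions on the D_j enforce w(r) → r³:

```python
import numpy as np

# Paris-type expansion: u(r) = sum_j C_j e^{-m_j r}, m_j = beta + (j-1) m_0
N, beta, m0 = 13, 0.2316, 0.9                # fm^-1
m = beta + np.arange(N) * m0

rng = np.random.default_rng(1)
C = rng.normal(size=N)                       # placeholder values, for illustration only
C[-1] = -C[:-1].sum()                        # enforce sum_j C_j = 0

def u(r):
    return np.sum(C * np.exp(-m * r))

for r in (0.01, 0.02, 0.04):
    print(f"r = {r:.2f} fm   u/r = {u(r) / r:+.5f}")   # u/r approaches a constant as r -> 0
```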
Model radial DWFs <cit.>, following the parametrization (<ref>) <cit.>, are constructed to facilitate the exploration of the dependence on the D-state percentage and on the small-, medium-, and large-distance parts of the DWF. The parametrization (<ref>) was also used to approximate the DWFs obtained for the following potentials: the (energy-dependent) full model and the (energy-independent) relativistic momentum-space OBEPQ <cit.>, the OBEP models A, B, C <cit.>, and OBEPR, OBEPR(A) and OBEPR(B) <cit.>, with N = 11. Theoretical values for the central and tensor components of the polarizability are presented in Ref. <cit.>. They are sums of bilinear combinations of integrals of the form I(J;L) = ∫r^3u_L (r)f_J (K,r)dr , where f_J and u_L are the radial wave functions of the P-wave continuum and of the deuteron, respectively. The presence of the r^3 factor strongly suggests that the long-range part of the ground-state DWF, [ u(r) = A_S e^ - γ r/r,; w(r) = η A_S e^ - γ r/r[ 1 + 3/γ r + 3/(γ r)^2],; ] will be of value in determining the I(J;L), and hence the calculated polarizability. The extent to which α and r are in fact determined by A_S and η is investigated further. The DWF <cit.> must belong to the part of the Hilbert space orthogonal to the trivial solution; therefore the orthogonalization is straightforward for the Paris wave functions u(r) and w(r): [ ũ(r) = u(r) - CΦ _0 (r)/√(1 - C^2),; w̃(r) = w(r)/√(1 - C^2).; ] Here b is the oscillator width parameter; the constant C equals the overlap ⟨ u | Φ _0 ⟩; Φ _0 (r) is the eigenfunction of the norm kernel calculated in the oscillator basis, Φ _0 (r) = [ 2/π( 3/b^2)^3]^1 / 4rexp{ - 3r^2/4b^2}. The modified DWF takes the form [ ũ(r) = Au(R)1/3sinαΦ _0 (r) - 1/5cosαΦ _2 (r)/1/3sinαΦ _0 (R) - 1/5cosαΦ _2 (R), r ≤ R;; ũ(r) = Au(r), r ≥ R;; w̃(r) = Aw(r),;] where R is a certain radius such that for r < R the wave function is determined by six-quark dynamics; Φ _0 (r) and Φ _2 (r) are the oscillator wave functions for the ground state and for the level with two excitation quanta. The relation between them is Φ _2 (r) = Φ _0 (r)√(3/2)( 1 - r^2/b^2). In <cit.> a method has been obtained which determines whether or not the long-range part of a two-body potential model is consistent with the measured deuteron properties, independently of the short-range behaviour. To determine the outer part of the deuteron wave function, two independent solutions of the coupled Schrödinger equations, ( [ u_1; w_1; ]) and ( [ u_2; w_2; ]), are constructed in the region r ≥ R. The asymptotic boundary conditions used are ( [ u_1; w_1; ]) →( [ e^ - x; η _0 xk_2 (x); ]); ( [ u_2; w_2; ]) →( [ 0; xk_2 (x);]), where x = α r, xk_2 (x) = e^ - x( 1 + 3 / x + 3 / x^2). The first solution corresponds to η = η _0, and for another value of η one takes the linear combination ( [ u; w; ])_η = ( [ u_1; w_1; ]) + (η - η __0)( [ u_2; w_2; ]). In <cit.> a fit to the electromagnetic form factors of the deuteron is specified on the basis of the nonrelativistic wave functions [ u(r) = N[ e^ - α r - ∑_i c_S^i e^ - β _S^i r],; w(r) = ρ N[ α rh_2 (iα r) - ∑_i ( β _D^i /α)^2c_D^i β _D^i rh_2 (iβ _D^i r)],; ] where xh_2 (ix) = [ 1 + 3 / x + 3 / x^2]exp ( - x). The asymptotics at r → 0 for the S and D states are: u(r) ~ r for ∑_i c_S^i = 1; u(r) ~ r^3 for {[ ∑_i β _S^i c_S^i = α ,; ∑_i ( β _S^i )^2c_S^i = α ^2 ;;]. and w(r) ~ r^3 for {[ ∑_i c_D^i = 1,; ∑_i ( β _D^i )^2c_D^i = α ^2,; ∑_i ( β _D^i )^4c_D^i = α ^4.;]. For separable potentials with and without tensor force, calculations of the deuteron form factors are presented <cit.>, expressed through the radial DWF in configuration space. The expressions of the Mehdi-Gupta parametrization for the radial DWF are {[ u(r) = A( e^ - α r - e^ - β r) + Bre^ - β r,; w(r) = C[ α ^2/3( e^ - α r - e^ - γ r) - γ (γ ^2 - α ^2)/6re^ - γ r + ( 1/r^2 + α/r)e^ - α r - ( 1/r^2 + γ/r + γ ^2 - α ^2/2)e^ - γ r],; ]. where C = 3√(2)π Nt/(γ ^2 - α ^2)^2. The coefficients A and B for shape 1 are A = √(2)π N/β ^2 - α ^2; B = 0; and for shape 2, A = √(2)π N/(β ^2 - α ^2)^2; B = - π N/√(2)β (β ^2 - α ^2). The two-body parameters are represented as the ratios β/α and γ/α. The D-state probability P_D is given by P_D = N^2π ^2t^2(5α + γ )/8γ (1 + γ )^5. The following parameterization of the DWF for the realistic superdeep local NN potential (Moscow) was written down as Gaussian expansions <cit.>: {[ u(r) = r∑_i = 1^N_S a_i exp ( - α _i r^2),; w(r) = r^3∑_i = 1^N_D b_i exp ( - β _i r^2),;]. where [ α _i = α _0 /41.47tg^7 / 2[ π( 2i - 1)/4N_S ],; β _i = β _0 /41.47tg^7 / 2[ π( 2i - 1)/4N_D ],;] α _0 = 31.9; β _0 = 164; N_S = N_D = 30.
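The tangent grid (tg denotes tan) spreads the Gaussian ranges over many orders of magnitude, which is what lets a fixed-size basis describe both the long-range tail and the short-range core. A minimal sketch that generates the grids from the quoted α_0, β_0, N_S, N_D:

```python
import numpy as np

# alpha_i = (alpha_0 / 41.47) * tan^{7/2}[ pi (2i - 1) / (4 N_S) ], likewise for beta_i
alpha0, beta0, NS, ND = 31.9, 164.0, 30, 30

def ranges(x0, n):
    i = np.arange(1, n + 1)
    return (x0 / 41.47) * np.tan(np.pi * (2 * i - 1) / (4 * n)) ** 3.5

a = ranges(alpha0, NS)   # S-wave Gaussian ranges
b = ranges(beta0, ND)    # D-wave Gaussian ranges
print(f"alpha_i spans {a.min():.3e} .. {a.max():.3e}")
print(f"beta_i  spans {b.min():.3e} .. {b.max():.3e}")
# Many soft Gaussians plus a few very hard ones cover both the tail and the core.
```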
In <cit.> the quark compound bag (QCB) and six-quark bag models are considered, and the values of P_QCB and P_6q predicted by the QCB model are examined. To illustrate the method, first consider a "toy model" of the S-wave deuteron without the NN interaction: u(r) = N{[ - γ _1 sh(κ r) + γ _2 sin (β r), r ≤ b,; exp ( - κ r), r ≥ b.;]. The calculated values were P_QCB = 0.9%; P_6q = 17%. The general expression for the deuteron wave function in the QCB model is written down as u_l (r) = N{[ b_1 u_l^(<ref>) (r) + b_2 u_l^(<ref>) (r), r ≤ b;; u_l^ext (r), r ≥ b,;]. where N is the normalization factor, u_l^ext are the DWFs derived from the external potential, and u_l^(<ref>), u_l^(<ref>) are the two linearly independent solutions of the Schrödinger equation in the inner region. The constants b_1 and b_2 are defined from the matching condition of the internal and external wave functions at r = b. Thus the limits 1.2 fm ≤ b ≤ 1.6 fm are established (P_QCB ≤ 1%). In <cit.> a more general case of <cit.> and <cit.> is considered by including additional terms as follows: {[ u(r) = A_S (1 - e^ - τ r)e^ - α r∑_i = 0^n C_i exp ( - α _i r),; w(r) = η A_S (1 - e^ - σ r)^5k_2 (α r)∑_i = 0^m D_i exp ( - α _i r),;]. where α = 0.2315370 fm^ - 1; τ = 5α; σ = 1.09 fm^ - 1; η = 0.025; k_2 (α r) is the spherical-Bessel-function term k_2 (α r) = ( 1 + 3/α r + 3/(α r)^2)e^ - α r. Also in <cit.> the simplest phenomenological realistic deuteron wave function given by <cit.> and <cit.> is calculated: {[ u(r) = A_S (1 - e^ - τ r)e^ - α r,; w(r) = η A_S (1 - e^ - σ r)^5e^ - α r( 1 + 3/α r + 3/(α r)^2).;]. Values are obtained for the parameters η, τ, σ when the indicated values of A_S and r_d are used as input. In <cit.> a quark compound bag (QCB) parameterization in r-space is presented. Details of this parameterization are given in Ref. <cit.>. In terms of the S and D waves (l = 0; 2, respectively) one has u_l (r) = N{[ b_1 u_l^(<ref>) (r) + b_2 u_l^(<ref>) (r), r ≤ b;; b_1 u_l^ext (r), r > b,;]. where N is the normalization factor; the u_ext(r) are the DWFs derived from the assumed external potential; u_ext(r) may be parameterized in terms of Yukawa functions [ u_0^ (r) = ∑_j = 1^m C_j exp ( - m_j r) ,; u_2^ (r) = ∑_j = 1^n D_j exp ( - m_j r)( 1 + 3/m_j r + 3/(m_j r)^2) .; ] In <cit.> the QCB model parameters for b = 1.2 and 1.35 fm, selected as representative solutions, are presented. In Ref. <cit.> the DWF in the Hulthen form ϕ _d (r) = √(αβ (α + β )/2π (α - β )^2)e^ - α r - e^ - β r/r, with α = 0.23 fm^ - 1; β = 1.61 fm^ - 1, is used in the calculations. It is needed to obtain the deuteron formation rate A = 3π ^3/r_0 vτ∫_0^∞r| ϕ _d (r)|^2exp(- r^2/4r_0^2 )erfi(ar)dr , a = vτ/2r_0 √(r_0^2 + v^2τ ^2); erfi(x) = 2/√(π)∫_0^x e^t^2dt . One can also recall the parameterization of the function obtained for the Moscow NN model <cit.> (N = 24): [ u( r ) = r∑_i = 1^N a_i exp(- α _i r^2),; w( r ) = r^3∑_i = 1^N b_i exp(- β _i r^2).;]

6. “Improved” analytical forms of DWF

Some papers quote the above table of values and coefficients for the parameterization (<ref>) <cit.> and the DWFs calculated from it. These are the papers <cit.>, <cit.>, <cit.>, in which there are obvious, and not insignificant, knots in the DWF near the origin! In addition, there is an obvious failure to satisfy the mandatory conditions on the sums of the coefficients, ∑_j = 1^N C_j = 0; ∑_j = 1^N D_j = 0.
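A check of this kind is mechanical and worth automating for any published coefficient table; a minimal sketch with hypothetical four-term numbers (real tables have N = 11 or 13 entries):

```python
import numpy as np

# The constraints sum C_j = 0 and sum D_j = 0 must hold for the exponential
# parameterization; otherwise u and w acquire spurious behavior near the origin.
def check_table(C, D, tol=1e-6):
    sC, sD = np.sum(C), np.sum(D)
    ok = abs(sC) < tol and abs(sD) < tol
    print(f"sum C_j = {sC:+.3e}, sum D_j = {sD:+.3e}  ->  {'OK' if ok else 'VIOLATED'}")
    return ok

C = np.array([1.25, -0.75, 0.30, -0.80])          # hypothetical illustration values
check_table(C - C.mean(), C - C.mean())           # shifted so the sums vanish: passes
check_table(C, C)                                 # raw values: violates the constraint
```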
The comparative Table 3 shows the results of the summation of the coefficients of these works. The coefficients resulting from my numerical calculations are shown in Table 4. Fig. 2 shows the corresponding DWFs, in which the knots are visible.

Table 3. The results of the summation of the coefficients
Table 4. Coefficients D_i for OBEPC and ψ _LS^6B
Fig. 2. “Improved” DWF for OBEPC and ψ _LS^6B

So, the coefficients for the relevant parameterizations of the DWF are calculated here more accurately.

7. New analytical forms of DWF

In the 2000s new analytical forms of the deuteron wave function appeared. Besides the mentioned parametrizations, in the literature there is one more analytical form <cit.> for the DWFs: {[ u(r) = ∑_i = 1^N A_i exp ( - a_i r^2),; w(r) = r^2∑_i = 1^N B_i exp ( - b_i r^2).; ]. This parametrization was used <cit.> for the Nijmegen group of potentials (NijmI, NijmII, Nijm93 and Reid93), with N = 13. To describe the D state of the deuteron and the correct asymptotic behavior, the nonrelativistic deuteron wave function is taken as <cit.>: [ u(r) = N/√(4π)∑_k = 1^n_u C_k exp ( - α _k r) ,; w(r) = N/√(4π)ρ∑_k = 1^n_w D_k exp ( - β _k r)( 1 + 3/β _k r + 3/(β _k r)^2),; ] N = √(∑_k,j = 1^n_u C_k C_j 1/α _k + α _j + ρ ^2∑_k,j = 1^n_w D_k D_j 1/β _k + β _j ) , where α _i, β _i, C_i, D_i, N, ρ are the real model parameters; n_u = n_w = 3. The form of the asymptotics in the limit r → 0 was assumed as u(r) → r^2; w(r) → r^3. The set of parameters has to meet the conditions ∑_k C_k = 0 ; ∑_k C_k α _k = 0 ; ∑_k D_k = 0 ; ∑_k D_k 1/β _k^2 = 0 . In the limit r →∞ the deuteron wave functions must have the known asymptotic form [ u(r) → e^ - α r,; w(r) → e^ - α r( 1 + 3/α r + 3/(α r)^2),;] where α = √(Mε) / ħ = 0.2316 fm^ - 1; ε = 2.2245 MeV is the deuteron binding energy. The application of these conditions to the deuteron wave functions in the forms (<ref>) then leads to relations for the model parameters, in particular α _1 = β _1 = α. The charge and quadrupole deuteron form factors and the structure function are defined by the values of the model parameters. Using this wave function, the differential cross section of elastic deuteron-nucleus scattering has been calculated <cit.>. The analytical form of the DWF and its asymptotics, the parameters of which are determined by the power of the asymptotic decrease of the deuteron form factors, in particular by the prediction of QCD, is obtained as <cit.>: u(r) = 32/5√(π)Γ( 1/4)∑_j C_j m_j ( r/2m_j )^7 / 4K_3/4 (rm_j ) , [ u(r → 0) = r8Γ( 3/4)/5√(π)Γ( 1/4)∑_j C_j /m_j^3 / 2 ,; u(r →∞ ) = r^5 / 432/5Γ( 1/4)2^9 / 4∑_j C_j /m_j^5 / 4 e^ - rm_j ,;] or <cit.>: u(r) = 128/231√(π)Γ( 3/4)∑_j C_j m_j ( r/2m_j )^13 / 4K_9/4 (rm_j ) , [ u(r → 0) = r10Γ( 1/4)/231√(π)Γ( 3/4)∑_j C_j /m_j^9 / 2 ,; u(r →∞ ) = r^11 / 42^13 / 4/231Γ( 3/4)C_1 /m_1^11 / 4e^ - α r,;] where Γ(x), K_ν(x) are the Euler and McDonald functions; α = 15/4. If, in addition to the condition ∑_j = 1^N C_j = 0 for the S-wave function, the condition ∑_j = 1^N C_j m_j^2 = 0 is imposed <cit.>, then in the vicinity of zero the wave function has the form u_0 (r) = r + ar^3; u_0''(0) = 0.
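The quoted r → 0 limit of the K_{3/4} form can be confirmed numerically, since the McDonald function is available in SciPy; in this sketch the C_j and m_j are placeholders (the excerpt does not tabulate them):

```python
import numpy as np
from scipy.special import kv, gamma as G

# u(r) = (32 / (5 sqrt(pi) Gamma(1/4))) sum_j C_j m_j (r / 2 m_j)^{7/4} K_{3/4}(r m_j),
# with the quoted limit u ~ r (8 Gamma(3/4) / (5 sqrt(pi) Gamma(1/4))) sum_j C_j / m_j^{3/2}
C = np.array([1.0, -0.4])        # placeholder coefficients
m = np.array([0.8, 2.0])         # placeholder masses, fm^-1

pref = 32 / (5 * np.sqrt(np.pi) * G(0.25))
u = lambda r: pref * np.sum(C * m * (r / (2 * m))**1.75 * kv(0.75, r * m))

slope = 8 * G(0.75) / (5 * np.sqrt(np.pi) * G(0.25)) * np.sum(C / m**1.5)
for r in (1e-2, 1e-3, 1e-4):
    print(f"r = {r:7.0e}   u/r = {u(r) / r:.6f}   (limit {slope:.6f})")
```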
In Ref. <cit.> the pion electromagnetic structure is investigated without an asymptotic decomposition. The following wave function, for Coulomb interaction at small distances and linear confinement, was used: u(r) = N_T exp(- α r^3/2 - β r), where α = 2/3√(2aM); β = bM; a and b are the parameters of the linear and Coulomb parts of the potential, respectively. The paper <cit.> contains a description of spin-dependent observables in elastic proton-deuteron scattering on the basis of a generalized diffraction model, which requires a parameterization of the DWFs in coordinate space. To parameterize the DWFs under consideration, sums of Gaussian functions are employed, taking into account the behavior of the wave functions at r = 0: [ u(r) = r∑_j = 1^m C_0jexp ( - A_0j r^2) ,; w(r) = r^3∑_j = 1^m C_2jexp ( - A_2j r^2) ,; ] where m = 5. The functions are fitted to the numerical values of the CD-Bonn and dressed-dibaryon-model (DBM) functions in the interval 0-20 fm with a step of 0.1 fm. In <cit.> the results of calculations of the deuteron quadrupole moment Q are given, using experimental phase shifts from the partial-wave analyses of GWU (George Washington University) <cit.> and Nijmegen <cit.>. The deuteron parameters (the quadrupole moment Q, the asymptotic D/S ratio η and the asymptotic normalization constant A_S) and the correlations between them for the group of potentials are also studied. This dependence is represented in the form Q/ η = a + bA_S^2, where a = 3.92464 fm^2; b = 8.71829 fm^3. The influence of the D-state component of the DWF <cit.> on the application of the Trojan horse method was shown in <cit.>. Parametrization formulas of the form <cit.> are applied as approximations of the DWF for the charge-dependent Bonn potential (CD-Bonn) <cit.>, for the model FSS2 with the Coulomb exchange kernel <cit.>, calculated in three different schemes (isospin basis, and particle basis with Coulomb off or on), and for the fss2 baryon-baryon interaction <cit.>, with N = 11, and also for the MT model <cit.> with N_S = 16; N_D = 12. The Dubovichenko parametrization <cit.> is improved in <cit.>, with the value of χ ^2 minimized down to 10^ - 4. Using the deuteron wave functions in the coordinate and momentum representations, the component of the tensor of sensitivity to deuteron polarization T_20 <cit.>, the polarization transfer K_0, the tensor analyzing power A_yy and the tensor-tensor polarization transfer K_y <cit.> are calculated. The obtained results are compared with the published experimental and theoretical ones. For the deuteron wave function in the configuration representation for the Argonne v18 potential, the numerical coefficients of the analytical forms <cit.> {[ u(r) = ∑_i = 1^20A_i exp ( - a_i r^3),; w(r) = r^2∑_i = 1^20B_i exp ( - b_i r^3),; ]. have been calculated. The coefficients of the four approximating dependences for the numerical values of the DWFs for the four realistic phenomenological potentials of the Nijmegen group have also been numerically calculated. The analytical forms are chosen as the product of a power function r^n and a sum of exponential terms <cit.>: {[ u(r) = r∑_i = 1^N A_i exp ( - a_i r^2),; w(r) = r∑_i = 1^N B_i exp ( - b_i r^2),; ]. {[ u(r) = r^2∑_i = 1^N A_i exp ( - a_i r^3),; w(r) = r^2∑_i = 1^N B_i exp ( - b_i r^3).; ]. The behavior of the value χ ^2 as a function of the number of expansion terms N_i has been studied. Taking into account the minimum values of χ ^2 for these forms, we have built DWFs in coordinate space which do not contain superfluous knots. The calculated parameters of the deuteron are in good agreement with theoretical and experimental results. For the DWFs in coordinate and momentum space the following polarization characteristics are calculated: the tensor polarization <cit.> (the values t_20(p), t_21(p), t_22(p)) in the momentum range 0-7 fm^-1. The values of t_20(p) for the Nijmegen group of potentials are in good agreement with literature results for other nucleon-nucleon potential models and with experimental data. The results for the deuteron tensor polarization t_ij(p) give some information about the electromagnetic structure of the deuteron.
When the tensor analyzing power is known, it is also possible to calculate the differential cross section of double scattering. To solve the system of coupled Schrödinger equations that describe the radial DWFs u and w, {[ u” - α ^2u = f(r),; w” - ( α ^2 + 6/r^2)w = g(r),;]. parameterizations were proposed back in 1955 <cit.>: {[ f(r) = ∑_n = 0^∞c_n ψ _1n (r) ,; g(r) = ∑_n = 0^∞d_n ψ _1n (r) .; ]. They can be generalized for the DWF approximation to analytical forms in terms of Laguerre functions <cit.>: {[ u(r) = ∑_n = 0^11A_n ψ _3n (r),; w(r) = ∑_n = 0^11B_n ψ _3n (r),; ]. where the ψ _3n (r) are Laguerre functions (n = 0,1,2,3,…): ψ _3n (r) = 2α√(2α)/n!√((n + 1)(n + 2))exp (α r)/rd^n/dr^n( r^n + 2exp ( - 2α r)),ψ _30 = √(α)exp ( - α r)( 2α r),ψ _31 = 2√(α/3)exp ( - α r)( 3α r - 2α ^2r^2),ψ _32 = 2√(2α/3)exp ( - α r)( 3α r - 4α ^2r^2 + α ^3r^3),ψ _33 = 2√(10α)exp ( - α r)( α r - 2α ^2r^2 + α ^3r^3 - 2/15α ^4r^4),ψ _34 = √(5α/3)exp ( - α r)( 6α r - 16α ^2r^2 + 12α ^3r^3 - 16/5α ^4r^4 + 4/15α ^5r^5),ψ _35 = 2√(7α/3)exp ( - α r)( 3α r - 10α ^2r^2 + 10α ^3r^3 - 4α ^4r^4 + 2/3α ^5r^5 - 4/105α ^6r^6). The coefficients of the analytical forms in terms of Laguerre functions for the deuteron wave function in coordinate space for the NijmI, NijmII, Nijm93, Reid93 and Argonne v18 potentials have been numerically calculated in <cit.>. Near the origin there are some small oscillations of the DWFs, but despite this the calculated static parameters coincide well with the original values. The parameterizations <cit.> and <cit.> can be generalized for the DWF approximation to the analytical forms {[ u(r) = r^A∑_i = 1^N A_i exp ( - a_i r^3),; w(r) = r^B∑_i = 1^N B_i exp ( - b_i r^3).; ]. For N = 11, a search for the exponent of the power function r^n appearing as a factor before the sums of exponential terms of the analytical form (<ref>) has been carried out. The best values turned out to be n = 1.47 and n = 1.01 for u(r) and w(r), respectively. Hence the factors before the sums in (<ref>) can be chosen as r^3 / 2 and r^1 <cit.>: {[ u(r) = r^3 / 2∑_i = 1^N A_i exp ( - a_i r^3),; w(r) = r∑_i = 1^N B_i exp ( - b_i r^3).; ]. In these cumbersome and time-consuming calculations and minimizations of χ^2 (down to values smaller than 10^-7) it was necessary to approximate the numerical values of the DWFs, whose arrays comprised 8394 values in the interval r = 0-25 fm for the potentials NijmI, NijmII, Nijm93 and Reid93 <cit.>, and 15002 values in the interval r = 0-15 fm for the potential Argonne v18 <cit.>. The accuracy of the parametrization (<ref>) is characterized by χ ^2 = 1/n - p∑_i = 1^N ( y_i - f(x_i ;a_1 ,a_2 ,...,a_p ))^2 , where n is the number of points in the array y_i of the numerical values of the DWF in coordinate space; f is the approximating function u (or w) according to formulas (<ref>); a_1, a_2,…, a_p are the parameters; and p is the number of parameters (the coefficients in the sums of formulas (<ref>)). Hence χ^2 is determined not only by the shape of the approximating function f, but also by the number of selected parameters. The approximation can be made over the whole interval, or the interval can be divided into a few distinct regions: around the origin, at the maximum, and along the descending part of the function. But this complicates further generalization of the form of the wave function. The coefficients and the DWFs (<ref>) for the NijmI, NijmII, Nijm93, Reid93 and Argonne v18 potentials are given in <cit.>.
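As an illustration of this fitting procedure, the sketch below fits the r^{3/2} form to a stand-in data set on a 0-25 fm grid and evaluates χ² as defined above. For brevity the nonlinear ranges a_i are held fixed (hypothetical values), so the amplitudes follow from linear least squares; the actual calculations optimize the a_i as well:

```python
import numpy as np

# Fit u(r) = r^{3/2} sum_i A_i exp(-a_i r^3) to tabulated DWF values.
r = np.arange(0.1, 25.0, 0.1)                             # grid mimicking the 0-25 fm tables
y = 0.88 * np.exp(-0.2316 * r) * (1 - np.exp(-1.5 * r))   # stand-in "numerical DWF" values

a = np.geomspace(1e-4, 10.0, 8)                           # fixed trial ranges (hypothetical)
M = r[:, None]**1.5 * np.exp(-a[None, :] * r[:, None]**3) # design matrix, linear in A_i
A, *_ = np.linalg.lstsq(M, y, rcond=None)

resid = y - M @ A
chi2 = np.sum(resid**2) / (len(r) - len(a))               # chi^2 = (1/(n-p)) sum (y_i - f)^2
print(f"chi^2 = {chi2:.3e} with N = {len(a)} terms")
```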
A detailed comparison of the obtained values of t_20(p) (at the scattering angle θ = 70^0) for these potentials with the up-to-date experimental data of the JLAB t20 <cit.> and BLAST <cit.> collaborations has been carried out. There is good agreement for the momenta p = 1-4 fm^-1. If we consider the normalization ∫(u^2 + w^2)dr = 1 for the DWFs (<ref>), we can write this condition through the corresponding coefficients as ∑_i = 1^N ( 2^2 / 3Γ[ 4/3]A_i^2 /12a_i^4 / 3 + B_i^2 /6b_i ) = 1. In this paper the parameterization (<ref>) has been used and the number of calculated coefficients has been minimized. The dependence of χ^2 on the number of expansion terms N is given in Tables 5 and 6, separately for the functions u(r) and w(r). The reduction of χ^2 for u(r) with increasing N (potential Reid93) is clearly shown in Fig. 3. The coefficients of the new analytical forms for the DWF in coordinate space for the NijmI, NijmII, Nijm93, Reid93 and Argonne v18 potentials have been numerically calculated (Tables 7-11). The obtained wave functions (Figs. 4 and 5) do not contain any superfluous knots. Based on the known DWFs (<ref>) and their coefficients (Tables 7-11) one can calculate the deuteron properties (Table 12): the deuteron radius r_m, the quadrupole moment Q_d, the D-state probability P_D and the magnetic moment μ_d. They are in good agreement with the theoretical (Table 1) and experimental (Table 2) data.

Fig. 3. χ^2 for u(r) (potential Reid93)
Fig. 4. Deuteron wave function u(r)
Fig. 5. Deuteron wave function w(r)
Table 5. Values of χ^2 for u(r)
Table 6. Values of χ^2 for w(r)
Table 7. Coefficients A_i, a_i, B_i, b_i (NijmI)
Table 8. Coefficients A_i, a_i, B_i, b_i (NijmII)
Table 9. Coefficients A_i, a_i, B_i, b_i (Nijm93)
Table 10. Coefficients A_i, a_i, B_i, b_i (Reid93)
Table 11. Coefficients A_i, a_i, B_i, b_i (Av18)
Table 12. Deuteron properties

8. Conclusions

The static properties of the deuteron (E_d, r_m, Q_d, P_D, η, A_S), obtained from DWFs for potential models, have been chronologically systematized. The presence or absence of knots near the origin of coordinates for the radial DWF has been shown. The forms, the methods of obtaining, and the asymptotic behaviors of analytic forms for DWFs in coordinate space have been analyzed. The parameterization in the form (<ref>) has been used and the number of expansion coefficients has been minimized. The dependence of χ^2 on the number of expansion terms N of the parameterization (<ref>) is shown separately for the functions u(r) and w(r). The optimum is N = 7-10. The resulting wave functions do not contain any extra knots. Calculations have been done for the realistic phenomenological potentials NijmI, NijmII, Nijm93, Reid93 and Argonne v18. What is more, the analytical forms of the DWF by such authors as Certov, Mathelitsch, Moravcsik and Machleidt have been "improved". The resulting DWFs for the group of potential models can be applied to calculate the polarization characteristics of the deuteron (the tensor polarization t_20, the component of the tensor of sensitivity to deuteron polarization T_20, the polarization transfer K_0, the tensor analyzing power A_yy, etc. <cit.>). The results will allow the deuteron electromagnetic structure, its form factors and the differential cross section of double scattering to be studied in more detail in the future.

References

Buck1979 W.W. Buck, F. Gross, Phys. Rev. D 20, 2361 (1979).Machleidt2001 R. Machleidt, Phys. Rev. C. 63, 024001 (2001).Reid1968 Jr.R.V. Reid, Ann. Phys. (NY) 50, 411 (1968).Kukulin1998 V.I. Kukulin, V.N. Pomerantsev, A. Faessler et al., Phys.
Rev. C 57, 535 (1998).Arriola2007 E.R. Arriola, M.P. Valderrama, Eur. Phys. J. A 31, 549 (2007).Haysak1I. Haysak and V. Zhaba, Visnyk Lviv Univ. Ser. Phys. 44, 8 (2009).Haysak2 I.I. Haysak and V.I. Zhaba, Uzhhorod Univ. Scien. Herald. Ser. Phys. 36, 100 (2014).Bokhinyuk V.S. Bokhinyuk, V.I. Zhaba, O.M. Parlag, Uzhhorod Univ. Scien. Herald. Ser. Phys. 31, 111 (2012).Fujiwara2001 Y. Fujiwara, T. Fujita, M. Kohno et al., Phys. Rev. C 65, 014002 (2001).Garcon2001 M. Garcon, J.W. van Orden, Advanc. Nucl. Phys. 26, 293 (2001).Veerasamy2011 S. Veerasamy and W.N. Polyzou, Phys. Rev. C 84, 034003 (2011).Fukukawa2015 K. Fukukawa, M. Baldo, G.F. Burgio et al., Phys. Rev. C 92, 065802 (2015).Loiseau1987 B. Loiseau, L. Mathelitsch, W. Plessas, Nuovo Cimento A 97, 77 (1987).Gross2010 F. Gross, A. Stadler, Phys. Rev C 82, 034004 (2010).stoks1994 V.G.J. Stoks, R.A.M. Klomp, C.P.F. Terheggen, J.J. de Swart, Phys. Rev. C 49, 2950 (1994).Swart1995 J.J. de Swart, R.A.M.M. Klomp, M.C.M. Rentmeester, Th.A. Rijken, Few-Body Syst. Suppl. 8, 438 (1995).Wiringa1995 R.B. Wiringa, V.G.J. Stoks, R. Schiavilla, Phys. Rev. C 51, 38 (1995).Lacombe1980 M. Lacombe, B. Loiseau, J.M. Richard et al., Phys. Rev. C 21, 861 (1980).Epelbaum2005 E. Epelbaum, W. Glockle, U.-G. Meiβner, Nucl. Phys. A 747, 362 (2005).downum2010 C. Downum, J.R. Stone, T. Barnes et al., AIP Conf. Proc. 1257, 538 (2010).Ladygin1997 V.P. Ladygin and N.B. Ladygina, J. Phys. G: Nucl. Part. Phys. 23, 847 (1997).Blatt1958 J.M. Blatt, V.F. Weisskopf, Theoretical nuclear physics (Wiley, New York, 1958).Brown1979 G.E. Brown, A.D. Jackson, The nucleon-nucleon interaction (North-Holland, Amsterdam, 1976).naghdi2014 M. Naghdi, Phys. Part. Nucl. Lett. 11, 410 (2014).Bethe1940 H.A. Bethe, Phys. Rev. 57, 390 (1940).Rarita1941 W. Rarita, J. Schwinger, Phys. Rev. 59, 436 (1941).Matsumoto1954 M. Matsumoto, W. Watari, Prog. Theor. Phys. 12, 503 (1954).Cap1955 F. Cap, W. Gröbner, Nuovo Cimento 1, 1211 (1955).Gartenhaus1955 S. Gartenhaus, Phys. Rev. 100, 900 (1955).Iwadare1956 J. Iwadare, S. Otsuki, R. Tamagaki, W. Watari, Prog. Theor. Phys. 16, 455 (1956).noyes1956 H.P. Noyes, S.P. Pandya, Phys. Rev. 102, 269 (1956).Moravcsik1958 M.J. Moravcsik, Nucl. Phys. 7, 113 (1958).laurikainen1959 K.V. Laurikainen, O. Varho, Nucl. Phys. 12, 606 (1959).Hamada1960 T. Hamada, Prog. Theor. Phys. 24, 126 (1960).Matsumoto1960 M. Matsumoto, Prog. Theor. Phys. 23, 597 (1960).Hamada1961 T. Hamada, Prog. Theor. Phys. 25, 247 (1961).Hamada1962 T. Hamada, I.D. Johnston, Nucl. Phys. 34, 382 (1962).Bialkowski1963 G. Bialkowski, Nuovo Cimento 29, 201 (1963).Kottler1964 H. Kottler, K.L. Kowalski, Nucl. Phys. 53, 334 (1964).Tabakin1964 F. Tabakin, Ann. Phys. (NY) 30, 51 (1964).McGee1966 I.J. McGee, Phys. Rev. 151, 772 (1966).Eikemeier1966 H. Eikemeier, H.H. Hackenbroich, Z. Physik 195, 412 (1966).elliott1968 J.P. Elliott, A.D. Jackson, Nucl. Phys. A 121, 279 (1968).nestor1968 C.W. Nestor Jr, K.T.R. Davies, S.J. Krieger, M. Baranger, Nucl. Phys. A 113, 14 (1968).erkelenz1969 K. Erkelenz, K. Holinde, K. Bleuler, Nucl. Phys. A 139, 308 (1969).Humberston1970 J.W. Humberston, J.B.G. Wallace, Nucl. Phys. A 141, 362 (1970).stagat1971 R.W. Stagat, F. Rieme, A.E.S. Green, Phys. Rev. C 3, 552 (1971).Holinde1972 K. Holinde, K. Erkelenz, R. Alzetta, Nucl. Phys. A 194, 161 (1972).tourreil1973 R. de Tourreil, D.W.L. Sprung, Nucl. Phys. A 201, 193 (1973).Vary1973 J.P. Vary, Phys. Rev. C 7, 521 (1973).arenhovel1974 H. Arenhovel, H.G. Miller, Z. Physik 266, 13 (1974).fabian1974 W. Fabian, H. Arenhovel, H.G. 
Miller, Z. Physik 271, 93 (1974).pieper1974 S.C. Pieper, Phys. Rev. C 9, 883 (1974).afnan1975 I.R. Afnan, J.M. Read, Phys. Rev. C 12, 293 (1975).Coester1975 F. Coester, A. Ostebeet, Phys. Rev. C 11, 1836 (1975).Holinde1975 K. Holinde, R. Machleidt, Nucl. Phys. A 247, 495 (1975).jackson1975 A.D. Jackson, D.O. Riska, B. Verwest, Nucl. Phys. A 249, 397 (1975).nagels1975 M.M. Nagels, T.A. Rijken, J.J. de Swart, Phys. Rev. D 12, 744 (1975).Obinata1975 T. Obinata, M. Wada, Prog. Theor. Phys. 53, 732 (1975).tourreil1975 R. de Tourreil, B. Rouben, D.W.L. Sprung, Nucl. Phys. A 242, 445 (1975).Holinde19761 K. Holinde, R. Machleidt, Nucl. Phys. A 256, 479 (1976).Holinde19762 K. Holinde, R. Machleidt, Nucl. Phys. A 256, 497 (1976).weber1976 H.J. Weber, Nucl. Phys. A 264, 365 (1976).Weiss1976 J. Weiss, Czech. J. Phys. B 26, 603 (1976).Adler1977 R.J. Adler, T.K. Das, A.F. Filho, Phys. Rev. C 16, 1231 (1977).mcgurk1977 N.J. McGurk, H. Fiedeldey, Nucl. Phys. A 281, 310 (1977).Allen1978 L.J. Allen, H. Fiedeldey, Few Body Syst. Nucl. 82, 57 (1978).Mathelitsch1978 L. Mathelitsch, H. F. K. Zingl, Nuovo Cimento A 44, 81 (1978).nagels1978 M.M. Nagels, T.A. Rijken, J.J. de Swart, Phys. Rev. D 17, 768 (1978).allen1979 L.J. Allen, H. Fiedeldey, Phys. Rev. C 19, 641 (1979).nagels1979 M.M. Nagels, T.A. Rijken, J.J. de Swart, Phys. Rev. D 20, 1633 (1979).Arnold1980 R.G. Arnold, C.E. Carlson, F. Gross, Phys. Rev. C 21, 1426 (1980).lamot1980 G.H. Lamot, N. Giraud, C. Fayard, Nuovo Cimento A 57, 445 (1980).Koike1981 Y. Koike, Y. Taniguchi, M. Sawada, J. Sanada, Prog. Theor. Phys. 66, 1899 (1981).lagaris1981 I.E. Lagaris, V.R. Pandharipande, Nucl. Phys. A 359, 331 (1981).haidenbauer1984 J. Haidenbauer, W. Plessas, Phys. Rev. C 30, 1822 (1984).kukulin1984 V.I. Kukulin, V.N. Pomerantsev, V.M. Krasnopol'sky, P.B. Sazonov, Phys. Lett. B 135, 20 (1984,).Mehdi1984 S.S. Mehdi, V.K. Gupta, Pramana 22, 497 (1984).Wiringa1984 R.B. Wiringa, R.A. Smith, T.L. Ainsworth, Phys. Rev. C 29, 1207 (1984).Krasnopolsky1985 V.M. Krasnopol'sky, V.I. Kukulin, V.N. Pomerantsev, P.B. Sazonov, Phys. Lett. B 165, 7 (1985).haidenbauer1986 J. Haidenbauer, Y. Koike, W. Plessas, Phys. Rev. C 33, 439 (1986).Kalashnikova1986 Yu.S. Kalashnikova, I.M. Narodetskii, A.I. Veselov, Z. Phys. A 323, 205 (1986).kermode1986 M.W. Kermode, S.G. Cooper, S. Klarsfeld, Phys. Lett. B 174, 357 (1986).beyer1987 M. Beyer, H.J. Weber, Phys. Rev. C 35, 14 (1987).Certov1987 A. Certov, L. Mathelitsch, M. J. Moravcsik, Phys. Rev. C 36, 2040 (1987).ito1987 H. Ito, A. Faessler, Nucl. Phys. A 470, 626 (1987).Machleidt1987 R. Machleidt, K. Holinde, Ch. Elster, Phys. Rep. 149, 1 (1987).Righi1987 S. Righi, M. Rosa-Clot, Z. Phys. A 326, 163 (1987).mustafa1988 M.M. Mustafa, E.S. Zahran, Phys. Rev. C 38, 2416 (1988).oteo1988 J.A. Oteo, Can. J. Phys. 66, 478 (1988).Machleidt1989 R. Machleidt, Adv. Nucl. Phys. 19, 189 (1989).dijk1989 H. Dijk, B.L.G. Bakker, Nucl. Phys. A 494, 438 (1989).buchmann1990 A. Buchmann, Y. Yamauchi, A. Faessler, Prog. Part. Nucl. Phys. 24, 333 (1990).grach1990 I.L. Grach, Yu.S. Kalashnikova, I.M. Narodetskii, J. Phys. G 16, 63 (1990).Minelli1991 T.A. Minelli, A. Pascolini, C. Villi, Nuovo Cimento A 104, 1589 (1991).haidenbauer1992 J. Haidenbauer, K. Holinde, M. B. Johnson, Phys. Rev. C 45, 2055 (1992).kukulin1992 V.I. Kukulin, V.N. Pomemntsev, Prog. Theor. Phys. 88, 159 (1992).mustafa1993 M.M. Mustafa, Phys. Rev. C 47, 473 (1993).haidenbauer1993 J. Haidenbauer, K. Holinde, M.B. Johnson, Phys. Rev. C 48, 2190 (1993).Sprung1994 D.W.L. Sprung, W. van Dijk, E. 
Wang et al., Phys. Rev. C 49, 2942 (1994).Kohlhoff1994 H. Kohlhoff, H.V. von Geramb, Quant. Invers. Theor. Applic. 427, 314 (1994).valcarce1994 A. Valcarce, A. Buchmann, F. Fernandez, A. Faessler, Phys. Rev. C 50, 2246 (1994).Araujo1995 C.F. de Araujo Jr., S.K. Adhikari, L. Tomio, J. Comput. Phys. 118, 200 (1995).Levchuk1995 M.I. Levchuk, Few-Body Syst. 19, 77 (1995).Doleschall1996 P. Doleschall, Nucl. Phys. A 602, 60 (1996).forest1996 J.L. Forest,V.R. Pandharipande, S.C. Pieper et al., Phys. Rev. C 54, 646 (1996).Fujiwara1996 Y. Fujiwara, C. Nakamoto, Y. Suzuki, Phys. Rev. C 54, 2180 (1996).Ordonez1996 C. Ordóñez, L. Ray, U. van Kolck, Phys. Rev. C 53, 2086 (1996).Jade1998 L. Jade, Phys. Rev. C 58, 96 (1998).Gridnev1999 K.A. Gridnev, V.B. Soubbotin, V.B. Stepukov et al., Eur. Phys. J. A 6, 21 (1999).epelbaum2000 E. Epelbaum, W. Glockle, Ulf-G. Meiβner, Nucl. Phys. A 671, 295 (2000).Dubovichenko20001 S.B. Dubovichenko, I. I. Strakovsky, Phys. Atom. Nucl. 63, 582 (2000).Zaitsev2001 S.A. Zaitsev, E.I. Kramar, J. Phys. G 27, 2037 (2001).Entem2001 D.R. Entem, R. Machleidt, Proceedings of the 7th International Spring Seminar on Nuclear Physics, Maiori, Italy, 2001, p. 113.Amghar2003 A. Amghar, B. Desplanques, Nucl. Phys. A 714, 502 (2003).Kaskulov2003 M.M. Kaskulov, P. Grabmayr, Intern. Jour. Mod. Phys. E 12, 449 (2003).Entem2003 D.R. Entem, R. Machleidt, Phys. Rev. C 68, 041001 (2003).Dubovichenko2004 S.B. Dubovichenko, Properties of light atomic nucleus in potential cluster model (Daneker, Almaty, 2004).Shirokov2004 A.M. Shirokov, A.I. Mazur, S.A. Zaytsev et al., Phys. Rev. C 70, 044005 (2004).Valderrama2005 M.P. Valderrama, E.R. Arriola, Phys. Rev C 72, 054002 (2005).Berezhnoy2005 Yu.A. Berezhnoy, V.Yu. Korda, A.G. Gakh, Intern. Jour. Mod. Phys. E 14, 1073 (2005).Knyr2006 V.A. Knyr, V.G. Neudatchin, N.A. Khokhlov, Phys. Atom. Nucl. 69, 2034 (2006).Khokhlov2007 N.A. Khokhlov, V.A. Knyr, V.G. Neudatchin, Phys. Rev. C 75, 064001 (2007).Krutov2007 A.F. Krutov, V.E. Troitsky, Phys. Rev. C 76, 017001 (2007).Mazur2007 A.I. Mazur, A.M. Shirokov, J.P. Vary et al., Bull. Russ. Academ. Scien.: Physics 71, 754 (2007).Higa2008 R. Higa, M.P. Valderrama, E.R. Arriola, Phys. Rev. C 77, 034003 (2008).Yang2009 C.-J. Yang, Ch. Elster, D.R. Phillips, PoS CD09:064 (2009).Valderrama2008 M.P. Valderrama, A. Nogga, E.R. Arriola, D.R. Phillips, Eur. Phys. J. A 36, 315 (2008).Valderrama2009 M.P. Valderrama, E.R. Arriola, Phys. Rev. C 79, 044001 (2009).Babenko2011 V.A. Babenko, N.M. Petrov, Phys. Atom. Nucl. 74, 352 (2011).Shojaei2011 M.R. Shojaei, A.A. Rajabi, T. Karimi, Appl. Phys. Research 3, 122 (2011).Lamia2012 L. Lamia, M. La Cognata, C. Spitaleri et al., Phys. Rev. C 85, 025805 (2012).Perez2013 R.N. Perez, J.E. Amaro, E.R. Arriola, Phys. Rev. C 88, 024002 (2013).Betan2014 R.M.Id Betan, Phys. Lett. B 730, 18 (2014).Perez20141 R.N. Perez, J.E. Amaro, E.R. Arriola, Phys. Rev. C 89, 024004 (2014).Perez20142 R.N. Perez, J.E. Amaro, E.R. Arriola, Phys. Rev. C 89, 064006 (2014).rezaei2014 B. Rezaei, A. Dashtimoghadam, Jour. Theor. Appl. Phys. 8, 203 (2014).Epelbaum2015 E. Epelbaum, H. Krebs, U.-G. Meiβner, Eur. Phys. J. A 51, 53 (2015).piarulli2015 M. Piarulli, L. Girlanda, R. Schiavilla et al., Phys. Rev. C 91, 024003, (2015).Takigawa2017 N. Takigawa, K. Washiyama.Fundamentals of Nuclear Physics (Springer Japan, Tokyo, 2017).Courant1953 R. Courant, D. Hilbert, Methods of Mathematical Physics (Interscience, New York, 1953).Neudatchin1974 V.G. Neudatchin, Y.F. 
Smirnov, Modern problems of optics and atomic physics (Kiev. State. Univ., Kiev, 1974).Kukulin1979 V.I. Kukulin, V.G. Neudatchin, Y.F. Smirnov, PEPAN 10, 1236 (1979).flugge1939 S. Flügge, Z. Phys. 113, 587 (1939).Rarita1948 W. Rarita, Phys. Rev. 74, 1799(1948).guindon1948 W.G. Guindon, Phys. Rev. 74, 145 (1948).gourdin1959 M. Gourdin, J.T.T. Van, Nuovo Cimento 14, 1051 (1959).Sakamoto1959 Y. Sakamoto, T. Sasakawa, Prog. Theor. Phys. 21, 879 (1959).donnachie1962 A. Donnachie, Nucl. Phys. 32, 637 (1962).Hulthen1957 L. Hulthen, M. Sugawara, In Handbook der Physik (Springer-Verlag, Berlin, 1957).bialkowski1964 G. Bialkowski, Nuovo Cimento 32, 1809 (1964).elliott19680 J.P. Elliott, A.D. Jackson, H.A. Mavromatis et al., Nucl. Phys. A 121, 241 (1968).Burnap1970 C. Burnap, J.S. Levinger, B. Siebert, Phys. Lett. B 33, 337 (1970).Haftel1971 M.I. Haftel, F. Tabakin, Phys. Rev. C 3, 921 (1971).Yale1962 K.E. Lassila, M.H. Hull, H.M. Ruppel et al., Phys. Rev. 26, 881 (1962).klarsfeld1981 S. Klarsfeld, J. Martorell, D.W.L. Sprung, Nucl. Phys. A 352, 113 (1981).Lacombe1981 M. Lacombe, B. Loiseau, J.M. Richard et al., Phys. Lett. B 101, 139 (1981).Lopes1983 M.H. Lopes, J.A. Tostevin, R.C. Johnson, Phys. Rev. C 28, 1179 (1983).deloff1984 A. Deloff, Z. Phys. 316, 49 (1984).klarsfeld1984 S. Klarsfeld, J. Martorell, D.W.L. Sprung, J. Phys. G 10, 165 (1984).locher1984 P. Locher, A. Svarc, Z. Phys. A 316, 55 (1984).Mrowczynski1992 S. Mrowczynski, Phys. Lett. B 277, 43 (1992).Dubovichenko20002 S.B. Dubovichenko, Phys. Atom. Nucl. 63, 734 (2000).Krutov2004 A.F. Krutov, V.E. Troitsky, N.A. Tsirova, Theor. Phys. 5, 17 (2004).Krutov2006 A.F. Krutov, V.E. Troitsky, N.A. Tsirova, Vestnik SamGU 3, 100 (2006).Krutov2008 A. Krutov, V. Troitsky, N. Tserova, PoS LC054 (2008).Gamzova2009 E.S. Gamzova, A.F. Krutov, V.E. Troitsky, N.A. Tsirova, Theor. Phys. 10, 32 (2009).platonova2010 M.N. Platonova, V.I. Kukulin, Phys. Atom. Nucl. 73, 86 (2010).Arndt2000 R. A. Arndt, I. I. Strakovsky, R. L. Workman, Phys. Rev. C 62, 034005 (2000).Stoks1993 V.G.J. Stoks, R.A.M. Klomp, M.C.M. Rentmeester, J.J. de Swart, Phys. Rev. C 48, 792 (1993).Zhaba1 V.I. Zhaba, Ukr. J. Phys. 61, 949 (2016).Zhaba2 V.I. Zhaba, Probl. Atom. Sci Tech. 3, 154 (2016).Zhaba3 V.I. Zhaba, Kharkov. Univ. Bull., Phys. Ser. 23, 36 (2015).Zhaba4 V.I. Zhaba, Visnyk Lviv Univ., Ser. Phys. 51, 77 (2016).Karmanov1981 V.A. Karmanov, Yad. Fiz. 34, 1020 (1981).Ladygin2002 V.P. Ladygin, N.B. Ladygina, Yad. Fiz. 65, 188 (2002).Zhaba5 V.I. Zhaba, Prykarpat. visnyk NTSh, Number Ser. 1, 139 (2016).Zhaba6 V.I. Zhaba, J. Phys. Stud. 20, 3101 (2016).Garson1994 M. Garson, J. Arvieux, D.H. Beck et al., Phys. Rev. C 49, 2516 (1994).Zhaba7 V.I. Zhaba, Electr. Journ. Theor. Phys. 13, 161 (2016).Zhaba8 V.I. Zhaba, Cherkasy Univ. Bull., Phys. and Mathem. Scienc. 349, 50 (2015).Zhaba9 V.I. Zhaba, Nucl. Phys. Atom. Energy 17, 22 (2016).Zhaba10 V.I. Zhaba, Mod. Phys. Lett. A 31, 1650139 (2016).Abbott2000a D. Abbott et al., Phys. Rev. Lett. 84, 5053 (2000).Abbott2000b D. Abbott et al., Eur. Phys. J. A 7, 421 (2000).Zhang2011 C. Zhang, M. Kohl, T. Akdogan et al., Phys. Rev. Lett. 107, 252501 (2011).
http://arxiv.org/abs/1706.08306v1
{ "authors": [ "V. I. Zhaba" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170626101345", "title": "Deuteron: properties and analytical forms of wave function in coordinate space" }
The electronic structure and energetic stability of A_2BX_6 halide compounds with the cubic and tetragonal variants of the perovskite-derived K_2PtCl_6 prototype structure are investigated computationally within the frameworks of density-functional-theory (DFT) and hybrid (HSE06) functionals. The HSE06 calculations are undertaken for seven known A_2BX_6 compounds with A = K, Rb and Cs, and B = Sn, Pd, Pt, Te, and X = I. Trends in band gaps and energetic stability are identified, which are explored further employing DFT calculations over a larger range of chemistries, characterized by A = K, Rb, Cs, B = Si, Ge, Sn, Pb, Ni, Pd, Pt, Se and Te and X = Cl, Br, I. For the systems investigated in this work, the band gap increases from iodide to bromide to chloride. Further, variations in the A site cation influence the band gap as well as the preferred degree of tetragonal distortion. Smaller A site cations such as K and Rb favor tetragonal structural distortions, resulting in a slightly larger band gap. For variations in the B site in the (Ni, Pd, Pt) group and the (Se, Te) group, the band gap increases with increasing cation size. However, no chemical trend of band gap with respect to cation size was observed for the (Si, Sn, Ge, Pb) group. The findings in this work provide guidelines for the design of halide A_2BX_6 compounds for potential photovoltaic applications.

§ INTRODUCTION

Since the initial discovery of lead halide perovskite compounds as solar absorbers in photovoltaic devices<cit.>, the power conversion efficiencies (PCEs) achieved with these materials have increased steadily and currently reach 22.1%<cit.>. These high PCEs have motivated significant efforts aimed ultimately at the commercial application of lead-based halide perovskites for solar power conversion. For such applications, two issues that continue to receive considerable attention are the toxicity of lead, which can be leached out of APbX_3 compounds due to their aqueous solubility <cit.>, and the poor chemical stability in air <cit.>. The first of these two issues has motivated research into the use of alternative Sn/Ge-based perovskite compounds, although limited PCEs of 6% or less have been demonstrated with these materials to date <cit.>. Further, like their Pb-based counterparts, Sn/Ge-based perovskite compounds have also been found to suffer from poor chemical stability <cit.>.

The inorganic compound Cs_2SnI_6 has received recent attention as an alternative to Sn-based halide perovskites for photovoltaic device applications. In comparison with Sn- and Pb-based halide perovskites, Cs_2SnI_6 has been shown to feature enhanced stability in ambient environments<cit.>, correlating with the presence of a more stable higher oxidation state for Sn in this compound (formally 4+ in Cs_2SnI_6 compared with 2+ in CsSnI_3). The crystal structure of Cs_2SnI_6 can be described as a defect variant of perovskite, with half of the Sn atoms removed, as illustrated in Fig. <ref>. In this figure, the SnX_6 octahedra in the Cs_2SnI_6 compound can be seen to be isolated, in contrast to the corner-sharing arrangement characterizing the perovskite structure. Associated with this structural change, the Sn-I bond lengths are shorter in the Cs_2SnI_6 compound, which has been correlated with its enhanced chemical stability <cit.>. In addition to its desirable stability, the electronic and optical properties of Cs_2SnI_6 have also been demonstrated to be attractive for photovoltaic device applications.
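As an aside, the K_2PtCl_6-type arrangement described above (isolated BX_6 octahedra on the sites of a defect perovskite) can be sketched with pymatgen. This is our own illustration, not the authors' input files; the lattice constant and the free iodine coordinate below are rough illustrative values, not fitted data.

```python
from pymatgen.core import Lattice, Structure

# Cubic Fm-3m A2BX6 (K2PtCl6 prototype): B on 4a, A on 8c, X on 24e.
# Both a and the free x parameter of the 24e site are rough guesses
# for Cs2SnI6, quoted here only for illustration.
a = 11.6  # Angstrom (assumption)
cs2sni6 = Structure.from_spacegroup(
    "Fm-3m", Lattice.cubic(a),
    ["Sn", "Cs", "I"],
    [[0.0, 0.0, 0.0], [0.25, 0.25, 0.25], [0.245, 0.0, 0.0]],
)
print(cs2sni6.composition)  # Cs8 Sn4 I24, i.e. four Cs2SnI6 formula units
```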
Specifically, Cs_2SnI_6 is a direct-gap semiconductor that displays strong visible light absorption <cit.>. The measured band gaps reported range from 1.6 eV<cit.> to 1.48 eV<cit.> to 1.3 eV<cit.>. The valence band and conduction band are well dispersed, with dominant characters based on I-p and hybrid I-p/Sn-s orbitals, respectively<cit.>. A high electron mobility of 310 cm^2/(V s) and a sizable hole mobility of 42 cm^2/(V s) were reported in bulk compounds <cit.>. The Cs_2SnI_6 compound has been reported to display intrinsic n-type conductivity<cit.> (with carrier concentrations of 10^14 cm^-3 and 5×10^16 cm^-3), and it has been shown that it can be doped p-type with SnI_2<cit.> (with carrier concentrations of 10^14 cm^-3), demonstrating the ambipolar nature of this material <cit.>. As an initial attempt, PCEs of about 1% in air have recently been demonstrated in photovoltaic devices employing this material as a photoabsorber <cit.>.

As a means for further optimizing the properties of Cs_2SnI_6 for device applications, substitutional alloying is expected to be a useful strategy, as the A_2BX_6 structure can be readily doped with different impurity ions, especially in the 6-fold coordinated tetravalent cation site<cit.>. Studies investigating the effect of chemical substitutions have been limited to date, but they have clearly demonstrated the possibility of tuning electronic properties. For example, in a study comparing hole-transport properties in Cs_2SnI_6, Cs_2SnBr_6 and Cs_2SnCl_6, systematic variations in the optical and transport properties were observed to be correlated with the substitution of halogen anions having different size and electronegativity <cit.>. In another study, the alloying of Cs_2SnI_6 with Cs_2TeI_6, to form Cs_2(Sn_1-xTe_x)I_6 solid solutions, was found to lead to reduced mobility and reduced defect tolerance <cit.>. While these studies have not yet resulted in improved materials for device applications, they have clearly demonstrated the importance of the B-X chemistry in defining optical and transport properties.

To guide further studies aimed at the use of chemical substitutions to optimize properties for photovoltaic applications, in this work we employ density-functional-theory (DFT) based computational methods to explore trends underlying the variation of electronic structure and structural stability of A_2BX_6 compounds with chemical composition. We begin by employing hybrid-functional (HSE06) methods to calculate and analyze the electronic structures of seven known inorganic compounds with varying A (= K, Rb, Cs) and B (= Pt, Pd, Sn, and Te) site cations, for the case of X = I. We use these results as benchmarks to demonstrate the ability of computationally more efficient semi-local DFT functionals (Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation, or GGA-PBE) to capture chemical trends. Using the GGA-PBE method we extend the study to consider a wider range of chemistries on the A (A = K, Rb, Cs), B (B = Si, Ge, Sn, Pb, Ni, Pd, Pt, Se, Te) and X sites (X = Cl, Br, I). The results yield the following chemical trends. For variations in the halide anion from Cl to Br to I, the band gap and effective mass are found to decrease, while variations in the A site cation from Cs to Rb to K lead to an increase in tetragonal distortion and an associated increase in band gap and effective mass.
For variations in the B site in the group of Si, Ge, Sn, Pb, there is no observed chemical trend of band gap with respect to cation size, with larger values for Si and Sn and smaller values for Ge and Pb. For variations in the B site in the (Ni, Pd, Pt) group the band gap increases with increasing cation size. For variations in the B site in the group of Se and Te, the band gap increases with increasing size.

§ APPROACH

§.§ Structures and Chemistries Considered

In this section we provide a brief description of the chemical compositions of the A_2BX_6 inorganic compounds considered in the present computational studies. We focus initially on compounds with the halide anion X = I and consider seven compounds with varying alkali A-site cations and tetravalent B-site cations: K_2PtI_6, Rb_2PtI_6, Cs_2PtI_6, Cs_2PdI_6, Rb_2SnI_6, Cs_2SnI_6 and Cs_2TeI_6. Each of these compounds has been synthesized and characterized experimentally, with crystal structure parameters tabulated in the Inorganic Crystal Structure Database (ICSD)<cit.>. All of these compounds have the cubic (space group Fm3̅m) structure illustrated in Fig. <ref> (b), with the exception of K_2PtI_6, which is tetragonally distorted (space group P4/mnc). From the DFT-GGA results available through the Materials Project <cit.>, these chemistries are expected to yield a range of band gap values that is relevant for photovoltaic applications. For each of these seven compounds we have undertaken computational studies of the electronic structure for both cubic and tetragonal polymorphs, employing the Heyd-Scuseria-Ernzerhof HSE06 <cit.> hybrid functional, as detailed below. The calculated hybrid-functional results are compared with available experimental data, and are used to demonstrate the ability of semi-local DFT calculations, based on the Perdew-Burke-Ernzerhof <cit.> Generalized-Gradient Approximation (PBE-GGA), to capture the main chemical trends. With PBE-GGA DFT methods we expand the range of chemistries considered, presenting results in what follows for A_2BX_6 compounds with A = K, Rb, Cs, B = Si, Ge, Sn, Pb, Ni, Pd, Pt, Se and Te, and X = Cl, Br, I.

To motivate the choice of crystal structures considered in this work, we show in Fig. <ref> a structure map similar to those used in studies of perovskite-based compounds<cit.>. The axes in this figure correspond to the octahedral factor and a radius ratio defined below. The octahedral factor is defined as the ratio between the B cation radius and the X anion radius. The radius ratio is the ratio between the radius of the A site cation and the size of the cavity formed by the neighboring halogen anions<cit.>. For perovskite compounds, the octahedral factor is used to empirically predict the formation of the BX_6 octahedron, and the tolerance factor is used to empirically predict the formation and distortion of the perovskite structure. Likewise, in the A_2BX_6 perovskite-derived structure, we can combine the octahedral factor and radius ratio to predict the formation and distortion of the structure. Small octahedral factors suggest that the formation of BX_6 octahedra is disfavored. A small radius ratio results in distortion of the cavity and a lower symmetry of the structure, or even totally different connectivity of the octahedra network. According to the survey of known A_2BX_6 compounds in the ICSD database<cit.>, shown in Fig. <ref>, most known compounds are cubic (Fm3̅m) and are indicated by red circles.
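To make the two geometric descriptors concrete, the sketch below computes them for a few A-B-X combinations. The Shannon-type ionic radii are approximate values quoted only for illustration, and the simple perovskite-like cavity estimate used for the radius ratio is our own stand-in, not the exact expression used to build the structure map.

```python
# Sketch: geometric descriptors for A2BX6 screening.
# Ionic radii in Angstrom; approximate six-coordinate values for B and X,
# twelve-coordinate values for A (illustrative assumptions).
R_B = {"Si": 0.40, "Ge": 0.53, "Sn": 0.69, "Pb": 0.775,
       "Ni": 0.48, "Pd": 0.615, "Pt": 0.625, "Se": 0.50, "Te": 0.97}
R_X = {"Cl": 1.81, "Br": 1.96, "I": 2.20}
R_A = {"K": 1.64, "Rb": 1.72, "Cs": 1.88}

def octahedral_factor(B, X):
    """Ratio between the B cation radius and the X anion radius."""
    return R_B[B] / R_X[X]

def radius_ratio(A, B, X):
    """r_A over a crude estimate of the X-anion cavity radius.

    The cavity radius below assumes perovskite-like geometry
    (A-X distance = sqrt(2)*(r_B + r_X)); this is a simplifying
    assumption, not the authors' expression.
    """
    cavity = (R_B[B] + R_X[X]) * 2 ** 0.5 - R_X[X]
    return R_A[A] / cavity

for A, B, X in [("Cs", "Sn", "I"), ("K", "Pt", "I"), ("Cs", "Te", "I")]:
    print(A, B, X,
          round(octahedral_factor(B, X), 3),
          round(radius_ratio(A, B, X), 3))
```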
Because most of the known compounds are cubic, most of the trends in this work are derived from considerations of cubic (Fm3̅m) structures. On the other hand, as shown in Fig. <ref>, compounds with smaller radius ratio and smaller octahedral factor tend to form non-cubic structures, as indicated by blue diamond symbols. Among these non-cubic compounds, most adopt the tetragonal P4/mnc structure. In this work, the effect of tetragonal distortions and octahedral rotations is investigated for select compositions by comparing results for the cubic structure with those computed for the tetragonal P4/mnc structure. As shown in Fig. <ref>, the two structures differ by the rotations of octahedra in the a-b plane.

§.§ Computational Methods

Calculations were carried out employing spin-polarized HSE06 and PBE-GGA based DFT methods using the Projector Augmented Wave (PAW) method <cit.>, as implemented in the Vienna Ab initio simulation package (VASP)<cit.>. The PAW potentials used in the calculations are the same as those underlying the data provided in the Materials Project<cit.>, in order to facilitate comparisons with the results available through this database. The energy cutoff for the plane wave basis for all compounds was set to 520 eV. For cubic structures, self-consistent calculations were carried out with a gamma-centered k-point mesh of 6×6×6. Based on convergence tests for Cs_2SnI_6, Rb_2TeI_6 and K_2PtI_6, this choice of plane-wave cutoff and k-point density is found to be sufficient to provide total energies converged to within 1 meV/atom, lattice constants within 0.01 Å, and band gaps within 1 meV. For tetragonal structures, self-consistent calculations were carried out with a gamma-centered k-point mesh of 4×4×4, to maintain a similar k-point density as used for the cubic structures. For relaxation and density of states calculations, the tetrahedron method with Blöchl corrections was used for k-space integration. To check the importance of relativistic effects on band gaps, spin-orbit coupling was included, using the standard approach in VASP, for three representative compounds with heavier elements. The self-consistency iterations were performed until the energy was converged to within 1×10^-5 eV. The structural relaxations were undertaken until the forces were converged within 0.01 eV/Å for the GGA-PBE calculations, and within 0.05 eV/Å for HSE06. For the compounds containing B-site transition-metal cations (B = Ni, Pd, and Pt) with unfilled d shells in the 4+ charge state, we ran spin-polarized calculations considering both high-spin and low-spin configurations, finding the low-spin (zero local moment) states to be lowest in energy.

The average effective masses<cit.> were calculated using the BoltzTraP code<cit.> and the pymatgen package<cit.>. Gamma-centered 20×20×20 and 18×18×12 k-point meshes were used for GGA-PBE band structure calculations of cubic and tetragonal structures, respectively. The band structures were then used as input to the BoltzTraP code to calculate the conductivity tensor σ(T,μ)/τ, with τ the constant relaxation time, T set to room temperature (300 K), and μ the Fermi level. The averaged effective masses were then calculated as 𝐦 = ne^2τ/σ. As mentioned by Hautier et al.<cit.>, the effective mass defined in this way is an average of 𝐦(i,𝐤) around the Fermi level μ, with i the band index and 𝐤 the wave vector. For the electron effective mass, μ was set at the conduction band minimum; for the hole effective mass, μ was set at the valence band maximum.
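As a concrete illustration of the definition 𝐦 = ne^2τ/σ, the following sketch averages an effective mass from a BoltzTraP-style σ/τ tensor. The numbers are fabricated for illustration, and the routine is our own, not part of BoltzTraP or pymatgen.

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg

def average_effective_mass(sigma_over_tau, n):
    """Average effective mass m = n e^2 / (sigma/tau), in electron masses.

    sigma_over_tau : 3x3 conductivity tensor divided by the constant
                     relaxation time, in S/(m*s), evaluated at the chosen
                     chemical potential and temperature.
    n              : carrier concentration in m^-3.
    """
    # Invert along the principal directions, then average isotropically.
    inv_mass = np.linalg.eigvalsh(np.asarray(sigma_over_tau)) / (n * E_CHARGE**2)
    mass = 1.0 / inv_mass            # kg, per principal direction
    return mass.mean() / M_E         # in units of m_e

# Illustrative (fabricated) isotropic example:
sot = np.eye(3) * 1.0e18             # sigma/tau in S/(m*s)
print(average_effective_mass(sot, n=1.0e26))  # roughly 2.8 m_e
```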
Convergence of the effective mass values with respect to the density of k-points over which the band structure was sampled was carefully tested, and the values were found to be converged to within 1% using a 20×20×20 mesh.

§ RESULTS AND DISCUSSION

§.§ Calculated Results for Experimentally Reported A_2BI_6 Compounds

Table <ref> lists calculated results for the lattice constants of the seven reported A_2BI_6 compounds, along with experimentally measured values at room temperature. The calculated and measured results agree reasonably well, with the GGA-PBE values larger than the measurements by as much as 3.3%, and those from HSE06 in slightly better agreement, with deviations of up to 2.5%. Table <ref> shows that no obvious difference in B-X bond length is found between cubic and tetragonal structures. The last column of Table <ref> lists the octahedral rotation angle α as illustrated in Fig. <ref>. Among the seven compounds, K_2PtI_6 has the largest α, consistent with the fact that it is the only compound that is experimentally observed to form in the tetragonal structure at room temperature.

Band gaps and effective masses of both cubic and tetragonal structures calculated by HSE06 are listed in Tables <ref> and <ref>, respectively. In these tables, results for the structures that are reported to be stable at room temperature are indicated in bold font. The HSE06 calculated band gaps listed in Table <ref> span the range of 0.8 to 2.2 eV. For Cs_2SnI_6, Cs_2PtI_6 and Cs_2TeI_6, the effects of spin-orbit coupling (SOC) on the calculated band gaps were computed and found to lead to at most a reduction of 0.2 eV. For the other compounds, involving lighter ions, these effects are expected to be smaller. The calculated band gap of Cs_2SnI_6 including spin-orbit coupling is 1.011 eV, which is consistent with the value of 0.97 eV reported in the literature using the same computational approach<cit.>. As pointed out by Maughan et al.<cit.>, the calculated band gap is the fundamental band gap, across which the transition is dipole forbidden in Cs_2SnI_6. This explains why the measured band gap is significantly larger than the calculated values. The calculated band gap value for Cs_2TeI_6 is 0.4 eV larger than the measured values, which is acceptable given the 0.3 eV mean absolute error of band gaps reported for HSE06 for a semiconductor test set<cit.>. An HSE06+SOC calculation reported<cit.> for Cs_2TeI_6 gives an indirect band gap of 1.83 eV. The 0.15 eV discrepancy with our calculated results listed in Table <ref> may reflect the smaller cutoff energy and k-point density used in the previous calculations.

The present calculated results display clear trends with the size of the A-site cation. Considering first the trends for the cubic structures, the band gap values decrease systematically as the A site varies from Cs to Rb to K. By contrast, the trend is the opposite in the tetragonal structure. For a given chemistry, the band gap increases in going from the cubic to the tetragonal structure, and the magnitude of this increase is larger for the compounds with the smaller A-site cation. These trends will be discussed further below based on the bonding in these compounds. From Fig. <ref> to Fig. <ref> we present calculated HSE06 bandstructures, all obtained neglecting SOC effects, for each of the seven compounds listed in Table <ref>. Each separate figure gives bandstructures for a set of compounds with a fixed B-site cation.
Beginning with the Sn-based compounds, Cs_2SnI_6 and Rb_2SnI_6 have both been synthesized and their structures characterized experimentally<cit.>. Considering first the electronic structures of Cs_2SnI_6 and Rb_2SnI_6 in Fig. <ref>, both compounds are calculated to be direct gap semiconductors with valence bands derived primarily from I p states, and conduction bands derived from hybridized Sn s and I p states. The direct nature of the band gap in Cs_2SnI_6 is consistent both with previous ab initio calculations<cit.> and with experiments<cit.>.

We consider next the B = Pd and Pt systems Cs_2PdI_6, Cs_2PtI_6, Rb_2PtI_6, and K_2PtI_6. The latter compound forms in the tetragonal structure while all of the others are cubic<cit.>. The calculated bandstructures in Fig. <ref> and Fig. <ref> show that these compounds are indirect gap semiconductors with valence bands derived primarily from I p states, and conduction bands derived from hybridized Pd/Pt d and I p states. Note that their direct gaps at the X points are only slightly larger than the fundamental gaps.

We consider finally the Cs_2TeI_6 compound, which is experimentally observed to form in the cubic structure<cit.>, with measured band gaps reported ranging from 1.5 eV<cit.> to 1.59 eV<cit.>. The measured band gap values are smaller than the HSE06 calculated values (including SOC) by ∼0.4 eV. The calculated bandstructure in Fig. <ref> shows that this compound is an indirect gap semiconductor with valence bands derived primarily from I p states, and conduction bands derived from hybridized Te p and I p states. These results are consistent with those reported by Maughan et al.<cit.>.

We consider next the results for effective masses listed in Table <ref>. In all seven compounds, holes are found to be heavier than electrons. This result can be understood based on the fact that the valence bands are derived from unhybridized halogen p orbitals, which are less dispersed compared with the conduction bands that are derived from anti-bonding states between halogen anions and B site cations. The trend obtained here is contrary to the case in halide perovskites, where holes are lighter than electrons<cit.>. The trends in the effective mass values listed in Table <ref> are correlated with those for the band gaps. Specifically, in cubic structures, for a given B-site cation, reducing the size of the A-site cation leads to a decrease in both the effective mass and the band gap, consistent with the expectations of k·p theory<cit.>. In tetragonal structures, for a given B-site cation, reducing the size of the A-site cation leads to an increase in both the electron effective mass and the band gap. The variations of the hole effective masses are more complex because of the presence of multiple valence bands at the band edge.

§.§ Trends in band gaps and stability across broader compositional ranges

The HSE06 results in the previous section display clear trends in the electronic structure as the compositions of A_2BX_6 are varied with X = I. In this section we investigate these trends over a broader range of 81 total compositions, considering A = (K, Rb, Cs), B = (Si, Ge, Sn, Pb, Ni, Pd, Pt, Se, Te), and X = (Cl, Br, I), using the computationally efficient GGA-PBE method. As shown in Fig.
<ref>, which compares results for band gaps obtained from GGA-PBE and HSE06 for the compounds considered in the previous section, the former reproduces the trends from the latter method quite well, even though the semi-local GGA-PBE functional systematically underestimates the band-gap values, as expected. The emphasis in the presentation of results in this section is specifically on compositional trends rather than on absolute values for the band gaps. In addition, we present results related to the relative structural stability of the cubic and tetragonal phases of the compounds.

We consider first the geometrical factors of octahedral factor and radius ratio introduced above. Figure <ref> plots these values for each of the 81 compositions considered, and the results provide insights into why only a small subset of these compositions has been observed to form A_2BX_6 compounds experimentally. For example, it can be seen from Fig. <ref>(a) that B = (Si, Ge, Ni and Se) cations lead to small octahedral factors. For these B-site cations, only K_2SeBr_6, Rb_2SeCl_6, Cs_2SeCl_6 and Cs_2GeCl_6 have been reported to form experimentally according to the ICSD <cit.>; no compounds with X = I have been reported, consistent with the smaller octahedral factors characterizing these compounds. The results in Fig. <ref>(b) show that the radius ratio decreases systematically for A cations changing from Cs to Rb to K, and for X site anions varying from Cl to Br to I. These results suggest that compounds with A = K and/or X = I are expected to display distorted (non-cubic) phases, which is consistent with the reported non-cubic crystal structures for K_2PtI_6, Rb_2TeI_6, K_2TeI_6 and K_2TeBr_6 in the ICSD <cit.>. This argument is also supported by the calculated energy difference between cubic and tetragonal structures for the 81 compounds, as displayed in Fig. <ref>. The energy of tetragonal K_2BI_6 is significantly lower than the energy of cubic K_2BI_6, indicating the greater tendency of the K_2BI_6 compounds to form tetragonal structures.

Calculated band gaps obtained with the GGA-PBE functional for the 81 compounds in ideal cubic structures are plotted in Fig. <ref>. The compounds on the 'blue' side have smaller band gaps. Based on a comparison of these GGA-PBE results with the more accurate HSE06 calculations presented in the previous section, these materials may have band gaps in the right range for solar absorber applications. Similarly, compounds such as the chlorides shown in red in Fig. <ref> are expected to be large-gap semiconductors/insulators. General chemical trends are apparent in the calculated band gaps with increasing size of the A site cation and halogen anion, as demonstrated more explicitly in Fig. <ref>, which includes results for both cubic and tetragonal structures. To reduce the complexity of this figure, only Sn, Pt and Te compounds are shown; however, the trends in Fig. <ref> for compounds with other B-site cations are similar. The calculated band gaps are shown in Fig. <ref> (b)-(d) to decrease with increasing size of the halide anion, i.e., from Cl to Br to I. This trend occurs for both cubic and tetragonal structures, and across all of the B and A site combinations presented. This trend can be understood based on the electronic states at the conduction and valence bands.
Taking the B = Sn compounds as an example, the conduction band is composed of anti-bonding states between Sn s orbitals and halide p orbitals. With increasing size of the halide anion, the Sn-X bond lengths increase systematically, correlating with a decrease in the splitting between bonding and anti-bonding states and a concomitant lowering of the CBM. The VBM is composed of states that are primarily derived from the halide anion p orbitals, and these states are expected to rise in energy with the decreasing electronegativity going from Cl to Br to I. The trend of decreasing band gap with increasing size of the halide anion can thus be understood as resulting from a combination of a lower CBM and a higher VBM. The bonding picture presented above is consistent with that discussed by Xiao et al.<cit.>.

The effects of variations of the A-site cation on the calculated band gaps are illustrated in Fig. 11(a) for the case of X = I. The effects are seen to be weaker than those resulting from variations in the halide anion, and they are shown to be qualitatively different for cubic and tetragonal phases. Specifically, the calculated band gaps are found to increase and decrease for cubic and tetragonal structures, respectively, with increasing A-site cation size. For the cubic phases, the results can be understood as arising from the effect of the A-site cation on the distance between neighboring I sites. As the size of the A-site cation increases, the distance between neighboring I sites increases, while the B-X bond lengths in the BI_6 octahedra remain largely unchanged. Increasing I neighbor distances correlate with a narrowing of the I-p band and thus a lowering of the VBM, consistent with the increase in band gap from K to Rb to Cs.

We consider next the trends with the A-site cation for the tetragonal structures. For the smaller A-site cations (K and Rb), the tetragonal structure is lower in energy and the band gap larger than that for the corresponding cubic phase. With decreasing size of the A site cation, the degree of octahedral rotation (cf. Fig. 2) increases, which leads to a decrease in the bonding strength between neighboring halide ions as their p orbitals increasingly point away from each other. The larger band gap for the smaller A-site cation in the tetragonal phase thus correlates with a resulting narrowing of the valence band.

§ SUMMARY

First-principles calculations employing the hybrid HSE06 method have been undertaken to compute the electronic structures of seven perovskite-derived A_2BI_6 compounds, considering B = Sn, Pd, Pt and Te cations. Calculated band gaps and effective masses of cubic structures decrease as the A site cation size decreases.
These trends were explored over a broader range of A_2BX_6 halide chemistries, considering in total 81 combinations of A (= K, Rb, Cs), B (= Si, Ge, Sn, Pb, Ni, Pd, Pt, Se, Te) and X (= Cl, Br and I) ions, employing semi-local GGA-PBE calculations. The results show that the trend of increasing band gaps with decreasing size of the halide anion holds across the compounds, for both cubic and tetragonal structures. The effect of A-site cations is more complex. Within the cubic structure, decreasing size of the A-site cation leads to a decrease in the calculated band gap, while also favoring the structural distortion associated with the rotation of the BX_6 octahedra in the tetragonal phase, which has the effect of increasing the calculated band gap.

The trends identified in this computational study provide guidelines for the use of substitutional alloying as a means of tuning band gaps and structural stability for the use of A_2BX_6 compounds in applications such as solar photo-absorbers. For example, alloying of Cs_2BX_6 with Rb_2BX_6 could be expected to give rise to increasing rotation of the BX_6 octahedra and an increase in the band gap. Similarly, as already demonstrated by McMeekin et al.<cit.>, alloying of both A and X sites can be expected to be effective in tuning both band gap and structural stability. We note that alloying of B site Sn and Te cations has also been explored in this context <cit.>. The general trends in band gaps and structural stability identified in this computational study are anticipated to be helpful in guiding further work in these directions.

This work was funded by the National Research Foundation (NRF), Singapore (CRP NRF2014NRF-CRP002-036) and the Singapore-Berkeley Research Initiative for Sustainable Energy (SinBeRISE) CREATE programme. This work made use of computational resources provided under the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by the National Science Foundation grant No. OCI-1053575.
http://arxiv.org/abs/1706.08674v1
{ "authors": [ "Yao Cai", "Wei Xie", "Hong Ding", "Yan Chen", "Thirumal Krishnamoorthy", "Lydia H. Wong", "Nripan Mathews", "Subodh G. Mhaisalkar", "Matthew Sherburne", "Mark Asta" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170627051702", "title": "Computational Study of Halide Perovskite-Derived A$_2$BX$_6$ Inorganic Compounds: Chemical Trends in Electronic Structure and Structural Stability" }
Ergodic aspects of some Ornstein-Uhlenbeck type processes related to Lévy processes

Jean Bertoin

This work concerns the Ornstein-Uhlenbeck type process associated to a positive self-similar Markov process (X(t))_t≥0 which drifts to ∞, namely U(t) := e^-tX(e^t-1). We point out that U is always a (topologically) recurrent Markov process and identify its invariant measure in terms of the law of the exponential functional Î := ∫_0^∞ exp(ξ̂_s) ds, where ξ̂ is the dual of the real-valued Lévy process ξ related to X by the Lamperti transformation. This invariant measure is infinite (i.e. U is null-recurrent) if and only if ξ_1 ∉ L^1(ℙ). In that case, we determine the family of Lévy processes ξ for which U fulfills the conclusions of the Darling-Kac theorem. Our approach relies crucially on a remarkable connection due to Patie <cit.> with another generalized Ornstein-Uhlenbeck process that can be associated to the Lévy process ξ, and on properties of time-substitutions based on additive functionals.

Keywords: Ornstein-Uhlenbeck type process, Stationarity, Self-similar Markov process, Lévy process, Exponential functional, Darling-Kac theorem.

AMS subject classifications: 60G10, 60G18, 60G51, 37A10

§ INTRODUCTION

Let (ξ_t)_t≥0 be a real-valued Lévy process which drifts to ∞, that is lim_t→∞ ξ_t = ∞ a.s. The so-called exponential functional I(t) := ∫_0^t exp(ξ_s) ds defines a random bijection I: ℝ_+ → ℝ_+, and we denote its inverse by τ. A well-known transformation due to Lamperti <cit.>, X(t) := exp(ξ_τ(t)), yields a Markov process (X(t))_t≥0 on (0,∞) that enjoys the scaling property (with index 1), in the sense that for every x>0, (xX(t/x))_t≥0 is a version of X started from x. Conversely, any Markov process X on (0,∞) that fulfills the scaling property (with index 1) and drifts to ∞ can be constructed in this way. We refer to the survey by Pardo and Rivero <cit.> and references therein for a detailed presentation of the topic.

The question of the existence of a truly self-similar version (X̃_t)_t≥0, that is, X̃ is a Markov process with the same transition probabilities as X and further there is the identity in distribution (cX̃(t/c))_t≥0 (d)= (X̃(t))_t≥0 for every c>0, is equivalent to the question of whether 0+ is an entrance boundary for the Markov process X. This was raised by Lamperti, and settled in the present setting[This question also makes sense when ξ oscillates, that is lim sup_t→∞ ξ_t = ∞ and lim inf_t→∞ ξ_t = -∞ a.s. It was proved in <cit.> and <cit.> that the answer is positive if and only if the so-called ascending ladder height of ξ has a finite expectation.] in <cit.>: the answer is positive if and only if ξ_1 ∈ L^1(ℙ) (recall that then 𝔼(ξ_1) > 0, since the test of Chung and Fuchs ensures that in dimension 1, centered Lévy processes are recurrent and therefore oscillate), and further the stationary law can then be expressed in terms of the exponential functional Î of the dual Lévy process.

On the other hand, there is another well-known transformation à la Ornstein-Uhlenbeck, also due to Lamperti <cit.>, that yields a bijection between self-similar processes and stationary processes.
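To fix ideas, here is a minimal numerical sketch of these constructions (our own illustration, not taken from the paper); for concreteness, ξ is taken to be Brownian motion with positive drift, so that ξ_1 is integrable and ξ drifts to ∞:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized Levy path: Brownian motion with drift 1/2, an assumption
# made purely for illustration (any Levy process drifting to +infinity works).
dt, n = 1e-3, 40_000
s = dt * np.arange(1, n + 1)
xi = np.cumsum(0.5 * dt + np.sqrt(dt) * rng.standard_normal(n))

I = np.cumsum(np.exp(xi)) * dt                  # I(t) = int_0^t exp(xi_u) du
tau = lambda t: np.interp(t, I, s)              # inverse bijection tau = I^{-1}
X = lambda t: np.exp(np.interp(tau(t), s, xi))  # Lamperti: X(t) = exp(xi_{tau(t)}), X(0) = 1
U = lambda t: np.exp(-t) * X(np.exp(t) - 1.0)   # Ornstein-Uhlenbeck type process

t_grid = np.linspace(0.0, np.log(I[-1]), 5)     # stay inside the simulated horizon
print(U(t_grid))
```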
In the present setting, assuming again that ξ_1 ∈ L^1(ℙ) and writing X̃ for the self-similar version of X, Ũ(t) := e^-tX̃(e^t), t ∈ ℝ, is a stationary process on (0,∞). Furthermore, the scaling property ensures that U(t) := e^-tX(e^t-1), t ≥ 0, is Markovian, and the Markov processes U and Ũ have the same semigroup.

The initial motivation for this work is to analyze the situation when ξ_1 ∉ L^1(ℙ). We shall show that the Ornstein-Uhlenbeck type process U still possesses a stationary version Ũ, but now under an infinite measure which is absolutely continuous with respect to ℙ. More precisely, U is a (null) recurrent Markov process and its invariant measure ν can be expressed, similarly as in the positive recurrent case, in terms of the dual exponential functional Î. When 𝔼(ξ_1) = ∞, the claim that U is recurrent might look surprising at first sight, since the Lévy process may grow faster than any given polynomial (think for instance of stable subordinators). One could expect that the same might hold for X(t) = exp(ξ_τ(t)), which would then impede the recurrence of U. However, the time-substitution by τ has a slowing-down effect when X gets larger, and actually X only grows linearly fast.

Our main result is related to the celebrated Darling-Kac theorem, which can be thought of as a version of Birkhoff's ergodic theorem in infinite invariant measure; see e.g. Theorem 3.6.4 in <cit.> and Theorem 8.11.3 in <cit.>. We show that if a: (0,∞) → (0,∞) is regularly varying at ∞ with index α ∈ (0,1), then for every nonnegative f ∈ L^1(ν), a(t)^-1∫_0^t f(U(s)) ds converges in distribution as t→∞ towards a Mittag-Leffler distribution with parameter α if and only if b(t)^-1ξ_t converges in distribution as t→∞ to a positive stable random variable with exponent α, where b denotes an asymptotic inverse of a.

At the heart of our approach lies the fact that one can associate to the Lévy process ξ another generalized Ornstein-Uhlenbeck process, namely V(t) := exp(-ξ_t)(I(t)+V(0)), t ≥ 0. Lindner and Maller <cit.> have shown that, since ξ drifts to ∞, V always possesses a stationary version Ṽ, no matter whether ξ_1 is integrable or not. Patie <cit.> pointed at a remarkable connection between U and V via a simple time substitution, and this provides a powerful tool for the analysis of U.

The rest of this paper is organized as follows. We start in Section 2 by providing background on the generalized Ornstein-Uhlenbeck process V. Then, in Section 3, we construct a stationary version Ũ of U under a possibly infinite equivalent measure, and point at the topological recurrence of U. Finally, in Section 4, we address the Darling-Kac theorem for the occupation measure of Ũ.

§ BACKGROUND ON ANOTHER GENERALIZED ORNSTEIN-UHLENBECK PROCESS

We start by recalling the basic time-reversal property of Lévy processes, also known as the duality identity, which plays an important role in this subject. If we denote by ξ̂ the so-called dual Lévy process, which has the same law as -ξ, then for every t>0, there is the identity in distribution between càdlàg processes (-ξ_t+ξ_(t-s)-)_0≤s≤t (d)= (ξ̂_s)_0≤s≤t. Following Carmona et al. <cit.> and Lindner and Maller <cit.>, as well as other authors, we associate to the Lévy process ξ another generalized Ornstein-Uhlenbeck process (V(t))_t≥0, V(t) := exp(-ξ_t)(I(t)+V(0)) = ∫_0^t exp(ξ_s-ξ_t) ds + V(0)exp(-ξ_t), where the initial value V(0) is arbitrary and may be random. It was observed in <cit.> and <cit.> that the time-reversal property and the a.s.
finiteness of the dual exponential functional Î := Î(∞) = ∫_0^∞ exp(ξ̂_s) ds (which is known to follow from our assumption that ξ drifts to ∞, see Theorem 1 in <cit.>, or Theorem 2 in <cit.>) immediately imply that lim_t→∞ V(t) = Î in distribution, independently of the initial value V(0). The distribution of Î, μ(dx) := ℙ(Î ∈ dx), x ∈ (0,∞), thus plays a fundamental role in this setting; it has been studied in depth in the literature, see in particular <cit.> and references therein. Lindner and Maller (Theorem 2.1 in <cit.>) pointed at the fact that if V(0) has the same law as Î and is independent of ξ, then the process (V_t)_t≥0 is stationary. It will be convenient for us to rather work with a two-sided version (Ṽ_t)_t∈ℝ, which can easily be constructed as follows.
Further,sufficientconditions ensuring exponential ergodicity can be found in Theorem 4.3 of Lindner and Maller <cit.>, Lee <cit.>, Wang <cit.>,and Kevei <cit.>.§ A TIME SUBSTITUTION AND ITS CONSEQUENCESPatie <cit.> pointed out that the Ornstein-Uhlenbeck type processes U and V are related by a simple time-substitution. We shall see here that the same transformation, now applied to the stationaryprocess Ṽ, yields a stationary version Ũ of U, and then draw some consequences of this construction. Introducethe additive functionalA(t)∫_0^ts/Ṽ(s)=lnĨ(t) - lnĨ(0),t∈;clearly A: → is bijective and we denote the inverse bijection by T. Observe that A(T(t))=t yieldsthe useful identity∫_-∞^T(t)exp(ξ̃_s) s= Ĩ(0) ^t for all t∈. We also define a measure ν on (0,∞) by⟨ν,f⟩=∫_(0,∞)1/xf(1/x)μ( x), and furtherintroduce an equivalent sigma-finite measure on the underlying probability space (Ω, 𝒜, )by(Λ) = (1/Ṽ(0)1_Λ), Λ∈𝒜.Note that (f(1/Ṽ(0)))= ⟨ν,f⟩for every measurable f: (0,∞)→_+.(i) The measure(respectively, ν) is finite if and only if ξ_1∈ L^1(), and in that case, (Ω)=ν((0,∞))=(ξ_1). (ii) Under , Ũ(t) 1/Ṽ(T(t)),t∈is a stationary and ergodic Markov process, with one-dimensional marginal ν and the same semigroup as the Ornstein-Uhlenbeck type process U defined in (<ref>). (iii) For all functions f,g∈ L^1(ν) with ⟨ν,g⟩≠ 0, we havelim_t→∞∫_0^t f(Ũ(s)) s/∫_0^t g(Ũ(s)) s= ⟨ν,f⟩/⟨ν,g⟩-a.s. and therefore also -a.s.(i) Recall that Ṽ(0)=Ĩ(0)=Î, so (Ω)=ν((0,∞))=(1/Î). When ξ_1∈ L^1(), Equation (3) in <cit.> gives (1/Î)=(ξ_1).Next, suppose that ξ_1^-∈ L^1() and ξ_1^+∉ L^1(), that is the mean (ξ_1) exists and is infinite.We can construct by truncation of the large jumps of ξ, an increasing sequence (ξ^(n))_n∈ of Lévy processes such that ξ^(n)_1∈ L^1() with (ξ^(n)_1)>0 and lim_n→∞ξ^(n)_t=ξ_t for all t≥ 0 a.s. In the obvious notation,Î^(n) decreases to Îas n→∞, and lim_n→∞(ξ^(n)_1)=∞. We conclude by monotone convergence that (1/Î)=∞.Finally, suppose that both ξ_1^-∉ L^1() and ξ_1^+∉ L^1(), so the mean of ξ_1 is undefined. Equivalently, in terms of the Lévy measure, say Π, of ξ, we have ∫_(-∞,-1)|x| Π( x)= ∫_(1,∞) x Π( x)= ∞ ,see Theorem 25.3 in Sato <cit.>. Using Erickson's test characterizing Lévy processes which drift to ∞ when the mean is undefined (see Theorem 15 in Doney <cit.>), it is easy to decompose ξinto the sum ξ=ξ'+η of two independent Lévy processes, such that ξ' is a Lévy process with infinite mean andη is a compound Poisson process with undefined mean that drifts to ∞. The event Λ{η_t≥ 0for all t≥ 0} has a positiveprobability (because η is compound Poisson and drifts to ∞). On that event, we haveξ≥ξ' and thus also, in the obvious notation, Î≤Î'. This yields (1/Î, Λ) ≥(1/Î' )(Λ),and we have see above that the first term in the product is infinite. We conclude that (1/Î)=∞. (ii) It is convenient to view now Ω as the space of càdlàg paths ω: → (0,∞) endowed with the usual shift automorphisms (θ_t)_ t∈, i.e.θ_t(ω)=ω(t+·), andas the law of Ṽ.We have seen in Theorem <ref>(i) thatis (θ_t)-ergodic. General resultsdue to Maruyama and Totoki on time changes of flows based on additive functionals show that the measureis invariant for the time-changed flowof automorphisms (θ'_t)_ t∈, where θ'_t(ω)ω(T(t)+·)). See Theorems 4.1(iii) and 4.2 in <cit.>. Further, ergodicity is always preserved by such time substitutions, see Theorem 5.1 in <cit.>. 
This shows that (Ṽ(T(t)))_t≥ 0 is a stationary ergodic process under .On the other hand,time substitution based on an additive functional also preserves the strong Markov property, so (Ṽ(T(t)))_t≥ 0 is a Markov process under .By stationarity, (Ṽ(T(t)))_t∈ is Markov too. Composing with the inversion x↦ 1/x, we conclude that Ũ is a stationary and ergodic Markov process under . It remains to determine the semigroup of Ũ, and for this, we simply recall from Theorem 1.4 of Patie <cit.>that the processes U and V can be related by the same time-substitution as that relating Ũ and Ṽ. As a consequence, Ũ and U have the same semigroup.(iii) Under , this is a consequence of (ii) and Hopf's ratio ergodic theorem. See also Lemma 5.1 in <cit.>. The measuresandbeing equivalent, the statement of convergence also holds -a.s. We mention that, alternatively, this can also be deduced from Birkhoff's ergodic theorem for Ṽ (Theorem <ref>(ii)) by change of variables. (i) In the case ξ_1∈ L^1(), Theorem <ref>(i-ii) agrees with the results in <cit.>; the arguments in the present work are however much simpler. We stress that one should not conclude from Theorem <ref>(i-ii) that U(t) then converges in distribution to the normalized version of ν. Actually this fails when the Lévy process is lattice-valued (i.e. ξ_t∈ r a.s. for some r>0, think for instance of the case when ξ is a Poisson process), because then the Ornstein-Uhlenbeck typeprocess U is periodic.(ii)Inverting the transformation à la Ornstein-Uhlenbeck incites us to setX̃ (t) tŨ(ln t)= t/Ṽ(T(ln t)),t>0,and the calculation in the proof ofTheorem <ref>(ii)yields the expression à la LampertiX̃(t)exp(ξ̃_τ̃(t)),with τ̃: (0,∞)→ the inverse of the exponential functional Ĩ. Theorem <ref>(ii) entails that under , X̃is a self-similar version X. We refer to <cit.> for an alternative similar construction which does not require working under an equivalent measure.(iii) If we write 𝒢 for the infinitesimal generator of the Feller process V, then the stationary of the law μ is is characterized by the identity ⟨μ,𝒢f⟩=0 for every f in the domain of 𝒢. Informally[The application of Volkonskii's formula is not legitimate, since the function x↦ 1/x is not bounded away from 0.], according to a formula of Volkonskii (see (III.21.6) in <cit.>), the infinitesimal generator 𝒢' of the time-changed process V∘ T is given by 𝒢' f(x)=x 𝒢f(x), so the measure μ'( x)x^-1μ( x) fulfills⟨μ',𝒢'f⟩=0 for every f in the domain of 𝒢, and thus should be invariant for the time-changed process V∘ T. We then recover the assertion that ν is invariant for Ũ = 1/(Ṽ∘ T).We conclude this section by discussing recurrence.Recall firstthat the support of the stationary law μ of the generalized Ornstein-Uhlenbeck process V is always an interval, say ℐ; seeHaas and Rivero <cit.> orLemma 2.1 in <cit.>. More precisely, excluding implicitly the degenerate case when ξ is a pure drift,ℐ=[0,1/b] if ξ is a non-deterministic subordinator with drift b>0,ℐ=[1/b, ∞) if ξ is non-deterministic and of finite variation Lévy process with no positive jumps and drift b > 0, andℐ=[0, ∞) in the remaining cases. Writing ℐ^o for the interior of ℐ, it is further readily checked that V(t) ∈ℐ^o for all t≥ 0 a.s. whenever V(0) ∈ℐ^o. 
The Ornstein-Uhlenbeck type process U is topologically recurrent, in the sense that for every x>0 with 1/x∈ℐ^o, U visits every neighborhood of x a.s., no matter its initial value U(0).It follows from (<ref>) and the Portmanteau theorem that every point x∈ℐ^o is topologically recurrent for the generalized Ornstein-Uhlenbeck process V. Plainly, this property is preserved by time-substitution.§ ON THE DARLING-KAC THEOREMWe assume throughout this section that ξ_1∉L^1(), so ν (and also ) is an infinite measure. Aaronson's ergodic theorem (see, e.g. Theorem 2.4.2 in <cit.>) states that for every f∈ L^1(ν), f≥ 0, and every potential normalizing function a: _+→ (0,∞), one always have eitherlim sup_t→∞1/a(t)∫_0^t f(Ũ(s)) s = ∞a.s.or lim inf_t→∞1/a(t)∫_0^t f(Ũ(s)) s = 0 a.s. Without further mention, we shall henceforth implicitly work under the probability measure , andsay that a family (Y(t))_t>0 of random variables has a non-degenerate limit in distribution as t→∞if Y(t) converges in law towards some not a.s. constant random variable. Motivated by the famous Darling-Kac's theorem, the purpose of this section is to provide an explicit necessary and sufficient condition in terms of the Lévy process ξ for the existence of a normalizing function a: _+→ (0,∞) such thatthe normalized occupation measure of U converges in distribution as t→∞ to a non-degenerate limit. We start with the following simple observation. The following assertions are equivalent (i) For every f∈ L^1(ν) with⟨ν,f⟩≠0,1/a(t)∫_0^t f(Ũ(s)) s,t>0has a non-degenerate limit in distribution as t→∞.(ii) (T(t)/a(t))_t>0 has anon-degenerate limit in distribution as t→∞.Note that the identity function g(x)≡ 1/x always belongs to L^1(ν), actually with ⟨ν, g⟩ =1, and ∫_0^ts/Ũ(s)=T(t). The claim thus follows from Hopf's ratio ergodic theorem (Theorem <ref>(iii)) combined with Slutsky's theorem.For the sake of simplicity, we shall focus on the case when the sought normalizing function a:(0,∞)→ (0, ∞) is regularly varying at ∞ with index α∈(0,1). Recall from the Darling-Kac theorem (Theorem 8.11.3 in <cit.>) that this is essentially the only situation in which interesting asymptotic behaviors can occur. Recall also from Theorem 1.5.12 in <cit.> that a then possesses an asymptotic inverse b: (0,∞)→ (0,∞),in the sense that a(b(t))∼ b(a(t)) ∼ t as t→∞, such that b is regularly varying at ∞ with index 1/α.We may now state the main result of this work, which specifies the Darling-Kac theorem for Ornstein-Uhlenbeck type processes. The following assertions are equivalent: (i) b(t)^-1ξ_t has a non-degenerate limit in distribution as t→∞.(ii) Let f∈ L^1(ν) with⟨ν,f⟩≠0.Then 1/a(t)∫_0^t f(Ũ(s)) s,t>0has a non-degenerate limit in distribution as t→∞.In that case, the limit in (i) is a positive α-stable variable, say σ, with(exp(-λσ))=exp(-cλ^α)for some c>0, and the limit in (ii) has the law of⟨ν,f⟩σ^-α (and is thus proportional to a Mittag-Leffer variable with parameter α).In the case when ξ is a subordinator,Caballero and Rivero proved that the assertion (i) in Theorem <ref> is equivalent to the assertion (i) of Lemma <ref> with the weak limit there given by a Mittag-Leffler distribution; see Proposition 2 in <cit.>. Thus in that special case, Theorem <ref> follows directly fromProposition 2 in <cit.> and the present Lemma <ref>. Assume (i); it is well-known that the non-degenerate weak limitσ of b(t)^-1ξ_t is an α-stable variable, which is necessarily positive a.s. since ξ drifts to ∞. Recall that A(t)=lnĨ(t)-lnĨ(0) and writeξ_t = lnĨ(t) -lnṼ(t). 
We deduce from the stationarity of Ṽ andSlutsky's theorem that there is the weak convergeb(t)^-1A(t) ⟹σas t→∞.Using the assumption that b is an asymptotic inverse of a and recalling that b is regularly varying with index 1/α, this entails by a standard argument thata(t)^-1T(t) ⟹σ^-αas t→∞,and we conclude from Lemma <ref> (it is well-known that σ^-α is proportional to a Mittag-Leffler variable with parameter α; see for instance Exercise 4.19 in Chaumont and Yor <cit.>).Conversely, if (ii) holds for some f∈ L^1(ν) with⟨ν,f⟩≠0, then by Hopf's ergodic theorem and Lemma <ref>, a(t)^-1T(t) ⟹ Gas t→∞,for some non-degenerate random variable G. The same argument as above yieldsb(t)^-1ξ_t ⟹ G^-1/αas t→∞,and G^-1/α has to be a positive α-stable variable. More precisely,the argument of the proof shows thatwhen (i) is satisfied,the weak convergence in (ii) holds independently of the initial value Ũ(0). That is, equivalently, one may replace Ũ by U, the starting point U(0) being arbitrary. 2mm Acknowledgment: I would like to thank VíctorRivero for pointing at important references which I missed in the first draft of this work.plain
http://arxiv.org/abs/1706.08421v2
{ "authors": [ "Jean Bertoin" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20170626145500", "title": "Ergodic aspects of some Ornstein-Uhlenbeck type processes related to Lévy processes" }
A Pig, an Angel and a Cactus Walk Into a Blender: A Descriptive Approach to Visual Blending

A descriptive approach for automatic generation of visual blends is presented. The implemented system, the Blender, is composed of two components: the Mapper and the Visual Blender. The approach uses structured visual representations along with sets of visual relations which describe how the elements – in which the visual representation can be decomposed – relate among each other. Our system is a hybrid blender, as the blending process starts at the Mapper (conceptual level) and ends at the Visual Blender (visual representation level). The experimental results show that the Blender is able to create analogies from input mental spaces and produce well-composed blends, which follow the rules imposed by its base-analogy and its relations. The resulting blends are visually interesting and some can be considered as unexpected.

Cite as: Cunha, J. M.; Gonçalves, J.; Martins, P.; Machado, P.; and Cardoso, A. 2017. A pig, an angel and a cactus walk into a blender: A descriptive approach to visual blending. In: Proceedings of the Eighth International Conference on Computational Creativity (ICCC 2017).

§ INTRODUCTION

Conceptual Blending (CB) theory is a cognitive framework proposed by Fauconnier and Turner fauconnier2002 as an attempt to explain the creation of meaning and insight. CB consists in integrating two or more mental spaces in order to produce a new one, the blend(ed) space. Here, mental space means a temporary knowledge structure created for the purpose of local understanding <cit.>.

Visual blending, which draws inspiration from CB theory, is a relatively common technique used in Computational Creativity to generate creative artefacts in the visual domain. While some of the works are explicitly based on Conceptual Blending theory, as blending occurs at a conceptual level, other approaches generate blends only at a representation/instance level by means of, for example, image processing techniques.

We present a system for automatic generation of visual blends (Blender), which is divided into two different parts: the Mapper and the Visual Blender. We follow a descriptive approach in which a visual representation for a given concept is constructed as a well-structured object (from here onwards, when we use the term representation we are referring to visual representations). The object can contain other objects and has a list of descriptive relations, which describe how the object relates to others. The relations describe how the representation is constructed (example: part A inside part B). In our opinion, this approach allows an easier blending process and contributes to the overall sense of cohesion among the parts.

Our system can be seen as a hybrid blender, as the blending process starts at the conceptual level (which occurs in the Mapper) and only ends at the visual representation level (which occurs in the Visual Blender). We use an evolutionary engine based on a Genetic Algorithm, in which each population corresponds to a different analogy and each individual is a visual blend. The evolution is guided by a fitness function that assesses the quality of each blend based on the satisfied relations.
In the scope of this work, the focus is given to the Visual Blender.§ RELATED WORK In terms of the type of rendering, current computational approaches to visual blending can be divided into two groups: the ones which attempt to blend pictures or photorealistic renderings; and the ones that focus on non-photorealistic representations, such as pictograms or icons.The Boat-House Visual Blending Experience <cit.> is, to the best of our knowledge, one of the earliest attempts to computationally produce visual blends. The work was motivated by the need to interpret and visualize blends produced by a preliminary version of the Divago framework, which is one of the first artificial creative systems based on CB theory <cit.>. In addition to a declarative description of the concepts via rules and concept maps (i.e., graphs representing binary relations between concepts), Pereira and Cardoso also considered a domain of instances, which were drawn using a Logo-like programming language. To test the system, the authors performed several experiments with the house and boat blend <cit.> considering different instances for the input spaces.Ribeiro et al. ribeiro2003 explored the use of the Divago framework in procedural content generation. In this work, the role of Divago was to produce novel creatures at a conceptual level from a set of existing ones. Then, a 3D interpreter was used to visualize the objects. The interpreter was able to convert concept maps from Divago, representing creatures, into Wavefront OBJ files that could be rendered afterwards. Steinbrück steinbruck2013conceptual introduced a framework that formalises the process of CB while applying it to the visual domain. The framework is composed of five modules that combine image processing techniques with gathering semantic knowledge about the concept depicted in an image with the help of ontologies. Elements of the image are replaced with other unexpected elements of similar shape (for example, round medical tablets are replaced with pictures of a globe). Confalonieri et al. confalonieri2015 proposed a discursive approach to evaluate the quality of blends (although there is no evidence of an implementation). The main idea was to use Lakatosian argumentative dialogue <cit.> to iteratively construct valuable and novel blends as opposed to a strictly combinatorial approach. To exemplify the argumentativeapproach, the authors focused onicon design by introducing a semiotic system for modelling computer icons. Since icons can be considered as a combination of signs that can convey multiple intended meanings to the icon, Confalonieri et al. proposed argumentation to evaluate and refine the quality of the icons. Xiao and Linkola Xiao2015 proposed Vismantic, a semi-automatic system aimed at producing visual compositions to express specific meanings, namely the ones of abstract concepts. Their system is based on three binary image operations (juxtaposition, replacement and fusion), which are the basic operations to represent visual metaphors <cit.>. For example, Vismantic represents the slogan Electricity is green as an image of an electric light bulb where the wire filament and screw base are fused with an image of green leaves. The selection of images as well as the application of the visual operations require user's intervention.Correia et al. correia2016x proposed X-Faces, which can be seen as a data augmentation technique to autonomously generate new faces out of existing ones. 
Elementary parts of the faces, such as eyes, nose or mouth, are recombined by means of evolutionary algorithms and computer vision techniques. The X-Faces framework generates unexpected, yet realistic, faces by exploiting the shortcomings and vulnerabilities of computational face detectors to promote the evolution of faces that are not recognised as such by these systems. Recent works such as DeepStyle <cit.> can also be seen as a form of visual blending. DeepStyle is based on a deep neural network that has the ability to separate image content from certain aspects of style, allowing it to recombine the content of an arbitrary image with a given rendering style (style transfer). The system is known for mimicking features of different painting styles. Several other authors have seen the potential of deep neural networks for tasks related to visual blending <cit.>. For instance, Berov and Kühnberger berov2016visual proposed a computational model of visual hallucination based on deep neural networks. To some extent, the creations of this system can be seen as visual blends.

§ THE APPROACH

Taking the organization of mental spaces as an inspiration, we follow a similar approach to structure the construction of the visual representations, which are considered as a group of several parts/elements. By focusing on the parts instead of the whole, something extra stands out: not only are the parts given importance, but the representation also ceases to be a whole and starts to be seen as parts related to each other. As our goal is to produce visual results, these relations have a visually descriptive nature (i.e. the nature of the relation between two elements is related either to their relative position or to their visual qualities). This allows the generation of visual blends, guided and evaluated by criteria imposed by the relations present in the base-representations (see Fig. <ref>) used in the visual blend production. In addition, by using a representation style that consists of basic shapes, we reduce the concept to its simplest form, maintaining its most important features and thus, hopefully, capturing its essence (a similar process can be seen in Picasso's The Bull, a set of eleven lithographs produced in 1945). As such, our approach can be classified as belonging to the group of non-photorealistic visual blending. This simplification of concepts is inspired by several attempts to produce a universal language, understandable by everyone – such as the pictographic ISOTYPE by Otto Neurath neurath1936international or the symbolic Blissymbolics by Charles Bliss bliss1965semantography. As already mentioned, our main idea is centered on the fact that the construction of a visual representation for a given concept can be approached in a structured way. Each representation is associated with a list of descriptive relations (e.g. part A below part B), which describes how the representation is constructed. Due to this, a visual blend between two representations is not simply a replacement of parts: its quality is assessed based on the number of relations that are respected. This gives much more flexibility to the construction of representations, by presenting one version of a representation while also allowing the generation of similar ones, if needed. The initial idea involved only one representation for each concept. However, a given concept has several possible visual representations (e.g.
there are several possible ways of visually representing the concept car), which means that using only one would make the system very limited. In order to avoid biased results, we decided to use several versions for each concept. Each visual representation can be different (varying in terms of style, complexity, number of characteristics and even chosen perspective) and thus also have a different set of visual relations among the parts. In comparison to the systems described in the previous section, we follow a different approach to the generation of visual blends by implementing a hybrid system and giving great importance to the parts and their relations – aspects that tend to be overlooked by the majority of the reviewed works, in which an unguided replacement of parts often leads to a lack of cohesion among them. This approach allows us not only to assess the quality of the blends and guide evolution but also to easily generate similar (and also valid) blends based on a set of relations.

§.§ Collecting data

The initial phase of the project consisted of a process of data collection. Firstly, a list of possible concepts was produced by collecting concepts already used in the conceptual blending field of research. From this list, three concepts were selected based on their characteristics: angel (human-like), pig (animal) and cactus (plant) – collected from Costello:CC_is_not_Structure_Alignment01. The goal of this phase was to collect visual representations for these concepts. An enquiry to collect the desired data was designed, composed of five tasks:

T1 Collection of visual representations for the selected concepts;
T2 Identification of the representational elements;
T3 Description of the relations among the identified elements;
T4 Identification of the prototypical elements – i.e. the element(s) that most identify a given concept <cit.>. For instance, for the concept pig most participants considered nose and tail as the prototypical elements;
T5 Collection of visual blends for the selected concepts.

The data was collected from nine participants who were asked to complete the required tasks. In the first task (T1), the participants were asked to draw a representation for each concept, avoiding unnecessary complexity but still representing the most important elements of the concept. In order to achieve intelligible and relatively simple representations, it was suggested that the participants use primitives such as lines, ellipses, triangles and quadrilaterals as the basis for their drawings. After completing the first version, a second one was requested. The reason for two versions was to promote diversity. In the second task (T2), the participants identified the elements drawn using their own terms (for example, for the concept angel some of the identified elements were head, halo, legs). After completing the previous task, the participants were asked to identify the relations among elements that they considered essential (T3). These relations were not only related to the conceptual space but also (and mostly) to the representation. In order to help the participants, a list of relations was provided. Despite being told that the list was only to be considered as an example and not to be seen as closed, all the participants used the relations provided – this ensured semantic sharing between participants. Some participants suggested other relations that were not on the list – these contributions were well received.
The identified relations depend on the author's interpretation of the concept, which can be divided into two levels. The first level is related to how the author interprets the connections among the concepts of the parts at a conceptual level (for example car, wheel or trunk). The second level is related to the visual representation being considered: different visual representations may have different relations among the same parts (this can be caused, for example, by a change of perspective or style) – e.g. the different positioning of the head in the two pig representations in Fig. <ref>. Task four (T4) consisted of identifying the prototypical parts of the representations – the parts which most identify the concept <cit.>. These will be used for interpreting the results obtained and for later developments. In the last task of the enquiry (T5), the participants were asked to draw representations for the blends between the three concepts. A blend between two concepts can be interpreted, and subsequently represented, in different ways (e.g. just at the naming level, a blend between pig and cactus can be interpreted differently depending on whether its name is pig-cactus or cactus-pig). For this reason, the participants were asked to draw one or more visual representations for each blend. These visual representations were later used for comparison with the results obtained with the Visual Blender.

§.§ Post-enquiry

After the enquiry was conducted, the data was processed in order to be used by the Visual Blender. Firstly, the representations collected for each of the concepts were converted into fully scalable vector graphics (see Fig. <ref>) and prepared to be used as base visual representations (see Fig. <ref>) for the Visual Blender (using layer naming according to the data collected for each representation – each layer was named after its identified part). In addition to this, the relations among parts were formatted to be used as input together with their corresponding representation.

§ THE VISUAL BLENDER

As already mentioned, the Blender has two different components: the Mapper and the Visual Blender (see Fig. <ref>). The Mapper receives two input spaces (represented as 1 in Fig. <ref>), one referring to concept A and the other to concept B. It produces analogies (3 in Fig. <ref>) that are afterwards used by the Visual Blender component. The Visual Blender also receives visual representations and the corresponding lists of relations among parts (2 in Fig. <ref>) that are used as a base and data for producing the visual blends (4 in Fig. <ref>). As this paper is focused on the Visual Blender component, the Mapper is only briefly described (subsection Generating the blends: structural mapping). Despite being related, the two components have different implementation details (e.g. object structure).

§.§ Generating the blends: structural mapping

In Conceptual Blending theory, after the selection of input spaces, the subsequent step is to perform a partial matching between elements of the given mental spaces. This can be seen as establishing an analogy between the two inputs. Our input spaces are in the form of semantic maps composed of N_c concepts and N_t triples, with N_t, N_c ∈ ℕ. The triples are in the form <concept_0, relation, concept_1>. Each concept corresponds to a vertex in a generic graph and the relation represents a directed edge connecting both concepts. The Mapper iterates through all possible root mappings, each composed of two distinct concepts taken from the input spaces.
This means that there is a total of N_c(N_c - 1)/2 iterations. Then, the algorithm extracts two isomorphic sub-graphs from the larger input space. The two sub-graphs are split into two sets of vertices, A (left) and B (right). The structural isomorphism is defined by the sequence of relation types (pw, isa, ...) found in both sub-graphs. Starting at the root mapping defined by two (left and right) concepts, the isomorphic sub-graphs are extracted from the larger semantic structure (the input spaces) by executing two synchronised expansions of nearby concepts at increasing depths. The first expansion starts from the left concept and the second from the right concept. The left expansion is done recursively in the form of a depth-first expansion and the right as a breadth-first expansion. The synchronisation is controlled by two mechanisms:

* the depth of the expansion, which is related to the number of relations reached by each expansion, starting at either concept from the root mapping;
* the label used for selecting the same relation to be expanded next in both sub-graphs.

Both left (depth-first) and right (breadth-first) expansions are always synchronized at the same depth (the first mechanism above). While expanding, the algorithm stores additional associations between each matched relation and the corresponding concept which was reached through that relation. In practice, a multitude of isomorphisms is likely to occur. In that case, the algorithm will store various mappings from any given concept to multiple different concepts, as long as the same concepts were reached from a previous concept with the same relation. In the end, each isomorphism and the corresponding set of concept mappings gives rise to an analogy. The output of the Mapper component is a list of analogies with the greatest number of mappings.

§.§ Generating the blends: construction and relations

The Visual Blender component uses structured base-representations (of the input concepts) along with their sets of relations among parts to produce visual blends based on analogies (mappings) produced by the Mapper component. The way of structuring the representations is based on the Syntactic decomposition of graphic representations proposed by von Engelhardt engelhardt2002, in which a composite graphic object consists of: a graphic space (occupied by the object); a set of graphic objects (which may also be composite graphic objects); and a set of graphic relations (which may be object-to-space and/or object-to-object). The objects store several attributes: name, shape, position relative to the father-object (which has the object in its set of graphic objects), the set of relations to other objects and the set of child-objects. With such a structure, the complexity of blending two base representations is reduced, as it facilitates object exchange and recursive changes (by moving an object, its child-objects are also easily moved). A relation between two objects consists of: the object A, the object B and the type of relation (above, lowerPart, inside, ...) – e.g.
eye (A) inside head (B).

§.§ Generating the blends: visual blending

The Visual Blender receives the analogies between two given concepts produced by the Mapper component, and the blending step occurs during the production of the visual representation – unlike what happens in The Boat-House Visual Blending Experience <cit.>, in which the blends are merely interpreted at the visual representation level. The part of the blending process that occurs at the Visual Blender produces visual representations as output and consists of five steps:

S1 An analogy is selected from the set of analogies provided by the Mapper;
S2 One of the concepts (either A or B) is chosen as a base (consider A as the chosen one, as an example);
S3 A visual representation (rA) is chosen for the concept A and a visual representation (rB) is chosen for the concept B;
S4 Parts of rA are replaced by parts of rB based on the analogy. For each mapping of the analogy – consider for example that leg of A corresponds to arm of B – the following steps occur:
S4.1 The parts from rA that correspond to the element in the mapping (e.g. leg) are searched for using the names of the objects. In the current example, the parts found could be left_leg (left_ is a prefix), right_leg_1 (right_ is a prefix and _1 a suffix) or even leftfront_leg;
S4.2 For each of the parts found in S4.1, a matching part is searched for in rB using the names of the objects. This search firstly looks for objects that match the full name, including the prefix and suffix (e.g. right_arm_1) and, if none is found, searches only using the name in the mapping (e.g. arm). It avoids plural objects (e.g. arms). If no part is found, it proceeds to step S4.4;
S4.3 The found part (pA) of rA is replaced by the matching part (pB) of rB, updating the relative positions of pB and its child-objects, and the relations (i.e. relations that used to belong to pA now point to pB);
S4.4 A process of Composition occurs (see examples in Fig. <ref> – the tail and the belly / round shape in the triangular body are obtained using composition). For each of the matching parts from rB (even if the replacement does not occur), a search is done for parts from rB that have a relation with pB (for example, a found part could be hand). It only accepts a part if rA does not have a part with the same name and if the analogy used does not have a mapping for it. If a found part matches these criteria, a composition can occur by copying the part to rA (in our example, depending on whether the replacement in step S4.3 occurred or not, rA would have hand related either to arm or to leg, respectively);
S5 The rA resulting from the previous steps is checked for inconsistencies (both in terms of relative positioning and obsolete relations – which can occur if an object no longer exists due to a replacement).

After generating a representation, its similarity to the base representations (rA and rB) is assessed to avoid producing representations visually equal to them. This assessment is done using a Root Mean Square Error (RMSE) measure that checks the similarity on a pixel-by-pixel basis.

§.§ Evolutionary Engine

The main goal of the Visual Blender component is to produce and evolve possible visual blends based on the analogies produced by the Mapper. In order to achieve this and promote diversity while respecting each analogy, an evolutionary engine was implemented.
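Before the engine is described, the following hypothetical sketch illustrates the relation-based scoring that guides it (the fitness function is defined formally below). The helper names find, position, points and contains are assumptions made for illustration only, building on the Relation/VisualObject sketch given earlier; the coordinate convention (y growing downwards) is also an assumption.

    # v() returns a degree of satisfaction in [0, 1] for a single relation.
    def v(relation, blend):
        a = blend.find(relation.subject)
        b = blend.find(relation.target)
        if relation.kind == "above":        # binary assessment: respected or not
            return 1.0 if a.position[1] < b.position[1] else 0.0
        if relation.kind == "inside":       # graded: fraction of points inside
            pts = a.points()
            return sum(b.contains(p) for p in pts) / len(pts)
        return 0.0                          # unhandled relation types score 0

    # Fitness of a blend: average satisfaction over all of its relations.
    def fitness(blend):
        return sum(v(r, blend) for r in blend.relations) / len(blend.relations)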
This engine is based on a Genetic Algorithm (GA) using several populations (each corresponding to a different analogy), in which each individual is a visual blend. In order to guide evolution, we adopt a fitness function that assesses how well the existing relations are respected. Some of the relations, e.g. the relation above, have a binary assessment – either 0, when the relation is not respected, or 1 when it is respected. Others yield a value between 0 and 1 depending on how well they are respected – e.g. the relation inside calculates the number of points that are inside and returns #PointsInside / #TotalPoints. The fitness function for a given visual blend b is as follows:

$$f(b) = \frac{1}{\#R(b)} \sum_{i=1}^{\#R(b)} v(r_i(b)),$$

where #R(b) denotes the number of relations present in b and v is the function with values in [0,1] that indicates how well a relation r is respected (0 – not respected at all, 1 – fully respected). The evolutionary engine includes five tasks which are performed in each generation for each population:

T1 Produce more individuals when the population size is below the maximum size;
T2 Store the best individual to avoid losing it (elitism);
T3 Mutate the individuals of the population. For each individual, each object can be mutated by changing its position. This change also affects its child-objects;
T4 Recombine the individuals: the parents are chosen using tournament selection (with size 2) and an N-point crossover is used to produce the children. In order to avoid the generation of invalid individuals, the crossover only occurs between chromosomes (objects) with the same name (e.g. a head is only exchanged with a head). If this rule were not used, it would lead to the production of descendants that would not respect the analogy followed by the population;
T5 Removal of identical individuals in order to increase variability.

In the experiments reported in this paper, the mutation probability was set to 0.05, per gene, and the recombination probability to 0.2, per individual. These values were established empirically in preliminary runs.

§ RESULTS AND DISCUSSION

In this section we present and discuss the experimental results. We begin with a general analysis. Afterwards, we analyse the resulting visual representations, comparing them with the data collected in the initial enquiry. Then, we analyse the quality of the produced blends by presenting the results of a final enquiry focused on perception. Overall, the analysis of the experimental results indicates that the implemented blender is able to produce sets of blends with great variability (see Fig. <ref> for an example of the results obtained for the same analogy and the same relations) and unexpected features, while respecting the analogy. The evolutionary engine is capable of evolving the blends towards a higher number of satisfied relations. This is verifiable in numerical terms, through the analysis of the evolution of fitness, and also through the visual assessment of the results. Figure <ref> illustrates the evolution of a blend: the legs and tail are iteratively moved towards the body in order to increase the degree of satisfaction of the relations. We can also observe that the system tends to produce blends in which few parts are exchanged between concepts. This can be explained as follows: as the number of exchanged parts increases, the difficulty of (randomly) producing a blend with adequate fitness drastically increases.
As such, blends with fewer exchanges of parts, thus closer to a base representation (in which all the relations are satisfied), tend to become dominant during the initial generations of the evolutionary runs. We consider that a significantly higher number of runs would be necessary to produce blends with more exchanges. Furthermore, valuing the exchange of parts, through a modification of the fitness function, may also be advisable for promoting the emergence of such blends. As the blends are produced as visual representations that work as a whole as well as a set of individual parts, the Principle of Integration – one of the Optimality Principles presented by Fauconnier and Turner FauconnierTurner:98 – is respected by design.

§.§ Comparison with user-drawn blends

During the initial phase of the project, we conducted a task of collecting visual blends drawn by the participants. A total of 39 drawn blends were collected, of which 14 correspond to the blend between cactus and angel, 12 correspond to the blend between cactus and pig and 13 correspond to the blend between pig and angel. The implemented blender was able to produce visual blends similar to the ones drawn by the participants (see some examples in Fig. <ref>). After analysing the produced blends, the following results were obtained:

* 23 of the 39 drawn blends (DB) were produced by our Blender;
* 2 could not be produced due to inconsistencies (e.g. one drawn blend from angel-pig used a mapping from wing-tail and at the same time maintained the wings);
* 6 could not be produced in the current version due to mappings that were not produced by the Mapper (e.g. head from angel with body from cactus);
* 5 could not be produced because not all of the collected drawn representations were used in the experiments.

According to the aforementioned results, the implemented Blender is not only able to produce blends that are coherent with the ones drawn by participants but is also able to produce novel blends that no participant drew, showing creative behaviour.

§.§ Evaluating perception

In order to assess whether the produced blends could be correctly perceived, a second enquiry was conducted. The main goal was to evaluate whether or not the participant could identify the input spaces used for each blend (i.e. whether it was possible to identify pig and cactus in a blend produced for pig-cactus). This is related to the Unpacking Principle <cit.>. In the first enquiry, the fourth task (T4) consisted of collecting the prototypical parts for each concept – these are the parts that most identify the concept (e.g. wing for angel). We used the collected data to produce the second enquiry. For each blend (angel-pig, cactus-pig or angel-cactus), four visual blends were selected (two considered “good” and two considered “bad”, see Fig. <ref>). The quality evaluation (“bad” or “good”) was based on two criteria: the fitness of the individual and the presence or legibility of the prototypical parts (i.e. a “good” exemplar is an individual with the prototypical parts clearly identifiable; a “bad” exemplar is an individual with fewer prototypical parts or with prototypical parts that are not clearly identifiable). A total of 12 visual blends were used and the enquiry was administered to 30 participants. Each visual blend was tested by 5 participants. In order to minimise bias in the results, each participant evaluated two visual representations (one “bad” and one “good”) of different blends (e.g.
when the first was of cactus-pig, the second could only be of angel-pig or angel-cactus). The “bad” blends were evaluated first to further minimise bias. The results (Table <ref> and Table <ref>) clearly show that the “good” blends were easier to name correctly (the percentage of totally correct naming is always higher for the “good” examples; the percentage of totally incorrect naming is always higher for the “bad” blends). In addition to this, the names of the input spaces were also easier to identify in some of the representations than in others (e.g. the “good” blends for angel-pig received more totally correct answers than the rest of the blends, as shown in Table <ref>). Overall, the majority of the participants could identify at least one of the input spaces for the “good” exemplars of visual blends. Even though some of the participants could not correctly name both of the input spaces, the answers given were somehow related to the correct ones (e.g. the names given for the input spaces in the first “bad” blend of 3 in Fig. <ref> were often pig and lady/woman, instead of pig and angel – this is due to the fact that neither a halo nor wings are present).

§ CONCLUSIONS AND FUTURE WORK

We presented a descriptive approach for the automatic generation of visual blends. The approach uses structured representations along with sets of visual relations which describe how the parts – into which the visual representation can be decomposed – relate to each other. The experimental results demonstrate the ability of the Blender to produce analogies from input mental spaces and generate a wide variety of visual blends based on them. The Visual Blender component, in addition to fulfilling its purpose, is able to produce interesting and unexpected blends. Future enhancements to the proposed approach include:

(i) exploring an island approach in which the exchange of individuals from different analogies may occur if they respect the analogy of the destination population;
(ii) exploring the role of the user (guided evolution), by allowing the selection of individuals to evolve;
(iii) considering Optimality Principles in the assessment of fitness (e.g. how many parts are exchanged) and exploring which of them may be useful or needed – something discussed by Martins et al. martins2016optimality;
(iv) using relations such as biggerThan or smallerThan to explore style changing (e.g. the style of the produced blends will be affected if a base visual representation has head biggerThan body);
(v) exploring context in the production of blends (e.g. stars surrounding the angel).

§ ACKNOWLEDGEMENTS

This research is partially funded by: Fundação para a Ciência e Tecnologia (FCT), Portugal, under the grant SFRH/BD/120905/2016.
{ "authors": [ "João M. Cunha", "João Gonçalves", "Pedro Martins", "Penousal Machado", "Amílcar Cardoso" ], "categories": [ "cs.AI", "cs.GR" ], "primary_category": "cs.AI", "published": "20170627233722", "title": "A Pig, an Angel and a Cactus Walk Into a Blender: A Descriptive Approach to Visual Blending" }
Linking the rotation of a rigid body to the Schrödinger equation: The quantum tennis racket effect and beyond

L. Van Damme, D. Leiner, P. Mardešić, S. J. Glaser, D. Sugny

P. Mardešić: Institut de Mathématiques de Bourgogne, UMR 5584 CNRS-Université de Bourgogne Franche-Comté, 9 Av. A. Savary, BP 47870, 21078 Dijon Cedex, France.
S. J. Glaser: Department of Chemistry, Technische Universität München, Lichtenbergstrasse 4, D-85747 Garching, Germany.
D. Sugny: Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), UMR 6303 CNRS-Université Bourgogne-Franche Comté, 9 Av. A. Savary, BP 47 870, F-21078 Dijon Cedex, France, and Institute for Advanced Study, Technische Universität München, Lichtenbergstrasse 2 a, D-85748 Garching, Germany ([email protected]).

The design of efficient and robust pulse sequences is a fundamental requirement in quantum control. Numerical methods can be used for this purpose, but with relatively little insight into the control mechanism. Here, we show that the free rotation of a classical rigid body plays a fundamental role in the control of two-level quantum systems by means of external electromagnetic pulses. For a state to state transfer, we derive a family of control fields depending upon two free parameters, which allow us to adjust the efficiency, the time and the robustness of the control process. As an illustrative example, we consider the quantum analog of the tennis racket effect, which is a geometric property of any classical rigid body. This effect is demonstrated experimentally for the control of a spin 1/2 particle by using techniques of Nuclear Magnetic Resonance. We also show that the dynamics of a rigid body can be used to implement one-qubit quantum gates. In particular, non-adiabatic geometric quantum phase gates can be realized based on the Montgomery phase of a rigid body. The robustness issue of the gates is discussed.

Quantum control is aimed at manipulating dynamical processes at microscopic scales by means of external electromagnetic fields <cit.>. Its successful experimental implementation requires robustness against parameter fluctuations and uncertainties, but also high efficiency in a sufficiently short time to avoid parasitic phenomena such as relaxation. These objectives can be viewed as a crucial prerequisite for a wide range of applications of such techniques in the emerging domain of quantum technologies <cit.>. In this setting, numerical algorithms based on optimal control theory <cit.> have been developed to realize a given task, while minimizing the control time and accounting for experimental constraints and imperfections <cit.>. In spite of its efficiency, this approach does not give a clear insight into the control mechanism, which makes it system-dependent and prevents its generalization. The physical understanding of a control process can be extracted from a geometric analysis of the dynamics <cit.>.
The geometric properties of the corresponding physical effect will ensure its robustness against experimental errors and thus its usefulness <cit.>. The richness of this geometric approach is illustrated by the Berry phase in quantum mechanics <cit.>. The discovery of the Berry phase led to an impressive amount of studies both in quantum physics and chemistry. Geometric control protocols, resilient to certain types of experimental uncertainties, were developed in quantum computing from this effect <cit.>. In this work, we propose to use the study of the free rotation of a rigid body to develop new geometric quantum control strategies. A geometric property, known as the Tennis Racket Effect (TRE) <cit.>, will be used as an illustrative example to describe this method. This phenomenon occurs in the free rotation of any three-dimensional rigid body <cit.>. It can be easily observed with a tennis racket through the following experimental protocol. We first mark the different faces of the head of the racket. We then take the racket by the handle and throw it in the air so that the handle makes a 2π rotation. After catching the handle, we observe that the head of the tennis racket has made a flip of π. This effect can be reproduced for many different rigid bodies and a large range of initial conditions, corresponding to the initial inclination and velocity of the head of the racket, thus showing its robustness, which is inherent to its geometric character. An illustration of the TRE is given in Fig. <ref>. A complete mathematical description of the TRE was given in a series of papers <cit.>. These analyses are based on the fact that the free rotation of a rigid body <cit.> is an integrable system whose trajectories can be derived analytically by using Jacobi elliptic functions <cit.>. Here, we show that the TRE, and more generally the dynamics of a rigid body, find remarkable applications in the control of two-level quantum systems <cit.>. We first obtain a family of control fields based on the TRE allowing us to manipulate the state of the system in a robust manner with respect to some experimental uncertainties. Such fields depend on two parameters that can be adjusted to change the time, the efficiency and the robustness of the control process. We introduce the concept of a quantum TRE, which is the analog of the classical motion at the quantum level, and we point out its specific quantum properties. The TRE control strategy is demonstrated experimentally on a spin 1/2 particle by using techniques of Nuclear Magnetic Resonance <cit.>. We also show that the dynamics of a rigid body allows us to design control fields to realize one-qubit quantum gates. In particular, we focus on the Montgomery phase <cit.>, a geometric feature of the free rotation of a rigid body, which leads to quantum geometric phase gates <cit.> in the non-adiabatic regime.

A formal equivalence can be established between the free rotation of a rigid body and the dynamics of a spin 1/2 particle, which are governed respectively by the Euler and the Bloch equations. The two systems of differential equations have a similar mathematical structure of the form $\dot{\vec{X}} = H(t)\vec{X}$, where the matrix H(t) can be written as follows:

$$H(t) = \begin{pmatrix} 0 & -\Omega_3 & \Omega_2 \\ \Omega_3 & 0 & -\Omega_1 \\ -\Omega_2 & \Omega_1 & 0 \end{pmatrix}.$$

The state of the system is described by the vector X⃗(t) and the Ω_i denote the angular velocities along the i-direction, i = 1, 2, 3. We refer the reader to Supp. Sec. I or to textbook references for technical details <cit.>.
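As a simple numerical illustration of this shared structure (independent of the authors' supplementary code), one can integrate the free Euler equations, anticipating the identification of X⃗ with the body-frame angular momentum made below. The moments of inertia used here are illustrative values only.

    import numpy as np
    from scipy.integrate import solve_ivp

    I = np.array([1.0, 2.0, 3.0])      # illustrative principal moments of inertia

    def euler_rhs(t, L):
        omega = L / I                  # Omega_i = L_i / I_i in the body frame
        return np.cross(L, omega)      # dL/dt = L x Omega, i.e. dX/dt = H(t) X

    # Start close to the unstable intermediate axis to observe the racket flip.
    sol = solve_ivp(euler_rhs, (0.0, 50.0), [0.01, 1.0, 0.01], rtol=1e-9)
    print(sol.y[1, -1])                # the component L_2 periodically flips sign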
The vector X⃗ can be identified with the angular momentum L⃗ of the rigid body (in the frame attached to the racket) or with the Bloch vector M⃗ of the spin (in a given rotating frame <cit.>). In the classical system, the components of L⃗ = (L_1, L_2, L_3) can be expressed in terms of the Ω_i through the principal moments of inertia I_i, L_i = I_i Ω_i (in the principal axis system of the inertia tensor), while in the quantum case, the angular velocities refer to external control fields applied along a given direction. The classical system admits two constants of motion making it Liouville integrable, namely the energy E = L_1^2/(2I_1) + L_2^2/(2I_2) + L_3^2/(2I_3) and the norm of the angular momentum, L_1^2 + L_2^2 + L_3^2, which can be set to 1. If the control fields applied to the spin are exactly equal to the angular velocities of the rigid body, then a one-to-one mapping can be defined between the trajectories of the classical and quantum objects. The moments of inertia can be viewed as additional degrees of freedom used to design control fields with specific properties (see Supp. Sec. II and III). In some experimental applications, only two external fields are available. In this limiting case, an ideal rigid body for which one of the moments of inertia goes to infinity can be considered. Note that the different geometric features of a rigid body are not modified in this limit. In the rest of the paper, we will assume that the three moments of inertia are equal to 1, 1/k^2 with k ∈ ]0,1[, and +∞.

Classical and quantum tennis racket effects. Returning to the dynamical behavior of the tennis racket, two fundamental motions can be considered (see Supp. Sec. I and IV). The first motion is associated with the angular momentum L⃗ in the reference frame of the racket. During the rotation of the racket, the angular momentum is brought from its initial position to the diametrically opposite one. The different trajectories that can be followed by L⃗ are displayed in Fig. <ref>. The classical phase space has a simple structure made of a separatrix which is the boundary between two families of trajectories: the rotating and the oscillating ones, each distributed around a stable fixed point <cit.>. In the example of Fig. <ref>, where Ω_2(t) = 0, I_1 = 1 and I_3 = 1/k^2, a transfer from the north pole (e⃗_3) to the south pole (-e⃗_3) can be achieved on the Bloch sphere by following the separatrix (see Supp. Sec. II). In the quantum mechanical setting, the control requires an infinite time to be performed and corresponds to an Allen-Eberly type pulse sequence of the form <cit.>:

$$\Omega_1(t) = \pm\frac{1}{\sqrt{1-k^2}}\,\mathrm{sech}(t+t_0), \qquad \Omega_3(t) = \pm\frac{k}{\sqrt{1-k^2}}\tanh(t+t_0),$$

where t_0 is an arbitrary constant time. The formal equivalence used in this work therefore leads to an insightful geometric interpretation of the Allen-Eberly solution as a singular trajectory of a classical rigid body (see Supp. Sec. II). In addition to this control strategy, two families of control fields can be derived from the TRE. Such solutions, called TRE pulses, correspond to the oscillating or rotating trajectories close to the separatrix, which approximately bring the system from the north to the south pole of the Bloch sphere. Each element of the two sets can be characterized by the parameter k and a small positive constant ϵ, which describes the distance of the trajectory to the separatrix. The two parameters can be chosen to adjust the efficiency, the robustness and the time of the control process. More generally, we show in Supp. Sec.
II, by considering the whole range of variations of ϵ and k, that a smooth transition can be established between π pulses of constant phase and Allen-Eberly solutions. In the case of Fig. <ref>, the rotation axis of the π pulse is associated with one of the two stable fixed points and the Allen-Eberly control with the separatrix. All the other trajectories, and in particular the TRE pulses, represent a compromise between the two solutions. To evaluate the robustness of the TRE pulse, we consider the control of an ensemble of spins with different offset frequencies δ and scaling factors α of the amplitude of the control field, the two parameters belonging to intervals fixed by the experimental setup. This description reproduces the standard experimental uncertainties due to the field inhomogeneities that can be encountered in NMR <cit.> or in quantum information processing <cit.>. In the numerical simulations, we replace the three angular velocities in Eq. (<ref>) by (1+α)Ω_1, (1+α)Ω_2 and Ω_3 + δ. We denote by t_f and J_3 = -M_3(t_f) the control time and the figure of merit of the process, respectively. The initial state is the north pole of the Bloch sphere. Figure <ref> shows the efficiency of the TRE pulse. We observe that the robustness of the process changes with the parameter k. It can be verified that this property does not depend on ϵ, for ϵ sufficiently small. The parameter ϵ predominantly affects the fidelity and the control time of the process. The analytical computations reveal that this time has a logarithmic divergence when ϵ goes to 0 (see Supp. Sec. II).

A second relevant dynamical process is associated with the motion of the frame attached to the racket with respect to the laboratory frame (x,y,z). Denoting by R(t) ∈ SO(3) the corresponding rotation matrix at a time t, whose dynamics is ruled by the equation Ṙ(t) = H(t)R(t), the final position of the racket is characterized in the ideal case by (see Supp. Sec. IV):

$$R_f = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.$$

This analogy can be interpreted as a first step towards the implementation of quantum gates, here a NOT gate. However, this transformation is less robust than that of the angular momentum because the total time of the process has to be perfectly adjusted in order to realize the 2π rotation of the handle (see below and Supp. Sec. V). Note that the racket exactly goes back to its initial position after 2 TREs. This geometric phenomenon can be extended to a purely quantum property by using the standard mapping between SO(3) and SU(2). A quantum TRE is then defined from the solution of the Schrödinger equation i dU(t)/dt = ℋ(t)U(t), where U(t) ∈ SU(2) and ℋ is the 2×2 Hamiltonian matrix with complex entries corresponding to the Hamiltonian H(t) of Eq. (<ref>), which is defined by:

$$\mathcal{H} = \frac{1}{2}\begin{pmatrix} -\Omega_3 & \Omega_1 - i\Omega_2 \\ \Omega_1 + i\Omega_2 & \Omega_3 \end{pmatrix}.$$

We observe that after one TRE, the rotation matrix is given by

$$U_f = -i\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

so that four TREs are needed for the quantum racket to come back to its initial state. A description of this quantum motion is displayed in Fig. <ref> by using the DROPS representation of the propagator U(t) <cit.>, illustrating the orientation of the effective rotation axis. While in the conventional experiment this orientation is constant (along the x-axis), in the TRE case it follows a twisted trajectory from the y- via the z- to the x-axis between t = 0 and t = T_R.

Experimental implementation. We now demonstrate, by using NMR techniques, the experimental performance of a TRE pulse to realize a state to state transfer from (0,1,0) to (0,-1,0).
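To illustrate the kind of robustness scan reported next, the following sketch integrates the Bloch equation with the Allen-Eberly-type fields quoted earlier and a field-scaling error α. All parameter values (k, t_0, the integration window, the north-pole initial state) are assumptions chosen for illustration and do not reproduce the experimental settings.

    import numpy as np
    from scipy.integrate import solve_ivp

    k, t0 = 0.5, 10.0                  # illustrative pulse parameters

    def bloch_rhs(t, M, alpha):
        w1 = (1 + alpha) / (np.sqrt(1 - k**2) * np.cosh(t - t0))
        w3 = k * np.tanh(t - t0) / np.sqrt(1 - k**2)
        H = np.array([[0.0, -w3, 0.0],     # H(t) from above with Omega_2 = 0
                      [w3, 0.0, -w1],
                      [0.0, w1, 0.0]])
        return H @ M

    for alpha in (-0.2, 0.0, 0.2):     # scan the amplitude scaling error
        sol = solve_ivp(bloch_rhs, (0.0, 2 * t0), [0.0, 0.0, 1.0],
                        args=(alpha,), rtol=1e-9)
        print(alpha, sol.y[2, -1])     # M_3 close to -1 indicates a complete transfer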
The evolution of the Bloch vector and the robustness of the shaped pulse with respect to the scaling factor α are displayed in Fig. <ref>. For the robustness experiments, we scaled the amplitude with factors ranging from α = -0.5 to α = 0.5 in 11 steps. The total pulse duration was set to 0.448 ms. The figure of merit J_2 is here defined by J_2 = -M_2(t_f). A reasonable match is found between theoretical and experimental results (see Supp. Sec. VI for a discussion of the experimental errors).

Implementation of one-qubit gates. The correspondence between the free rotation of a rigid body and the dynamics of a spin 1/2 particle also provides novel control strategies in quantum computing <cit.>. In Supp. Sec. V, we show how to implement the Hadamard gate, and more generally any one-qubit quantum gate. Here, we focus on the case of geometric quantum phase gates <cit.>, which can be realized by using a geometric feature of the free rotation of a rigid body, namely the Montgomery phase <cit.>. This phase can be defined by considering one period of the time evolution of the angular momentum L⃗ in the body-fixed frame. During this motion, the laboratory frame rotates about L⃗ by an angle, the Montgomery phase. This phase can be expressed as the sum of a dynamical and a geometric contribution <cit.>, the latter being given by S, the solid angle swept out by the angular momentum vector (see Supp. Sec. V). One of the main difficulties in realizing geometric phase gates is to find a way to cancel the dynamical contribution of the phase in order to obtain a robust control protocol. Different techniques have been proposed to date <cit.>. Geometric phase gates can be implemented in the adiabatic regime <cit.>, but it is possible to generalize this process to non-adiabatic cyclic evolution <cit.>, which is crucial to avoid decoherence effects. Only very simple motions, such as a circle on the Bloch sphere, have been proposed. Using the Montgomery phase and the dynamics of a rigid body, this idea can be considerably extended, as illustrated in Fig. <ref>. In the example of Fig. <ref>, the loop on the Bloch sphere is the concatenation of two trajectories close to the separatrices with different values of k, which are adjusted to cancel the dynamical phases (see Supp. Sec. V). From this control process, any geometric phase gate can be implemented in the non-adiabatic regime. More generally, the dynamical phase is also at the origin of the relatively low robustness of the one-qubit quantum gates realized on the basis of the dynamics of a rigid body. This property can be greatly improved by considering a generalization of the BIR pulses used in NMR <cit.>. This control strategy consists of the concatenation of two (or more) pulses chosen so that the global dynamical phase is cancelled. The example of a NOT gate is described in Supp. Sec. V.

Discussion. By using the formal analogy between the free rotation of a rigid body and the dynamics of a spin 1/2 particle, we have derived a new family of control fields able to realize either a state to state transfer or a specific unitary transformation in a two-level quantum system. As demonstrated in this paper, the derived pulses have an explicit and relatively simple form and are therefore easily implementable experimentally. Note that a Matlab code computing the trajectories of a rigid body and of the corresponding Bloch vector is provided in Supp. Sec. VII. The results of this work pave the way to other analyses using the same kind of equivalence.
The applicability of this analogy beyond simple two-level quantum systems, such as in a chain of three coupled spins <cit.>, shows the general interest of this approach. Following the method proposed in Ref. <cit.>, the control fields derived from the dynamics of a rigid body could also be used as a building block to realize a CNOT gate in this system. Another possible direction is the generalization of this study to SO(n), with n > 3, for instance in the integrable case of the Manakov top <cit.>.

References

[cat] S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. K. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, et al., Training Schrödinger's cat: Quantum optimal control, Eur. Phys. J. D 69, 279 (2015).
[Brif2010] C. Brif, R. Chakrabarti, and H. Rabitz, Control of quantum phenomena: Past, present and future, New J. Phys. 12, 075008 (2010).
[fidelitycontrol] M. G. Bason, M. Viteau, N. Malossi, P. Huillery, E. Arimondo, D. Ciampini, R. Fazio, V. Giovannetti, R. Mannella, and O. Morsch, High-fidelity quantum driving, Nature Phys. 8, 147 (2012).
[naturerobust] D. Leibfried, B. DeMarco, V. Meyer, D. Lucas, M. Barrett, J. Britton, W. M. Itano, B. Jelenkovic, C. Langer, T. Rosenband, et al., Experimental demonstration of a robust, high-fidelity geometric two ion-qubit phase gate, Nature 422, 412 (2003).
[QSL] J. Sorensen, M. Pedersen, M. Munch, P. Haikka, J. Jensen, T. Planke, M. Andreasen, M. Gajdacz, K. Molmer, A. Lieberoth, and J. Sherson, Exploring the Quantum Speed Limit with Computer Games, Nature 532, 210 (2016).
[prlmintert] T. Nöbauer, A. Angerer, B. Bartels, M. Trupke, S. Rotter, J. Schmiedmayer, F. Mintert, and J. Majer, Smooth Optimal Quantum Control for Robust Solid-State Spin Magnetometry, Phys. Rev. Lett. 115, 190801 (2015).
[Pontryagin1964] L. S. Pontryagin, V. G. Bol'tanskii, R. S. Gamkrelidze, and E. F. Mischenko, The Mathematical Theory of Optimal Processes (Pergamon Press, New York, 1964).
[Khaneja2005] N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms, J. Magn. Reson. 172, 296 (2005).
[Khaneja2001] N. Khaneja, R. Brockett, and S. J. Glaser, Time optimal control in spin systems, Phys. Rev. A 63, 032308 (2001).
[Lapert2010] M. Lapert, Y. Zhang, M. Braun, S. J. Glaser, and D. Sugny, Singular Extremals for the Time-Optimal Control of Dissipative Spin 1/2 Particles, Phys. Rev. Lett. 104, 083001 (2010).
[shortcut] X. Chen, I. Lizuain, A. Ruschhaupt, D. Guéry-Odelin, and J. G. Muga, Shortcut to Adiabatic Passage in Two- and Three-Level Atoms, Phys. Rev. Lett. 105, 123003 (2010).
[hoult] M. S. Silver, R. I. Joseph, C.-N. Chen, V. J. Sank, and D. I. Hoult, Selective population inversion in NMR, Nature 310, 681 (1984).
[shortcutrobust] A. Ruschhaupt, X. Chen, D. Alonso, and J. G. Muga, Optimally robust shortcuts to population inversion in two-level quantum systems, New J. Phys. 14, 093040 (2012).
[shortcutrobust2] D. Daems, A. Ruschhaupt, D. Sugny, and S. Guérin, Robust quantum control by a single-shot shaped pulse, Phys. Rev. Lett. 111, 050404 (2013).
[berryphase] A. Bohm, H. Mostafazadeh, Q. Koizumi, and J. Zwanziger, The geometric phase in quantum systems (Springer, Berlin, 2003).
[nakahara] M. Nakahara, Geometry, topology and physics (Institute of Physics Publishing, Bristol and Philadelphia, 1990).
[berryqc1] J. A. Jones, V. Vedral, A. Ekert, and G. Castagnoli, Geometric quantum computation using Nuclear Magnetic Resonance, Nature 403, 869 (2000).
[berryqc2] A. A. Abdumalikov, J. M. Fink, K. Juliusson, M. Pechal, S. Berger, A. Wallraff, and S. Filipp, Experimental realization of non-Abelian non-adiabatic geometric gates, Nature 496, 482 (2013).
[ashbaugh] M. S. Ashbaugh, C. C. Chicone, and R. H. Cushman, The twisting tennis racket, J. Dyn. Diff. Eq. 3, 67 (1991).
[vandamme2015] L. Van Damme, P. Mardesic, and D. Sugny, The tennis racket effect in a three-dimensional rigid body, Physica D 338, 17 (2017).
[arnold] V. I. Arnold, Mathematical methods of classical mechanics (Springer-Verlag, New York, 1989).
[goldstein] H. Goldstein, Classical mechanics (Addison-Wesley, Reading, MA, 1950).
[cushman] R. H. Cushman and L. M. Bates, Global aspects of classical integrable systems (Birkhäuser, Berlin, 1997).
[Levitt08] M. H. Levitt, Spin Dynamics: Basics of Nuclear Magnetic Resonance (Wiley, New York, 2008).
[montgomery] R. Montgomery, How much does the rigid body rotate? A Berry's phase from the 18th century, Am. J. Phys. 59, 394 (1991).
[eberly] L. Allen and J. H. Eberly, Optical resonance and two-level atoms (Wiley, New York, 1975).
[chuang] M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information (Cambridge University Press, Cambridge, 2000).
[garon] A. Garon, R. Zeier, and S. J. Glaser, Visualizing Operators of Coupled Spin Systems, Phys. Rev. A 91, 042122 (2015).
[sjoqvist] E. Sjöqvist, A new phase in quantum computation, Physics 1, 35 (2008).
[geometricgateNA1] W. Xiang-Bin and M. Keiji, Nonadiabatic conditional geometric phase shift with NMR, Phys. Rev. Lett. 87, 097901 (2001).
[geometricgateNA2] S.-L. Zhu and Z. D. Wang, Implementation of universal quantum gates based on nonadiabatic geometric phases, Phys. Rev. Lett. 89, 097902 (2002).
[BR1] A. Tannus and M. Garwood, Adiabatic pulses, NMR Biomed. 10, 423 (1997).
[BR2] M. Garwood and Y. Ke, Symmetric pulses to induce arbitrary flip angles with compensation for RF inhomogeneity and resonance offsets, J. Magn. Reson. 94, 511 (1991).
[khaneja] N. Khaneja, B. Heitmann, A. Spörl, H. Yuan, T. Schulte-Herbrüggen, and S. J. Glaser, Shortest paths for efficient control of indirectly coupled qubits, Phys. Rev. A 75, 012322 (2007).
[VZGS14] L. Van Damme, R. Zeier, S. J. Glaser, and D. Sugny, Application of the Pontryagin maximum principle to the time-optimal control in a chain of three spins with unequal couplings, Phys. Rev. A 90, 013409 (2014).
[perelomov] A. M. Perelomov, Motion of four-dimensional rigid body around a fixed point: An elementary approach, J. Phys. A 38, 801 (2005).

ACKNOWLEDGMENTS. S. J. Glaser acknowledges support from the DFG (Gl 203/7-2). D. Sugny and S. J. Glaser acknowledge support from the ANR-DFG research program Explosys (ANR-14-CE35-0013-01; Gl203/9-1). This work has been done with the support of the Technische Universität München – Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement 291763. Experiments were performed at the Bavarian NMR center at TU München. P. Mardešić acknowledges support from LAISLA (project funded by FONCICYT) and the PREI project (funded by UNAM DGAPA).

Author contributions: All authors contributed to the design and interpretation of the presented work. Numerical computations were done by L. V. D.; the construction of the experiment and the acquisition of the data were performed by D. L. and S. J. G.

Additional information: Supplementary information accompanies this paper. Competing financial interests: The authors declare no competing financial interests.

Author information: Correspondence and requests for materials should be addressed to D. S. (email: [email protected]) and S. J. G. (email: [email protected]).
{ "authors": [ "L. Van Damme", "D. Leiner", "P. Mardesic", "S. J. Glaser", "D. Sugny" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170626082147", "title": "Linking the rotation of a rigid body to the Schrödinger equation: The quantum tennis racket effect and beyond" }
Top-Down Transformation Choice

Torsten Hothorn
Institut für Epidemiologie, Biostatistik und Prävention, Universität Zürich, Hirschengraben 84, CH-8001 Zürich, Switzerland

Simple models are preferred over complex models, but over-simplistic models could lead to erroneous interpretations. The classical approach is to start with a simple model, whose shortcomings are assessed in residual-based model diagnostics. Eventually, one increases the complexity of this initial overly simple model and obtains a better-fitting model. I illustrate how transformation analysis can be used as an alternative approach to model choice. Instead of adding complexity to simple models, step-wise complexity reduction is used to help identify simpler and better-interpretable models. As an example, body mass index distributions in Switzerland are modelled by means of transformation models to understand the impact of sex, age, smoking and other lifestyle factors on a person's body mass index. In this process, I searched for a compromise between model fit and model interpretability. Special emphasis is given to the understanding of the connections between transformation models of increasing complexity. The models used in this analysis ranged from evergreens, such as the normal linear regression model with constant variance, to novel models with extremely flexible conditional distribution functions, such as transformation trees and transformation forests.

Keywords: transformation analysis, conditional transformation model, conditional distribution function, conditional quantile function, distribution regression, stratified linear transformation model, body mass index

§ INTRODUCTION

Let's face it. The work of statisticians is considered boring in the public eye. Nobody publishes page turners on the thrilling aspects of data analysis, yet the quest for a good model can be as exciting as detective work.
Like Harry, who follows the traces left by the murderer at the crime scene to form a theory about the culprit, the experienced data analyst follows the traces left by the data-generating process in the residuals of an over-simplistic model. Unlike Harry, who of course always succeeds in arresting the murderer, the statistician can never be sure whether the correct or even an approximately useful model was found. In the quests for a suspect or for a good model, parsimonious explanations are preferred by Occam's razor. Therefore, in residual-based model diagnostics, the data analyst starts with a very simple model, whose complexity is increased by step-wise refinement until all signs of lack of fit disappear from the residuals. I refer to such a procedure as "bottom-up model choice" because one moves from simple to more complex models. In this tutorial, I consider moving in the opposite direction, from complex to simple models, for distributional regression. This "top-down approach" to model choice begins with the most complex model that one can come up with that explains both signal and noise without overfitting the data. In a regression setup, such a model would describe as accurately as possible the conditional distribution of the response given the explanatory variables. Once such a model is established as a benchmark for comparison with simpler models, one can start to reduce model complexity step-wise. In the crime novel scenario, the top-down data analyst takes the role of an eyewitness at the scene. What one "sees" in this process is, of course, still a portrayal and not the real thing. There is no way to "see" the correct model. In top-down model choice, however, the trajectories through model space will be guided by assessments of vital models. In bottom-up model choice, by contrast, the horizon is limited by the amount of information that one can find in traces in deceased models. In this tutorial, I focus on top-down model choice in continuous regression problems. Conceptually, a regression model is a family of conditional distributions for some response Y given a specific configuration of explanatory variables X = x. The model describes both signal and noise, that is, the variability explained by the explanatory variables and the unexplained variability. Unfortunately, this point of view only applies to relatively simple models that assume a certain parametric distribution, whose parameters partially depend on the explanatory variables. So-called "non-parametric regression models" <cit.> often restrict their attention to the signal E(Y | X = x) = m(x), with a non-linear conditional mean function m, while treating the noise, that is, all higher moments of the conditional distribution, as a nuisance or essentially ignoring it. Such procedures, for example random forests <cit.>, are extremely powerful when estimating complex conditional mean functions.
However, one cannot infer the entire conditional distribution using random forests or similar methods. This renders top-down model choice impossible, because reductions in complexity require switching between different model classes or even crossing the borders between the parametric and non-parametric empires. The comparison of two models from different classes is difficult, and thus it is difficult to decide whether the simpler model is more appropriate than the more complex one. The implementation of top-down model choice is much simpler when the most complex and the most simple model are members of the same family. Conditional transformation models from the transformation family of distributions <cit.> include many important established off-the-shelf regression models. In addition, tailored models can be created, in vivo with our brains and in silico using open-source software, which allow smooth transitions between models of different complexity. In a nutshell, the class of conditional transformation models P(Y ≤ y | X = x) = F_Y(y | x) = F(h(y | x)) assumes that the conditional distribution function F_Y of Y given X = x can be written as the composition of an a priori specified continuous cumulative distribution function F and some conditional transformation function h(y | x). The latter function monotonically increases in its first argument for each x. It is important to note that the entire conditional distribution, and not just its mean, is modelled by h. In this sense, and unlike common regression models, there is no decomposition into signal (the conditional mean) and noise (the remaining higher moments) in this class of transformation models. Changing model complexity means changing the complexity of the conditional transformation function h, and I thus refer to top-down model choice in conditional transformation models as top-down transformation choice. Model complexity in the class of conditional transformation models is linked to smooth functions h(y | x) of varying complexity with respect to y. These conditional transformation functions may vary with the explanatory variables x in arbitrary ways, including interactions and non-linearities. In this paper, I consider model choice as an art rather than an exact science. No formal algorithm leading to an "optimal" model will be presented. Instead, I argue that the possibility of modelling a cascade of decreasingly complex conditional distribution functions in the same model class gives us new possibilities to investigate goodness of fit, or lack thereof. A fair amount of subjectivity will remain in this process, as is always the case in classical model diagnostics. I shall be less concerned with the technical subtleness of parameter estimation in the models discussed here and refer the reader to more formal results published elsewhere. Instead, I illustrate practical aspects of top-down transformation choice by a tour-de-force through transformation models describing the impact of lifestyle parameters, such as smoking or physical activity, on the body mass index (BMI) distribution in the Swiss population. I will proceed by introducing the Swiss Health Survey and the variables dealt with in Section <ref>. In a very simple setup, I first illustrate a bottom-up route, starting with a normal linear model and ending with a more complex non-normal transformation model, for describing the BMI distribution of females and males at various levels of smoking (Section <ref>). I then try to reduce the complexity again until an interpretable model that fits the data roughly as well as the most complex model can be found. In addition
to a consideration of sex and smoking, I consider age and some lifestyle variables in a more realistic setup of top-down transformation choice in Section <ref>.

§ BODY MASS INDEX IN THE SWISS HEALTH SURVEY
The Swiss Health Survey (SHS) is a population-based cross-sectional survey. It has been conducted every five years since 1992 by the Swiss Federal Statistical Office <cit.>. For this tutorial, I restricted the sample to 16,427 individuals aged between 18 and 74 years from the 2012 survey. Study samples were obtained by stratified random sampling using a database with all private household landline telephone numbers. Data were collected by telephone interviews and self-administered questionnaires. Height and weight were self-reported in telephone interviews. Observations with extreme values of height and weight were excluded (highest and lowest percentile by sex). Smoking status was categorised into never, former, light (1-9 cigarettes per day), moderate (10-19) and heavy smokers (>19). Never smokers stated that they did not currently smoke and had never regularly smoked longer than six months; former smokers had quit smoking but had smoked for more than six months during their life course. One cigarillo or pipe counted as two cigarettes, and one cigar counted as four cigarettes. The following lifestyle variables were included and assessed by telephone interview and self-administered questionnaire: fruit and vegetable consumption, physical activity, alcohol intake, level of education, nationality and place of residence. Fruit and vegetable consumption was combined in one binary variable that comprised the information on whether both fruits and vegetables were consumed daily or not. The variable describing physical activity was defined as the number of days per week a subject started to sweat during leisure time physical activity and was categorised as >2 days, 1-2 days and none. Alcohol intake was included using the continuous variable gram per day. Education was included as the highest degree obtained and was categorised into mandatory (International Standard Classification of Education, ISCED 1-2), secondary (ISCED 3-4), and tertiary (ISCED 5-8) <cit.>. Nationality had the two categories Swiss and foreign. Language reflected cultural and regional differences within Switzerland, and the three categories German/Romansh, French and Italian were taken into account. Sampling weights of this representative survey were considered for the estimation of all models reported in this tutorial. More detailed information about this study and an analysis using simple transformation models is given in <cit.>.

§ SEX- AND SMOKING-SPECIFIC BMI DISTRIBUTIONS
I start with the very simple situation where the conditional distribution of BMI depends on sex and smoking only. Smoking was assessed on five different levels (never smoked, former smokers, light smokers, medium smokers and heavy smokers).
Therefore, I am interested in the conditional distribution of BMI in these 10 groups of participants. Figure <ref> presents the empirical cumulative distribution functions, the non-parametric maximum-likelihood estimators for the underlying continuous distributions, for each of the 10 combinations of sex and smoking. At the same time, the plot also represents the uncompressed raw data. With a high-enough resolution, one could recover the original BMI values and the corresponding sampling weights from such an image. Consequently, goodness of fit can be assessed by overlaying the empirical cumulative distribution functions with their model-based counterparts in this simple setup. I will try to find a suitable parametric model this way. In addition to this rather informal approach, I will study the increase of the log-likelihoods as model complexity is increased. In the classical bottom-up approach, one would start with a very simple model assuming conditional normal distributions. The next section discusses possible choices in this model class.

§.§ Normal Models
The normal cell-means model with constant variance BMI | sex, smoking ∼ N(μ(sex:smoking), σ^2) assumes normal distributions with a common variance for all conditional BMI distributions. Means are allowed to vary between the groups defined by sex and smoking. The notation μ(sex:smoking) indicates that the conditional mean is specific to each combination of sex and smoking; in this cell-means model, there are a total of 10 parameters μ(sex:smoking). With a residual standard error of σ̂ = 3.73, a log-likelihood of -47010.46 was obtained, and the estimated cell means, with 95% confidence intervals, are shown in Table <ref>. How well does this model fit the data? I want to answer this question by graphically comparing the conditional distribution functions obtained from this model to the corresponding empirical conditional distributions, and thus the raw data. The model-based conditional cumulative distribution functions P(BMI ≤ y | sex, smoking) = F_Y(y | sex, smoking) = Φ((y - μ(sex:smoking))/σ) overlay the empirical cumulative distribution functions in Figure <ref>. While not being completely out of line, the considerable differences between the empirical and model-based distribution functions certainly leave room for improvement. An obvious increase in complexity allows for group-specific variances in the model BMI | sex, smoking ∼ N(μ(sex:smoking), σ(sex:smoking)^2).
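Both normal models are straightforward to evaluate directly. The following minimal Python sketch (an illustration of my own, not the implementation used for the analysis; the group means and standard deviations shown are made-up placeholders, except for the pooled residual standard error quoted above) computes the model-based conditional distribution functions that are overlaid on the empirical ones:

```python
import numpy as np
from scipy.stats import norm

# Made-up example values; the actual estimates are reported in the text.
mu = {("female", "never"): 23.5, ("male", "never"): 25.0}
sigma_pooled = 3.73  # residual SD of the constant-variance model (from the text)
sigma_group = {("female", "never"): 4.1, ("male", "never"): 3.5}  # placeholders

def cdf_constant_variance(y, sex, smoking):
    """Cell-means model with common variance: Phi((y - mu)/sigma)."""
    return norm.cdf((y - mu[(sex, smoking)]) / sigma_pooled)

def cdf_group_variance(y, sex, smoking):
    """Cell-means model with group-specific variances sigma(sex:smoking)."""
    return norm.cdf((y - mu[(sex, smoking)]) / sigma_group[(sex, smoking)])

y = np.linspace(15, 40, 6)  # BMI grid in kg/m^2
print(cdf_constant_variance(y, "female", "never"))
print(cdf_group_variance(y, "female", "never"))
```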
The log-likelihood in this 20-parameter model increased to -44801.19, and the corresponding conditional distribution functions in Figure <ref> were closer to the empirical cumulative distribution functions. For males, the model-based normal distributions were very close to the empirical conditional BMI distributions. For females, however, there still was a considerable discrepancy between model and data, especially in the lower tails. The BMI distributions of females deviated from normality much more than the BMI distributions of males (note that I am not saying that males are normal and females are not!). It is clear that one has to move to a non-normal error model, at least for females, and the transformation models discussed below are a convenient way to do so. The normal models are a special case of transformation models, and thus the latter class is a very natural extension of the former. To see the connection, consider the conditional distribution function P(BMI ≤ y | sex, smoking) = Φ((y - μ(sex:smoking))/σ(sex:smoking)) = Φ(ϑ(sex:smoking) y - β(sex:smoking)) = Φ(h(y | sex:smoking)), where h is a linear function of y with parameters ϑ(sex:smoking) = σ(sex:smoking)^-1 and β(sex:smoking) = μ(sex:smoking)/σ(sex:smoking). In more complex models, we will use the parameter ϑ (or the parameter vector ϑ for basis-transformed response values a(y)) for modelling transformations h of the response. Shift parameters describing effects of explanatory variables only, with no response-varying effects, will be denoted by β later on. The above re-parameterisation shows, quite unsurprisingly, that a normal model features a linear transformation function h. Consequently, non-normal conditional distributions can be obtained by allowing a non-linear transformation function for each combination of sex and smoking in the transformation models presented in the next section.

§.§ Non-normal Transformation Models
The core concept of a transformation model is a potentially non-linear, monotonically increasing transformation function h(y | sex:smoking), here for each combination of sex and smoking. For computational convenience, I parameterised the transformation functions h in terms of Bernstein polynomials <cit.>. For each of the 10 groups, I modelled the transformation function h by a Bernstein polynomial a_Bs,5(y)^⊤ ϑ of order 5, where a_Bs,5(y) ∈ ℝ^6 are the corresponding basis functions evaluated at BMI value y. A monotonically increasing Bernstein polynomial of order 5 features 6 monotonically increasing parameters ϑ. Maximum-likelihood estimation was performed <cit.> using the mlt <cit.> add-on package to the R system for statistical computing <cit.>. With the corresponding 60 parameters in total, the maximum log-likelihood of the model P(BMI ≤ y | sex, smoking) = Φ(a_Bs,5(y)^⊤ ϑ(sex:smoking)) was -43564.30; the notation ϑ(sex:smoking) indicates that the parameters were estimated for each combination of sex and smoking.
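For readers who want to see the Bernstein parameterisation in action, here is a small Python sketch. It is an illustration under stated assumptions, not the mlt implementation: the support boundaries lo and hi and the parameter values are made-up, and the actual estimation, scaling and constraint handling in mlt differ in detail. The key point is that an increasing coefficient vector ϑ yields a monotone transformation h and thus a valid distribution function Φ(h(y)):

```python
import numpy as np
from scipy.special import comb
from scipy.stats import norm

def bernstein_basis(y, order, lo, hi):
    """Bernstein basis of the given order on [lo, hi]; shape (len(y), order+1)."""
    t = (np.asarray(y, dtype=float) - lo) / (hi - lo)
    k = np.arange(order + 1)
    return comb(order, k) * t[:, None]**k * (1.0 - t[:, None])**(order - k)

def transformation(y, theta, lo=12.0, hi=45.0):
    """h(y) = a_Bs,M(y)^T theta; monotone whenever theta is increasing."""
    return bernstein_basis(y, len(theta) - 1, lo, hi) @ theta

# Made-up increasing parameters for one sex/smoking group (6 values, order 5).
theta = np.cumsum([-2.5, 0.8, 0.6, 0.9, 0.7, 1.0])

y = np.linspace(15, 40, 100)
F = norm.cdf(transformation(y, theta))  # model-based conditional CDF
assert np.all(np.diff(F) >= 0)          # monotonicity check
```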
One can hardly differentiate the resulting model-based conditional distribution functions from the empirical cumulative BMI distribution functions in Figure <ref>. Because a separate transformation function was estimated for each combination of sex and smoking, this model can be referred to as a transformation model stratified by sex and smoking. Based on this model, one can understand non-normality as a deviation of the transformation function from a linear function. Figure <ref> shows the sex- and smoking-specific transformation functions of model (<ref>) along with the linear transformation functions obtained from the normal cell-means model (<ref>) with heterogeneous variances. In the centre of the distributions the two curves overlap, but the tails are not described well by the normal distribution. The differences between the two curves are more pronounced for females, corresponding to the larger deviations from normality observed earlier. One nice feature of model (<ref>) is the possibility to easily derive characterisations of the distribution other than the distribution function. Density, quantile, hazard, cumulative hazard or other characterising functions can be derived from (<ref>), and Figure <ref> depicts the densities for males and females at the various levels of smoking. The right-skewness of the distribution, and thus the deviation from normality, was more pronounced for females. The BMI distributions put more weight on smaller BMI values for females than for males. Except for heavy smokers, the effects of smoking seemed to be rather small. The model fit of this stratified transformation model is now satisfactory, as it essentially smoothly interpolates the empirical distribution functions, and thus the data, in Figure <ref>. This most complex model describes the data well, but unfortunately, it is difficult to learn anything from it. That is, one wants to understand the differences between the conditional distributions in terms of simple parameters and not complex non-linear functions. A simpler model is needed. A top-down approach to transformation choice might help to identify a model with simpler and interpretable transformation functions, but any necessary compromises to the model fit should not be too demanding. Because the BMI distributions differed most between males and females, I first simplify the model by conditioning on smoking and stratifying by sex: I introduce sex-specific transformations h(y | sex) and sex-specific smoking effects β, the latter being constant for all arguments y of the conditional distribution function, in the model P(BMI ≤ y | sex, smoking) = Φ(h(y | sex) - β(sex:smoking)). This model features two transformation functions, h(y | male) and h(y | female). For never smokers (the reference category), these two transformation functions describe the conditional BMI distributions, P(BMI ≤ y | sex, never smoked) = Φ(h(y | sex)). For the remaining smoking categories, one sex-specific parameter β(sex:smoking) describes how the conditional BMI distribution of a smoker differs from the conditional BMI distribution of a person who never smoked by a simple shift term β. Because of the "linear" shift term, this model could be referred to as a stratified linear transformation model.
This is, as often in statistics, a misnomer, because the transformation h(y | sex) of the response, that is, of the BMI values y, is non-linear. The log-likelihood for this model with 20 parameters was found to be -43602.03, a moderate reduction compared to the log-likelihood of the most complex transformation model (-43564.30). Figure <ref> shows only minor differences between the empirical and model-based conditional distribution functions. Thus, it seems that a more parsimonious model was found without paying too high a price in terms of log-likelihood reduction. The conceptual problem with this model, however, is the lack of interpretability of the shift term β(sex:smoking). In contrast to the means μ in the normal models (<ref>) and (<ref>), there is no direct interpretation of β in terms of moments of the conditional distribution described by this model. This issue can be addressed by changing the cumulative distribution function F = Φ of the standard normal to the cumulative distribution function F = F_SL of the standard logistic in the model P(BMI ≤ y | sex, smoking) = F_SL(h(y | sex) - β(sex:smoking)). When the cut-off y is fixed, this is a logistic regression model for the binary outcome BMI ≤ y vs. BMI > y. The transformation function h(y | sex) is now a sex-specific intercept, and the β(sex:smoking) are the sex-specific log-odds ratios for the event BMI ≤ y compared to the baseline category (never smokers). Because this shift term does not depend on y, the model assumes proportionality of the smoking odds with respect to the cut-off y. Stratification by sex allows non-proportional smoking odds with respect to sex. In fact, the sex-specific conditional distributions of males and females can still differ in very general ways, because two separate Bernstein polynomials h(y | sex) = a_Bs,5(y)^⊤ ϑ(sex) describe the conditional distributions for males and females. The model can be seen as a stratified proportional odds model for continuous responses, or as a continuous form of logistic regression analysis, jointly performed for all possible cut-off points y under the assumption of constant parameters β(sex:smoking). Similar models, however without stratification, were studied by <cit.> using parametric intercept functions h and recently by <cit.> treating the intercept function as a nuisance parameter in non-parametric maximum-likelihood estimation. <cit.> provide a comparison of parameter estimation in the presence of interval-censored body mass index observations. The parameterisation h(y | sex) - β(sex:smoking) with a negative shift term seems unconventional from a logistic regression point of view, but it simplifies interpretation. With model (<ref>), the log-odds ratio of a smoking category relative to never smokers equals β(sex:smoking), and thus positive shift parameters β indicate a shift of the BMI distribution towards higher BMI values. Corresponding odds ratios exp(β) larger than one mean that BMI distributions are shifted to the right, compared to the BMI distribution in the reference category. Unfortunately, there was some further reduction in the log-likelihood (-43639.74): interpretability doesn't come for free. However, the model-based and empirical conditional BMI distribution functions look very much the same as presented in Figure <ref> (additional plot not shown). The sex-specific, BMI-independent odds ratios of smoking, compared to never smoking, are given in Table <ref>. Former smokers had, on average, a larger BMI compared to never smokers, and the effect was stronger for males. A similar effect was observed for male heavy smokers.
Female light smokers showed a BMI distribution shifted to the left, compared with female never smokers. Maintaining interpretability, one could go further and assume equal smoking effects for males and females in the model P(BMI ≤ y | sex, smoking) = F_SL(h(y | sex) - β(smoking)). The log-likelihood was again reduced (-43669.50) for this model with 16 parameters. In addition, the odds ratios presented in Table <ref> indicate severe differences in the smoking effects between males and females; therefore, I refrain from looking at this or even simpler models and stop the top-down transformation choice here. Of course, this very simple example only worked because it was possible to compare models and raw data directly on the scale of the conditional BMI distribution functions for two categorical explanatory variables, sex and smoking. In the second part, I will consider additional, and also numeric, explanatory variables in a more realistic setup.

§ CONDITIONAL BMI DISTRIBUTIONS
My aim is to estimate the conditional BMI distribution given sex, smoking, age and the lifestyle variables alcohol intake, education, physical activity, fruit and vegetable consumption, residence and nationality as explanatory variables x. In the conditional transformation model P(BMI ≤ y | sex, smoking, age, x) = F_SL(h(y | sex, smoking, age, x)), the conditional transformation function h depends on these variables in a yet unspecified way. Ideally, top-down transformation choice starts without too many headaches: an algorithm for fitting this model should handle the potentially many explanatory variables of mixed type and allow relatively complex non-linear transformation functions. Such a model can be written as P(BMI ≤ y | sex, smoking, age, x) = F_SL(a_Bs,5(y)^⊤ ϑ(sex, smoking, age, x)), assuming that each conditional distribution is parameterised in terms of a Bernstein polynomial of order 5. The parameters ϑ of this polynomial, however, depend on the explanatory variables in a potentially complex way, featuring interactions and non-linearities. Tree and forest algorithms <cit.> allow such "conditional parameter functions" ϑ(sex, smoking, age, x), and thus the corresponding conditional BMI distributions, to be estimated in a black-box manner, without the necessity to specify any structure of ϑ(sex, smoking, age, x). I will first use trees and forests to understand the complexity of the impact of the explanatory variables on the BMI distribution. Later on, I will apply a top-down approach to transformation choice to obtain simpler transformation models that allow more straightforward interpretation.

§.§ Transformation Trees and Forests
A transformation tree <cit.> starts with an unconditional transformation model P(BMI ≤ y) = F_SL(a_Bs,5(y)^⊤ ϑ) and a corresponding maximum-likelihood estimator ϑ̂. The algorithm proceeds by assessing correlations between the score contributions evaluated at ϑ̂ and the explanatory variables sex, smoking, age and x. A binary split is implemented in the most discriminating cut-off point of the variable showing the highest correlation to any score. The procedure is repeated until a certain stop criterion applies. The result is a partition of the data.
The algorithm is sensitive to distributional changes: the conditional BMI distributions in the subgroups of this partition may vary with respect to the mean BMI and also with respect to higher BMI moments. In each subgroup, the unconditional model (<ref>) was used to estimate ϑ(sex, smoking, age, x) for this subgroup. Because each observation in this subgroup is then associated with a dedicated parameter vector ϑ(sex, smoking, age, x), the log-likelihood for the tree model could be evaluated as the sum of the log-likelihoods in the subgroups. The log-likelihood of the tree presented in Figure <ref> is -43079.42. The first split is in sex, so in fact two sex-specific models are given here. Three age groups (≤ 34, (34, 51], > 51) for females and three age groups (≤ 25, (25, 36], > 36) for males are distinguished. Education contributed to understanding the BMI distribution of females and males. Location, scale and shape of the conditional BMI distributions varied considerably. The variance increased with age, and higher-educated people tended to have lower BMI values. These are interesting insights, but the model is of course very rough.

[Figure: Transformation tree. The conditional BMI distributions (depicted in terms of their densities) are given in each subgroup corresponding to the terminal nodes of the tree. Variables: education (edu) at levels mandatory (I), secondary (II) and tertiary (III); alcohol intake (agramtag).]

A transformation forest <cit.> allows less rough conditional parameter functions ϑ(sex, smoking, age, x) to be estimated. There are no longer any restrictions regarding the conditional parameter functions. In this sense, a transformation forest is the "most complex model one can think of" mentioned in the introduction. The random forest class of models is considered to be very accurate, insensitive to hyperparameter tuning and without a tendency to overfit. In the following, I shall use this method to obtain a benchmark for better-interpretable transformation models following the top-down model selection approach. The generic random forest algorithm essentially relies on multiple transformation trees fitted to subsamples of the data, with a random selection of variables to be considered for splitting in each node. Unlike the original random forest <cit.>, a transformation forest can be understood as a procedure assigning a parametric model to each observation. For subject i, the forest conditional distribution function is P(BMI ≤ y | sex_i, smoking_i, age_i, x_i) = F_SL(a_Bs,5(y)^⊤ ϑ(sex_i, smoking_i, age_i, x_i)). In this sense, a transformation forest "predicts" a fully parametric model for each subject, albeit with a very flexible conditional parameter. The conditional parameter ϑ(sex_i, smoking_i, age_i, x_i) was obtained from a locally adaptive maximum-likelihood estimator based on so-called nearest-neighbour weights <cit.>. A considerable improvement in the transformation forest log-likelihood (-42520.18) was observed. In fact, this is the largest log-likelihood I was able to achieve; thus, this transformation forest is the best-fitting model for the BMI data. On the downside, this black-box model makes it very difficult to understand the impact of the explanatory variables on the conditional BMI distribution. The likelihood-based permutation variable importance (Figure <ref>) indicated that only sex, age, education, physical activity and smoking have an impact on BMI, where again sex seems to be the most important variable. Age was a more important factor than education or physical activity, and thus the only numeric variable one needs to consider.
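The idea behind likelihood-based permutation importance is conceptually simple. The following schematic Python sketch is my own illustration, not the implementation used for the analysis (which differs in detail, for instance in how observations are held out); it assumes a user-supplied, hypothetical function loglik(X, y) that returns the summed model log-likelihood:

```python
import numpy as np

def permutation_importance(loglik, X, y, seed=1):
    """Drop in total log-likelihood when one variable is permuted.

    loglik : hypothetical stand-in for the fitted model's log-likelihood,
             called as loglik(X, y) and returning a float.
    X      : dict mapping variable names to numpy arrays of equal length.
    Permuting a variable breaks its association with the response y, so a
    large drop signals an important variable.
    """
    rng = np.random.default_rng(seed)
    base = loglik(X, y)
    importance = {}
    for name in X:
        Xp = {k: v.copy() for k, v in X.items()}
        Xp[name] = rng.permutation(Xp[name])
        importance[name] = base - loglik(Xp, y)
    return importance
```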
The association between sex, smoking, age and BMI as described by the transformation forest is given in terms of a partial dependency plot of conditional deciles in Figure <ref>. In general, the median BMI increases with age, as does the BMI variance. For males, there seemed to be a level effect whose onset depends on the smoking category. Females tended to higher BMI values, and the variance was larger compared to males. There seemed to be a bump in BMI values for females, roughly around 30 years. This corresponds to mothers giving birth to their first child around this age. It is important to note that the right-skewness of the conditional BMI distributions in Figure <ref> renders conditional normal distributions inappropriate, even under variance heterogeneity. This complex model would be sufficient if one were only interested in the estimation of conditional BMI distributions for persons with specific configurations of sex, smoking, age and the remaining explanatory variables. The variable importances can be used to rank variables according to their impact on the conditional BMI distributions but cannot replace effect measures, let alone an assessment of their variability. Communication with subject-matter scientists and publication of results in subject-matter journals requires simplification of these models. Top-down transformation choice can help to find models of appropriate complexity, as will be seen in the next section.

§.§ Conditional Transformation Model
The analysis using transformation trees and especially transformation forests revealed strong effects of sex and age; the latter variable was not considered in our analysis presented in Section <ref>. A more structured model roughly as powerful as the transformation forest must therefore allow the conditional distribution of BMI to change with both sex and age in very general ways. The remaining variables were less important, and one can hopefully cut some corners here by assuming simple linear main effects for these variables. I start the top-down search for a simpler model with a conditional transformation model of the form P(BMI ≤ y | sex, smoking, age, x) = F_SL(h(y, age | sex) - β(sex:smoking) - x^⊤β). The transformation function h(y, age | sex) implements a sex-specific bivariate smooth surface function of BMI and age, which is of course monotonic in its first argument y. The surface function for males, for instance, explains age-induced changes in the conditional distribution of male BMI. In contrast to transformation forests, the assumption is made that the function is smooth in both y and age, and not only in y. I parameterise this function as a tensor product of two Bernstein polynomials of order 5, one for BMI and one for age, with a sex-specific 36-dimensional parameter vector ϑ(sex); in other words, as (a_Bs,5(y) ⊗ a_Bs,5(age))^⊤ ϑ(sex). Except for smoking, the remaining variables enter only via the linear shift term x^⊤β of main effects. In light of its fifth rank in the permutation variable importance (Figure <ref>), it may seem a bit inconsistent to treat smoking differently than the other variables. However, the stratified analysis in Section <ref> suggested the need for sex-specific smoking effects, and I thus include the interaction term β(sex:smoking) also in this model.
The F_SL function around the transformation function ensures interpretability of all regression coefficients β on the log-odds scale. With 89 parameters, the log-likelihood -42778.14 of model (<ref>) was only slightly smaller than the log-likelihood of the transformation forest (-42520.18). In a certain sense, this conditional transformation model can be seen as an approximation of the black-box transformation forest. The effects of sex, smoking and age, with all remaining variables held constant, are again best visualised using the conditional decile functions (Figure <ref>). The decile functions are now smooth in age, owing to the parameterisation of the age effect in terms of Bernstein polynomials. For males, the BMI increased with age; the BMI reduction in males older than 65 years was not visible in the decile curves of the transformation forests (Figure <ref>). The slope was largest for young men up to 25 years, followed by a linear increase until the age of 65. The male BMI distribution was right-skewed, with only a small increase in the variance towards older people. For females, a bump in the BMI distribution was again identified around the age of 30, corresponding to pregnancies and breast-feeding times. The effect seemed more pronounced in higher deciles. Right-skewness and a variance increase towards older women can be inferred from this figure. The main advantage of this complexity reduction is the interpretability of the regression coefficients β(sex:smoking) and β in terms of BMI-independent log-odds ratios. The sex-specific smoking effects and the effects of the remaining variables as odds ratios are given in the left column of Table <ref>. Further simplification can be achieved by replacing the bivariate surface function of BMI and age by a sex-specific, BMI-varying linear effect of age in the distribution regression model presented in the next section.

§.§ Distribution Regression
The term "distribution regression" <cit.> is commonly used to describe response-varying coefficients. In survival analysis, the term "time-varying coefficients" is more typical. Here, a BMI-varying coefficient of age is a means of simplifying the conditional transformation model (<ref>). In the simpler model, I assume a smoothly varying but sex-specific coefficient of age, β_age(y | sex). The transformation function h(y | sex) is again the simple transformation of BMI given sex introduced in model (<ref>). The model reads P(BMI ≤ y | sex, smoking, age, x) = F_SL(h(y | sex) - β_age(y | sex) age - β(sex:smoking) - x^⊤β). The log-likelihood, -42888.10, decreased considerably in this model with 41 parameters. The effects of smoking and the remaining variables (except age) are given in the middle column of Table <ref> as odds ratios. When the dependency of BMI deciles on sex, smoking and age was depicted (Figure <ref>), the linear structure regarding age was obvious. The age-varying slopes and the pregnancy bump could not be identified by this simpler model. Right-skewness and variance heterogeneity for females remained visible. The variance increase in older males now seemed questionable. For my taste, the replacement of two bivariate functions by two univariate functions does not really help model interpretation, as one would have to plot these two functions in any case. The severe reduction of the log-likelihood indicated that the effect of age is better described in a conditional transformation model of the form (<ref>).
Nevertheless, I will go one step further and connect the stratified linear transformation model (<ref>) with a model of the same form featuring age and the lifestyle variables x in addition to sex and smoking.

§.§ Stratified Linear Transformation Model
I extend the stratified linear transformation model (<ref>) with a sex-specific age effect and a linear predictor x^⊤β of the remaining variables, P(BMI ≤ y | sex, smoking, age, x) = F_SL(h(y | sex) - β_age(sex) age - β(sex:smoking) - x^⊤β). The log-likelihood was further reduced to -42900.36. In this model, the sex differences in the age effects were completely gone, as the odds ratios for a one-year increase were 1.03 (1.03-1.04) for males and 1.03 (1.03-1.04) for females. In light of the more complex structure of the age effect identified by the more complex models, one would incorrectly draw the conclusion of equal age effects for males and females based on this oversimplified model. The effects of the remaining parameters are given in the right column of Table <ref>. The three columns presented in Table <ref> refer to the same parameters, estimated by three models differing only with respect to the complexity of the age effect. The effects of smoking, alcohol intake, education, physical activity, fruit and vegetable consumption, residence and nationality were remarkably constant. Alcohol intake had no impact on the BMI in this study, and right shifts in BMI distributions were associated with low fruit and vegetable consumption, moderate and low physical activity, short education, being a foreigner or living in the German-speaking part of Switzerland. These conclusions can be drawn from all three models in the same way. The effects of smoking were less pronounced than the effects obtained in our initial analysis that ignored age and the lifestyle variables (Table <ref>). Light smokers had lower BMIs than never smokers; the remaining effects are questionable.

§ DISCUSSION
The core of top-down transformation choice is a family of decreasingly complex, yet fully comparable, conditional transformation models. Model parameterisation and interpretation in the family of transformation models are always based on the conditional distribution function F_Y(y | sex, smoking, age, x) = P(BMI ≤ y | sex, smoking, age, x). Unlike most classical models featuring explicit parameters for conditional means or conditional variances, transformation models describe conditional distributions explicitly and moments implicitly. What might seem a disadvantage is in fact, as I hope I could convince the readers, a very attractive feature of transformation models for regression analysis. In this tutorial, I exclusively defined and interpreted models for conditional distributions. The corresponding distribution functions were used to compare transformation models with the empirical cumulative distribution function (Figures <ref>, <ref>, <ref> and <ref>). The conditional transformation function h was used to assess deviations from normality in Figure <ref>. Conditional densities f_Y(y | sex, smoking, age, x), the derivatives of F_Y with respect to y, are depicted in Figure <ref>, and those for each terminal node of the transformation tree are shown in Figure <ref>. Densities defined the log-likelihood ∑_{i=1}^{16,427} w_i log(f_Y(y_i | sex_i, smoking_i, age_i, x_i)), based on all 16,427 BMI measurements y_i with sampling weights w_i. Conditional quantile functions Q_Y(p | sex, smoking, age, x) = F_Y^{-1}(p | sex, smoking, age, x) helped to visualise age effects in Figures <ref>, <ref> and <ref>.
Effect measures for sex, smoking and the lifestyle variables in Tables <ref> and <ref> were obtained as ratios of conditional odds functions odds(y | sex, smoking, age, x) = F_Y(y | sex, smoking, age, x) / (1 - F_Y(y | sex, smoking, age, x)). Varying model complexity only affects the flexibility of these functions that characterise conditional distributions, but not the corresponding interpretations. A unique feature of conditional transformation models is the ability to formulate, estimate, compare, evaluate, interpret and understand models seemingly as far apart as a normal linear model with constant variance and a transformation forest in the same theoretical framework. Straightforward answers to some questions that have plagued data analysis for decades, for example "Is it appropriate to assume normal errors?" or "How should the response be transformed prior to analysis?", are easily obtained from conditional transformation models. One practical and interesting question relates to the impact of the order M of the Bernstein polynomial a_Bs,M(y)^⊤ ϑ. The choice M = 1 implements a linear function, and with F = Φ, conditional normal distributions are obtained. For M → ∞, a_Bs,M(y)^⊤ ϑ converges uniformly to the true and unknown transformation function h(y) in a model P(Y ≤ y) = F(h(y)). Because h is a monotonic function, too-erratic behaviour cannot occur, even for very large M, and overfitting is not an issue <cit.>. In the model (<ref>), increasing the order from M = 5 to M = 10 led to a very small increase in the log-likelihood, from -43564.30 to -43547.16. In the extreme case of very large M, the conditional distribution function F(a_Bs,M(y)^⊤ ϑ) closely interpolates the empirical cumulative distribution function. The latter estimator is consistent, as is the transformation model <cit.>. This tutorial did not address any issue regarding model estimation or model inference. Details about maximum-likelihood estimation in conditional transformation models can be found in <cit.>. Locally adaptive maximum-likelihood estimation for transformation trees and transformation forests has been introduced in <cit.>. More elaborate discussions of model parameterisation in conditional transformation models and of connections to other models can be found in <cit.> and <cit.>. Applications of conditional transformation models can be found in <cit.>, <cit.> and <cit.>. An introduction to the mlt add-on package <cit.> for maximum-likelihood estimation in conditional transformation models, including models for ordinal or censored and truncated responses, is available in <cit.>.

§ REPRODUCIBILITY
Data from the Swiss Health Survey 2012 can be obtained from the Swiss Federal Statistics Office (Email: <[email protected]>). Data is available for scientific research projects, and a data protection application form must be submitted. More information can be found at <http://www.bfs.admin.ch/bfs/de/home/statistiken/gesundheit/erhebungen>. The code used for producing the results presented in this paper can be evaluated on a smaller artificial data set sampled from the transformation forest using the trtf package <cit.>.

§ ACKNOWLEDGEMENTS
I thank the students participating in the course "STA660 Advanced R Programming" that I taught in the spring semester of 2017 for producing the code underlying Figure <ref> as part of their homework assignments. Parts of this paper were written during a research sabbatical at Universität Innsbruck, financially supported by the Swiss National Science Foundation (grant number IZSEZ0_177091).
http://arxiv.org/abs/1706.08269v2
{ "authors": [ "Torsten Hothorn" ], "categories": [ "stat.ME", "stat.ML" ], "primary_category": "stat.ME", "published": "20170626080801", "title": "Top-down Transformation Choice" }
Testing baryon-induced core formation in ΛCDM: A comparison of the DC14 and coreNFW dark matter halo models on galaxy rotation curves

F. Allaert1, G. Gentile2, M. Baes1

1 Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281, B-9000 Gent, Belgium ([email protected])
2 Department of Physics and Astrophysics, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium

Recent cosmological hydrodynamical simulations suggest that baryonic processes, and in particular supernova feedback following bursts of star formation, can alter the structure of dark matter haloes and transform primordial cusps into shallower cores. To assess whether this mechanism offers a solution to the long-standing cusp-core controversy, simulated haloes must be compared to real dark matter haloes inferred from galaxy rotation curves. For this purpose, two new dark matter density profiles were recently derived from simulations of galaxies in complementary mass ranges: the DC14 halo (10^10 < M_halo/M_⊙ < 8 × 10^11) and the coreNFW halo (10^7 < M_halo/M_⊙ < 10^9). Both models have individually been found to give good fits to observed rotation curves. For the DC14 model, however, the agreement of the predicted halo properties with cosmological scaling relations was confirmed by one study, but strongly refuted by another. A next important question is whether, despite their different approaches, the two models converge to the same solution in the mass range where both should be appropriate. To investigate this, we tested the DC14 and coreNFW halo models on the rotation curves of a selection of galaxies with halo masses in the range 4 × 10^9 M_⊙ - 7 × 10^10 M_⊙ and compared their predictions. We further applied the DC14 model to a set of rotation curves at higher halo masses, up to 9 × 10^11 M_⊙, to verify the agreement with the cosmological scaling relations. Both models are generally able to reproduce the observed rotation curves, in line with earlier results, and the predicted dark matter haloes are consistent with the cosmological c-M_halo and M_*-M_halo relations. We find that the DC14 and coreNFW models are also in fairly good agreement with each other, even though DC14 tends to predict slightly less extended cores and somewhat more concentrated haloes than coreNFW. While the quality of the fits is generally similar for both halo models, DC14 does perform significantly better than coreNFW for three galaxies. In each of these cases, the problem for coreNFW is related to the connection of the core size to the stellar half-mass radius, although we argue that it is justifiable to relax this connection for NGC 3741. A larger core radius brings the coreNFW model for this galaxy in good agreement with the data and the DC14 model.

§ INTRODUCTION
For several decades the dark matter problem has been one of the main topics in astronomical research. The idea of missing or invisible mass was already proposed in the early 1930s by Jan Oort <cit.> and Fritz Zwicky <cit.>, based on their observations of the motions of stars in the Milky Way disk and galaxies in the Coma cluster. Despite this early notion, the first sound evidence of the presence of dark matter only came in the 1970s, from the analysis of galaxy rotation curves by <cit.>, <cit.>, and <cit.>.
These authors found that the rotation curves of massive galaxies remain flat even at large galactocentric distances and well beyond the stellar disks. This could not be explained by the Newtonian gravity of the visible matter alone, but instead implied an additional extended halo of invisible matter. Furthermore it was found that the rotation curves of low mass and low surface brightness (LSB) galaxies show a slow, almost linear rise in the centre after subtraction of the baryonic contributions. To match this observed behaviour, empirical models of the dark matter distribution in galaxies typically have a central constant-density core. These models, such as the pseudo-isothermal sphere, can explain a wide variety of observed rotation curves <cit.>, although they have no physical basis. On the other hand, dark matter only simulations of the structure formation in the Universe consistently find dark matter haloes with a central cusp <cit.>. The dark matter density profiles derived from these simulations, however, give poor fits to the rotation curves of dwarf galaxies <cit.>. This cusp-core controversy has been one of the major problems of ΛCDM for the past two decades. In the early years rotation curves were often derived in an overly simplistic way from poorly sampled velocity fields or even one-dimensional long-slit observations. Because both physical effects, such as non-circular motions and pressure support, and observational biases, such as beam smearing, often affect the observed kinematics in the central parts of galaxies, there has long been a discussion regarding whether the observed rotation curves are actually reliable and truly trace the gravitational potential of a galaxy <cit.>. Since the 1990s, however, both the quality of the observations and the analysis techniques have vastly improved and although the discussion still persists today <cit.>, there is growing consensus that modern rotation curves accurately trace the total gravitational potential in a galaxy, at least for properly selected systems. Despite these improvements and the substantially higher resolution of present-day dark matter only simulations, the discrepancy still persists. An alternative solution to the cusp-core problem is that baryonic processes associated with galaxy formation and evolution also affect the dark matter halo. Various processes have been proposed in this context with different results. On the one hand, condensation of cooling gas towards the centre of a galaxy causes a further contraction of the dark matter halo and a stronger cusp <cit.>. On the other hand, infalling gas clumps can transfer angular momentum to the dark matter via dynamical friction, ultimately resulting in a shallower central profile <cit.>, but the efficiency of this mechanism is still under debate <cit.>. Finally, feedback from supernovae (and AGN activity in high mass galaxies) can induce massive gas outflows that also cause the dark matter halo to expand. <cit.> already investigated this scenario using a highly simplified outflow model and concluded that supernova feedback could have flattened the dark matter cusps in dwarf irregular galaxies, but is unlikely to be effective in more massive systems. In a more detailed study <cit.> found that repeated bursts of star formation alternated by epochs of gas (re-)accretion can indeed gradually transform dark matter cusps into cores in simulated dwarf galaxies. 
This result was later confirmed and extended to somewhat higher mass galaxies by numerous studies <cit.>, although the details of the conclusions sometimes differ. For example, while <cit.> found that supernova feedback can only expand dark matter haloes in galaxies with M_*≳ 10^7 M_⊙, <cit.> concluded that cores also form in lower mass systems if star formation proceeds for long enough. Hydrodynamical simulations therefore seem to suggest that stellar feedback effects can solve the cusp-core controversy. To really confirm this claim, however, the simulated haloes must be compared to real observed rotation curves. For this purpose, two new analytic dark matter density profiles were recently proposed. The DC14 profile was derived by <cit.> from their simulated galaxies in the mass range 9.94 × 10^9 < M_halo/M_⊙ < 7.8 × 10^11 and was recently tested on samples of observed rotation curves by <cit.> and <cit.>. Both studies concluded that the DC14 profile can indeed reproduce the observed rotation curves. <cit.> also found that the derived dark matter halo parameters are in excellent agreement with the cosmological stellar mass-halo mass and halo mass-concentration relations. <cit.>, on the other hand, concluded the opposite. He found halo masses significantly below the cosmological prediction for galaxies with stellar masses M_*≲ 10^9 M_⊙, and a huge scatter of almost two orders of magnitude in the derived halo concentrations. The coreNFW model was derived by <cit.> from simulations of tiny dwarf galaxies in the mass range M_halo∼ 10^7 - 10^9 M_⊙. <cit.> also recently tested this profile on a set of dwarf galaxies extracted from the Little THINGS sample <cit.> with halo masses ranging from a few 10^8 M_⊙ to about 2 × 10^10 M_⊙. These authors also found good fits to the observed rotation curves and good agreement with the stellar mass-halo mass relation. Although the physical mechanism that drives core formation is essentially the same for both halo models, they follow a somewhat different approach. In the DC14 model the stellar mass is used as a measure for the amount of supernova feedback energy that has become available and the shape of the dark matter halo is fully determined by M_*/M_halo. In the coreNFW model, on the other hand, the core strength is regulated by the total time that the galaxy has been forming stars, while the radial extent of the core is linked to the radial distribution of the stars.The coreNFW and DC14 models also probe different mass ranges and are in this sense complementary. While it is probably not meaningful to apply the DC14 model to the rotation curves of tiny dwarf galaxies or to extrapolate coreNFW to Milky Way-size systems, the two models should both be appropriate for halo masses of the order 5 × 10^9 M_⊙≲ M_halo≲ 5 × 10^10 M_⊙. Therefore, if DC14 and coreNFW both correctly describe dark matter core formation, their predictions should agree in this overlapping mass range. In this work we apply the coreNFW and DC14 models to a selection of 13 galaxy rotation curves with halo masses of 4 × 10^9 M_⊙ to 7 × 10^10 M_⊙, and compare their predictions. The DC14 halo is further applied to an additional 7 rotation curves with halo masses up to 9 × 10^11 M_⊙ to investigate the agreement with the cosmological scaling relations. This paper is organized as follows: in Section <ref> we describe the selection of our sample of rotation curves. The principle of mass modelling and the details of the two dark matter halo models are explained in Section <ref>. 
Our modelling strategy is described in Section <ref>, and the results are presented in Section <ref>. Finally, we list our main conclusions in Section <ref>.

§ SAMPLE SELECTION
Our sample of rotation curves was compiled mainly from the Little THINGS <cit.>, THINGS <cit.>, and SPARC <cit.> datasets. For Little THINGS we used the publicly available rotation curve data from <cit.> and took the surface density and surface brightness profiles of the atomic gas and the stars from <cit.>. The latter were kindly provided to us by S.H. Oh and D. Hunter. The THINGS data, both the rotation curves and the baryonic profiles, were kindly made available by E. de Blok. Finally, the SPARC data are publicly available and can be downloaded from the SPARC website[http://astroweb.cwru.edu/SPARC/]. For all three datasets the stellar surface brightness profiles are based on observations at 3.6 μm. From Little THINGS we selected only the galaxies that are marked as `clean dIrrs' by <cit.> and further eliminated NGC 6822 and DDO 210. For the former no rotation curve is presented by <cit.>, while the rotation curve of the latter is highly uncertain and completely dominated by the asymmetric drift correction. This leaves 9 galaxies from the Little THINGS sample. <cit.> present rotation curves of 19 THINGS galaxies, from which we eliminated 10 because of poor sampling of the rising part of the rotation curve or strong non-circular motions. The gap in mass between the Little THINGS and THINGS galaxies is bridged with a set of low-mass systems from the SPARC dataset. These are selected according to the following criteria: a total 3.6 μm luminosity L_3.6 ≲ 10^9 L_⊙, a rotation curve with quality label 1, a reliable distance estimate, an inclination between 40° and 80°, and little beam smearing. These criteria lead to an additional 5 galaxies. Finally, we also included the rotation curve of M33 <cit.>, a galaxy previously claimed to have a strongly cusped dark matter halo <cit.>. For this galaxy <cit.> have not reported the stellar surface brightness profile, but have instead directly derived the surface density profile of the stars from a pixel-by-pixel population synthesis analysis. Our total sample thus comprises 24 rotation curves. For three of the THINGS galaxies in this sample, however, the rotation curves might actually be unreliable owing to a substantial bar (NGC 925), a poorly constrained distance (NGC 3521), or an uncertain inclination in the outer half (NGC 7793). Since <cit.> have included these rotation curves in their analysis, we also kept them in our sample, but marked them as problematic and only showed their fits without including them in the further analysis. UGC 8490, from the SPARC dataset, was similarly marked problematic because <cit.> have discovered a strong increase in its star formation rate over the past 100 Myr. This indicates that UGC 8490 might be experiencing a starburst, which may severely bias its kinematics. Finally, <cit.> have remarked that their rotation curves of UGC 8508 and DDO 126 are unreliable up to a radius of 0.5 and 1.43 kpc, respectively, while <cit.> noted that elliptical streaming motions could be affecting the innermost data points of their rotation curve of NGC 3741, up to a radius of 1.2 kpc. <cit.> additionally found that the inner seven data points of the rotation curve of NGC 3109 could be slightly underestimated because of weak beam smearing. These same data points also have suspiciously small error bars.
Since only the inner parts of the rotation curves are affected, we still included these galaxies in our `good' sample, but excluded the affected data points from the fits. The rotation curves are still sampled well enough by the remaining points. An overview of our complete sample is given in Table <ref>. For the galaxies from the THINGS and SPARC datasets we used distances from the Cosmicflows-2 catalogue <cit.>. For M33 we took over the distance from <cit.> and for the Little THINGS galaxies we used the distances from <cit.>. § ROTATION CURVE DECOMPOSITION AND HALO MODELSIn a disk galaxy the inward gravitational force that pulls a particle towards the centre is balanced by the outward centripetal acceleration from its rotation. The total gravitational potential acting on this particle is the sum of the potentials from the individual components: gas, stars, and dark matter. We can therefore writea_cpt = a_grav = a_gas + a_* + a_dm.Since the centripetal acceleration is proportional to the square of the circular velocity, this can be re-written asv^2_c = v^2_gas + v^2_* + v^2_dm,where v_gas, v_*, and v_dm are the circular velocities needed to balance the gravitational force exerted by the gas, stars, and dark matter, respectively. These are of course related to the mass distributions of the individual components. For the stars, however, the conversion between the observed luminosity and the mass is uncertain. The mass-to-light ratio Υ is therefore isolated as an unknown parameter in equation <ref>. In addition, the distribution of the gas often shows a hole in the centre. Inside this hole the gravitational pull from the gas is directed outward, giving a negative contribution to the total circular velocity. A better formulation of equation <ref> is thereforev^2_c = v_gas |v_gas| + Υ_*v^2_* + v^2_dm,orv^2_c = v_gas |v_gas| + Υ_*,B v^2_*,B + Υ_*,D v^2_*,D + v^2_dmif the stellar distribution is decomposed in bulge and disk components. The value v_* is now the circular velocity from the stars for a mass-to-light ratio of 1. Equations <ref> and <ref> form the basis for the mass modelling performed in this work. The total circular velocity v_c is measured by the rotation curve (although, see section <ref>), while the gas and stellar circular velocities v_gas and v_* are derived from their observed surface brightness profiles. For this purpose a thin disk geometry is generally assumed for the gas. The stellar distribution is usually modelled as a thick disk with an exponential or sech^2 profile in the vertical direction. This leaves the stellar mass-to-light ratio(s) and dark matter contribution v^2_dm as the only unknowns. For the latter we use two different parameterizations: the coreNFW halo and DC14 halo. We express both these parameterizations in terms of the virial radius and virial mass. The former is defined as the radius inside which the average density of the dark matter halo is equal to Δ times the critical density of the Universe ρ_crit, where Δ and ρ_crit depend on the assumed cosmology. The virial mass is simply the enclosed mass at the virial radius,M_vir = 4/3πr^3_vir Δ ρ_crit. 
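The decomposition above is straightforward to implement. A minimal Python sketch (the function names are our own and the quantities are assumed to be in mutually consistent units, e.g. velocities in km s^-1, masses in M_⊙ and radii in kpc; the default Δ and ρ_crit are the WMAP3 values adopted below) could read:

```python
import numpy as np

def v_circ_total(v_gas, v_star, v_dm, upsilon_star):
    """Total circular velocity: v_c^2 = v_gas|v_gas| + Upsilon_* v_*^2 + v_dm^2.
    v_gas may be negative inside a central hole in the gas distribution."""
    v_c_sq = v_gas * np.abs(v_gas) + upsilon_star * v_star**2 + v_dm**2
    return np.sqrt(v_c_sq)

def r_vir(M_vir, Delta=93.6, rho_crit=147.896):
    """Virial radius in kpc, inverting M_vir = (4/3) pi r_vir^3 Delta rho_crit,
    with rho_crit in M_sun kpc^-3 and M_vir in M_sun."""
    return (3.0 * M_vir / (4.0 * np.pi * Delta * rho_crit))**(1.0 / 3.0)
```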
§.§ DC14

The DC14 profile is formulated by <cit.> as a special case of the general and very flexible (α, β, γ) profile <cit.>

ρ(r) = ρ_s / { (r/r_s)^γ [1+(r/r_s)^α]^(β-γ)/α }.

At small and large radii this profile follows a power law with slopes γ and β, respectively, and the sharpness of the transition between these two regimes is governed by α. This profile reduces to a simple Navarro-Frenk-White (NFW) profile for (α, β, γ) = (1, 3, 1), and the frequently used pseudo-isothermal halo is recovered when (α, β, γ) = (2, 2, 0). <cit.> have recently used this profile in its most general form to model the dark matter haloes of M33 and a sample of THINGS galaxies. Starting from an NFW profile, but in the general formulation of equation <ref>, <cit.> allow the modification of the inner slope by stellar feedback by expressing the shape parameters α, β, and γ as functions of the integrated star formation efficiency M_*/M_halo as follows:

α = 2.94 - log_10[(10^(X+2.33))^-1.08 + (10^(X+2.33))^2.29]
β = 4.23 + 1.34X + 0.26X^2
γ = -0.06 + log_10[(10^(X+2.56))^-0.68 + (10^(X+2.56))],

with X = log_10(M_*/M_halo). These expressions are only valid for -4.1 < X < -1.3, which is the range probed by the simulations of <cit.>. At lower values of X, too few stars form to modify the dark matter halo. On the other hand, at X > -1.3 (corresponding to halo masses ≳ 10^12 M_⊙) processes not included in the simulations, such as AGN feedback, can start to play a role as well.

The variation of α, β, and γ as a function of X is shown in Fig. <ref>. The inner log slope γ first decreases with increasing X, since a higher stellar-to-halo mass ratio implies more energy input from supernova feedback. However, it reaches a minimum at X ∼ -2.6 and goes back up at higher values of X. The reason for this turnover is the increasing gravitational potential of the stars, which at a certain point starts to dominate the feedback and pulls the dark matter back towards the centre. In the DC14 formalism more star formation therefore does not monotonically result in ever stronger cores.

For an NFW halo the concentration is defined as c = r_vir/r_s, where the scale radius r_s is equal to r_-2, the radius at which the slope of the density profile becomes -2. For the (α, β, γ) profile, the meaning of r_s depends on the values of α, β, and γ, with

r_-2 = [(2-γ)/(β-2)]^(1/α) r_s.

<cit.> therefore define the concentration of their dark matter haloes as

c_vir = r_vir/r_-2.

The concentration of the original, unmodified NFW halo can be recovered from this as

c_NFW = c_vir / [1.0 + 0.00003 e^(3.4(X+4.5))],

with again X = log_10(M_*/M_halo). It is this concentration that should be used when comparing the DC14 halo from a fit to a rotation curve to, for example, the mass-concentration relation.
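The DC14 relations above can be collected into a few short helper functions. The following is a minimal sketch (names of our own choosing):

```python
import numpy as np

def dc14_shape_parameters(x):
    """DC14 (alpha, beta, gamma) for X = log10(M*/M_halo),
    valid for -4.1 < X < -1.3 as quoted in the text."""
    alpha = 2.94 - np.log10((10**(x + 2.33))**-1.08 + (10**(x + 2.33))**2.29)
    beta = 4.23 + 1.34 * x + 0.26 * x**2
    gamma = -0.06 + np.log10((10**(x + 2.56))**-0.68 + (10**(x + 2.56)))
    return alpha, beta, gamma

def dc14_density(r, rho_s, r_s, x):
    """(alpha, beta, gamma) density profile with DC14 shape parameters."""
    a, b, g = dc14_shape_parameters(x)
    return rho_s / ((r / r_s)**g * (1.0 + (r / r_s)**a)**((b - g) / a))

def c_nfw_from_c_vir(c_vir, x):
    """Concentration of the original, unmodified NFW halo."""
    return c_vir / (1.0 + 0.00003 * np.exp(3.4 * (x + 4.5)))
```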
§.§ coreNFW

A coreNFW <cit.> halo is essentially an NFW halo with the inner part modified by a spherically symmetric function f^n that models the effects of supernova feedback. Practically, this modification is expressed at the level of the enclosed mass. For an ordinary NFW halo profile <cit.>

ρ_NFW(r) = ρ_s / [(r/r_s) (1+r/r_s)^2]

with concentration

c = r_vir / r_s,

the enclosed mass at a radius r is given by

M_NFW(<r) = M_vir [ln(1+r/r_s) - (r/r_s)/(1+r/r_s)] / [ln(1+c) - c/(1+c)] = M_vir g_c [ln(1+r/r_s) - (r/r_s)(1+r/r_s)^-1],

where g_c = [ln(1+c) - c/(1+c)]^-1. The coreNFW profile is then defined as

M_cNFW(<r) = M_NFW(<r) f^n(r),

with

f(r) = tanh(r/r_c).

The radial extent of the core is determined by the core radius r_c, which <cit.> relate to the stellar half-mass radius as r_c = η r_1/2, with an optimal value of 1.75 for the fitting parameter η. The strength of the core is governed by the parameter n, which ranges between 0 < n ≤ 1 and is defined as

n = tanh(κ t_SF / t_dyn).

Here κ is again a fitting parameter and the star formation time t_SF is the total time that the galaxy has been forming stars. The dynamical time t_dyn is the duration of one circular orbit at the scale radius in the unmodified NFW halo:

t_dyn = 2π r_s / v_NFW(r_s) = 2π sqrt(r^3_s / (G M_NFW(<r_s))).

The longer stars have been forming, the larger n and the stronger the core. On the other hand, the bigger the original dark matter halo, the smaller n and the more difficult it is to form a core. Following <cit.>, we set κ = 0.04 and choose t_SF = 14 Gyr.
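A minimal sketch of this mass profile in code, before we turn to the asymmetric drift correction (function names and the unit conversion are ours):

```python
import numpy as np

G = 4.30091e-6            # G in kpc (km/s)^2 / Msun
KPC_KMS_TO_GYR = 0.9785   # 1 kpc/(km/s) expressed in Gyr

def m_nfw(r, mvir, r_s, c):
    """Enclosed NFW mass M_NFW(<r), with g_c = [ln(1+c) - c/(1+c)]^-1."""
    g_c = 1.0 / (np.log(1.0 + c) - c / (1.0 + c))
    return mvir * g_c * (np.log(1.0 + r / r_s) - (r / r_s) / (1.0 + r / r_s))

def m_corenfw(r, mvir, r_s, c, r_half, t_sf=14.0, eta=1.75, kappa=0.04):
    """coreNFW enclosed mass, M_NFW(<r) * f(r)^n with f = tanh(r/r_c).

    Radii in kpc, masses in Msun, times in Gyr; r_c = eta * r_half and
    n = tanh(kappa * t_SF / t_dyn), as defined in the text.
    """
    r_c = eta * r_half
    t_dyn = 2.0 * np.pi * np.sqrt(r_s**3 / (G * m_nfw(r_s, mvir, r_s, c)))
    t_dyn *= KPC_KMS_TO_GYR          # convert to Gyr before forming the ratio
    n = np.tanh(kappa * t_sf / t_dyn)
    return m_nfw(r, mvir, r_s, c) * np.tanh(r / r_c)**n
```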
§.§ Asymmetric drift correction

The gravitational attraction from the gas, stars, and dark matter is in fact not balanced solely by circular motion, but also, for a small part, by the internal pressure of the gas. The observed rotation velocity v_rot is therefore not exactly equal to the circular velocity v_c from equation <ref>. Instead it is given by

v^2_rot = v^2_c + [R/ρ ∂(ρ σ^2_R)/∂R + σ^2_R - σ^2_ϕ + R ∂(v_R v_z)/∂z]

<cit.>, where ρ and σ are the density and velocity dispersion of the gas. The asymmetric drift correction (the term inside the square brackets) is usually simplified under the assumptions that the velocity dispersion is isotropic (σ_R = σ_ϕ), the velocity ellipsoid is aligned with the cylindrical coordinate system (v_R v_z = 0), and the vertical scale height does not change much with radius. This leads to

v^2_c = v^2_rot - (R/Σ) ∂(Σ σ^2)/∂R,

where Σ is the surface density of the gas. The observationally derived radial Σσ^2 profile is typically rather rugged, leading to sometimes strong and unphysical fluctuations in its derivative. To avoid this, a smooth function is fitted to the profile and the derivative is determined analytically. The simplifications involved in deriving equation <ref> limit its accuracy. As a consequence, equation <ref> only provides an order of magnitude estimate of the correction. For the rotation curves taken from the Little THINGS and SPARC datasets the asymmetric drift correction was already taken into account by the authors. On the other hand, <cit.> and <cit.> did not consider asymmetric drift for their THINGS and M33 rotation curves. We therefore evaluated this correction based on equation <ref>. Depending on the shape of the Σσ^2 profile, we used one of the following analytic functions:

Σσ^2(R) = I_0 (R_0+1)/(R_0 + e^(αR))

for a profile with a central core <cit.>, and

Σσ^2(R) = I_0 (1+R/R_0)^α e^(-R/R_0)

for a profile showing a hole in the centre <cit.>. In the inner halves of the rotation curves the derived corrections are consistently much smaller than the error bars and generally only of the order of 1 km s^-1 or less. For four galaxies the corrections become more substantial (of order 5-10 km s^-1) near the outer edge of the rotation curve. However, because these larger corrections occur far from the centre, we found that they have only little effect on our fits and do not change any of our conclusions. In addition, the agreement between the Σσ^2 profile and the analytic function is often not very good in these regions, making the corrections uncertain. This is illustrated for NGC 3621 in Fig. <ref>. Because of this uncertainty and the very limited effect the corrections have on our results, we decided to use the original, uncorrected rotation curves in our analysis.

§ MARKOV CHAIN MONTE CARLO FITTING

§.§ emcee

The dynamical models were fitted to the rotation curves with emcee[http://dan.iel.fm/emcee/current/] <cit.>, which is an open-source Python implementation of the affine invariant MCMC ensemble sampler from <cit.>. Markov Chain Monte Carlo, or MCMC <cit.>, is a sampling technique that has been applied to the decomposition of rotation curves for several years <cit.>. It is more efficient in sampling the parameter space than the fitting techniques used in earlier works and has the big advantage that it returns the full multidimensional probability distribution of all the parameters instead of only the best-fit model. In addition, MCMC allows us to include physical knowledge about the parameters in the fits via so-called priors that are combined with the likelihood function. The emcee algorithm explores the N-dimensional parameter space with different, randomly initialized walkers that each make their own MCMC chain. An initial burn-in phase is used to allow the walkers to move to the relevant high-likelihood areas of the parameter space. After this the walkers are reinitialized at their current positions and the actual MCMC chains are made. As a last step the chains of all the walkers are combined to form the final MCMC chain. For every fit we used 100 walkers, each taking 2000 steps of which the first 1000 were used as burn-in. These numbers are in line with the emcee recommendations <cit.> and the values typically used in other works <cit.>, and ensured good convergence of our fits (see below). Our likelihood function is

ℒ = e^(-χ^2/2).

For a multi-modal posterior distribution, part of the walkers can get stuck in isolated low probability modes if they are initialized randomly over the full range of the parameter space (within the imposed boundaries). This generates numerous irrelevant peaks in the retrieved posterior distribution. We therefore performed each fit in two iterations. First the walkers were initialized randomly over the full relevant range of parameter space. The different peaks in the posterior distribution were then investigated to find the mode with the highest likelihood. Next, as a second iteration, we redid the fit with the walkers now initialized in a small Gaussian ball centred on this mode, with a width equal to 1 percent of the allowed range for each parameter. The parameter values that are used in the figures below and reported in Table <ref> correspond to the maximum likelihood model for each fit.

For good performance, an MCMC sampler should be run for at least a few (about 10) autocorrelation times and should have an acceptance fraction between 0.2 and 0.5 <cit.>. With 1000 steps taken by each walker, the first condition was well met for all the fits. Appropriately setting the emcee proposal scale parameter to a value of 2 or 3 ensured that the second condition was also met.
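A minimal sketch of this two-iteration setup with emcee follows. Here model_v_c and the data arrays r, v, dv are assumed stand-ins for the full rotation-curve model and the measurements, and the parameter bounds anticipate the coreNFW ranges listed in the next subsection:

```python
import numpy as np
import emcee

ndim = 3                                  # (log10 M_vir, c, Upsilon)
lo = np.array([8.0, 1.0, 0.3])            # flat prior bounds
hi = np.array([14.0, 100.0, 0.8])

def log_prob(theta, r, v_obs, v_err):
    """ln L = -chi^2/2 inside the flat bounds; model_v_c is assumed."""
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf
    chi2 = np.sum(((v_obs - model_v_c(r, theta)) / v_err) ** 2)
    return -0.5 * chi2

nwalkers, nburn, nprod = 100, 1000, 1000

# Iteration 1: walkers spread randomly over the full allowed range.
p0 = lo + (hi - lo) * np.random.rand(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(r, v, dv),
                                moves=emcee.moves.StretchMove(a=2.0))
state = sampler.run_mcmc(p0, nburn)       # burn-in
sampler.reset()
sampler.run_mcmc(state, nprod)

# Iteration 2: restart in a Gaussian ball (sigma = 1% of the allowed
# range) around the highest-likelihood mode found in iteration 1.
best = sampler.get_chain(flat=True)[np.argmax(sampler.get_log_prob(flat=True))]
p0 = best + 0.01 * (hi - lo) * np.random.randn(nwalkers, ndim)
sampler.reset()
sampler.run_mcmc(p0, nburn + nprod)
```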
Finally, we checked the convergence of the MCMC chains by performing each fit three times and evaluating the Gelman-Rubin R-1 values with the GetDist[https://pypi.python.org/pypi/GetDist/] python package. These values were well below 1 for all the fits, indicating good convergence.

§.§ Priors and parameter ranges

The coreNFW halo fits were performed with log_10 M_vir, c, and Υ (or Υ_d and Υ_b) as free parameters. We use the log of M_vir instead of M_vir itself as a parameter in the fits because of the large dynamic range involved. Flat priors were assumed for all free parameters. log_10 M_vir and c were loosely constrained inside 8 < log_10(M_vir/M_⊙) < 14 and 1 < c < 100. The 3.6 μm mass-to-light ratio was confined to the range 0.3 < Υ_3.6 < 0.8, as motivated by the constraints from <cit.> and <cit.>. For M33 we allow the initial stellar mass to vary by a factor 0.758 < Υ < 1.319 based on the uncertainty that is mentioned in Section 6 of <cit.>. Following <cit.>, η, κ, and t_SF were kept fixed at 1.75, 0.04, and 14 Gyr, respectively.

For the fits with the DC14 halo, we left V_vir, c_vir, and Υ (or Υ_d and Υ_b) free and again used a flat prior for each of these. Following <cit.> we used wide ranges of 10 < V_vir/(km s^-1) < 500 and 1 < c_vir < 100 for the first two parameters and the same range as before for the mass-to-light ratio: 0.3 < Υ_3.6 < 0.8 (and 0.758 < Υ < 1.319 for M33). Since our goal is to find models that fit the rotation curves well and yield physically acceptable dark matter haloes at the same time, we further imposed the cosmological halo mass-concentration and stellar mass-halo mass relations as log-normal priors in the fits. For the DC14 halo the shape parameters α, β, and γ are expressed as a function of log_10(M_*/M_halo), where M_halo = M_vir. Hence, α, β, and γ depend on the definition of the virial mass, which depends on the assumed cosmology. Since <cit.> have assumed a WMAP3 cosmology, we did the same in our fits and we used the M_halo-c relation from <cit.> that was derived under this cosmology. The M_*-M_halo relation from <cit.> also used the WMAP3 values. However, this relation was derived from abundance matching using the SDSS DR3 stellar mass function for halo masses down to ∼ 3 × 10^10 M_⊙ and is an extrapolation at lower masses. As pointed out by <cit.>, this extrapolation is not consistent with the newer and deeper SDSS data, which means that the <cit.> relation is actually not reliable for halo masses below ∼ 3 × 10^10 M_⊙. Indeed, the stellar mass-halo mass relation from <cit.>, which is based on the newer SDSS data and the cosmological parameters used in the Bolshoi simulations <cit.>, is much shallower and diverges significantly from the <cit.> relation at low halo masses. Since a number of the galaxies in our sample fall in this low mass regime, we opted to use the relation from <cit.> rather than that from <cit.> as a prior in our fits. For coreNFW the parameterization is independent of the assumed cosmology and depends only on the global original NFW profile (i.e. before alteration by stellar feedback) and on the stellar half-mass radius and total star formation time.
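For illustration, the combination of flat parameter ranges and log-normal scaling-relation priors described above can be sketched as follows. The helpers c_of_m and mstar_of_m stand in for the adopted M_halo-c and M_*-M_halo relations, and the scatter values are illustrative placeholders, not numbers taken from the text:

```python
import numpy as np

def log_prior(theta, l36, sig_c=0.11, sig_m=0.2):
    """Flat priors inside the quoted ranges plus log-normal priors on
    the cosmological scaling relations (coreNFW parameterization).

    theta = (log10 M_vir, c, Upsilon); l36 is the 3.6 micron luminosity
    in Lsun. c_of_m, mstar_of_m, sig_c and sig_m are assumptions.
    """
    log10_mvir, c, ups = theta
    if not (8.0 < log10_mvir < 14.0 and 1.0 < c < 100.0 and 0.3 < ups < 0.8):
        return -np.inf
    log10_mstar = np.log10(ups * l36)     # stellar mass from the 3.6 micron light
    lp = -0.5 * ((np.log10(c) - np.log10(c_of_m(log10_mvir))) / sig_c) ** 2
    lp -= 0.5 * ((log10_mstar - mstar_of_m(log10_mvir)) / sig_m) ** 2
    return lp
```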
§.§ Uncertainties

Using the GetDist package, the uncertainties for the different parameters were determined from the multidimensional 68% confidence region of the full posterior distribution, as the extremal values of the projection of that region onto each parameter axis. As such the error bars give a good indication of how tight the constraints are for a certain parameter, but they should not be over-interpreted as the absolute range of good models. Indeed, if the fit quality of the best-fit model is very high, many models outside of the N-dimensional 68% confidence region often still provide an acceptable fit to the data. On the other hand, if we plot, for example, the halo mass versus its concentration, the area suggested by the two (orthogonal) error bars is often larger than the actual area to which the models from the MCMC chain are confined (i.e. the projection of the multidimensional confidence region onto the M_halo-c plane). This is illustrated in Figure <ref> for the best-fit DC14 model of UGC 7603.

§.§ χ^2_red and fit quality

In the discussion of our results we express the quality of the fits to the rotation curves in terms of the reduced chi-squared statistic (χ^2_red). In the ideal case where the uncertainties on all the rotation curves are Gaussian and derived in a uniform way, and where all the points of a rotation curve have equal importance, this would be a good measure to compare the fit qualities for all the galaxies in our sample. In reality, however, our rotation curves are compiled from the literature, with differing data quality and techniques used to estimate the error bars. In addition, some rotation curves keep rising up to the last point, whereas others, for the more massive galaxies, include a large flat part. The latter is generally easier to reproduce and can have a large impact on the χ^2_red value of a fit, but is at the same time much less important in the analysis of core formation. For these reasons the χ^2_red values of our fits are only meaningful to compare the quality of different fits for the same galaxy and not to compare fits for different galaxies.

§ RESULTS

Following the procedure outlined in section <ref> we have fitted DC14 halo models to each of the rotation curves in our sample and coreNFW models to the rotation curves of the Little THINGS and SPARC galaxies. The best-fit parameters and χ^2_red values of these fits are listed in Table <ref>. The results of the fits are discussed in the sections below.

§.§ DC14

Figure <ref> shows the individual DC14 models for the galaxies in our `good' sample. As can be seen, the DC14 halo generally provides excellent fits to the rotation curves, confirming the recent results from both <cit.> and <cit.>. The only clear exception to this is the rotation curve of DDO 168, where the model overestimates the data in the inner part. However, the inner three points of the rotation curve are in fact already well accounted for by the gravitational potential of the gas alone, so any model with a non-zero contribution of the dark matter at these radii will overestimate the data. Similar arguments also hold for the very inner regions of NGC 2366 and NGC 3198. In Figure <ref> we compare the best-fit parameter values from our models with the cosmological halo mass-concentration and stellar mass-halo mass relations. These relations were derived from dark matter-only simulations (in combination with abundance matching). To account for this in the comparison, we scale our inferred halo masses as M_vir/(1-f_b), where f_b is the Universal baryon fraction <cit.>.
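Both quantities used in this comparison are simple to compute; a minimal sketch (the default baryon fraction below is an illustrative value, not one quoted in the text):

```python
import numpy as np

def chi2_red(v_obs, v_mod, v_err, n_par):
    """Reduced chi-squared of a rotation-curve fit with n_par free parameters."""
    return np.sum(((v_obs - v_mod) / v_err) ** 2) / (v_obs.size - n_par)

def m_dmo(mvir, f_b=0.167):
    """Dark-matter-only equivalent of a fitted halo mass, M_vir/(1-f_b)."""
    return mvir / (1.0 - f_b)
```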
The fits show excellent agreement with both scaling relations, although, somewhat surprisingly, our models seem to favour the M_halo-c relation from <cit.>, which is based on the Planck cosmology, over the <cit.> relation that was used as a prior in the fits. The only galaxy that falls significantly outside the 2σ scatter of the <cit.> relation is NGC 3198, but models with the concentration forced inside this scatter actually still provide a good fit to the data. Although the sample size is limited, our analysis thus seems to confirm the recent conclusion by <cit.> that the DC14 halo is in good agreement with ΛCDM and can recover the predicted mass-concentration and stellar mass-halo mass relations in a sample of observed rotation curves. The latter might seem obvious since we use priors that `push' our fits towards these relations. The key point here, however, is that DC14 can produce dark matter haloes that simultaneously provide good fits to the rotation curves and agree with the scaling relations; this is something that, for example, the NFW halo cannot do. The priors do not `make' physical solutions according to the scaling relations, but merely act as a filter to retain only the most physical solutions if they exist. The fact that our models prefer the <cit.> relation over the <cit.> relation that was used as a prior is a good illustration of this.

Our analysis contradicts the results from <cit.>. We did not recover the huge range of halo concentrations that he found, and we found no evidence for his claim that galaxies with M_* ≲ 10^9 M_⊙ often reside in less massive haloes than predicted. It should be noted here that the modelling strategy from <cit.> is somewhat different from our approach and that used by <cit.>. <cit.> used multi-nested sampling <cit.> to fit his models and did not assume any physical priors between the parameters. The lack of priors results in posterior distributions that often contain multiple modes (or peaks). The mode with the lowest halo mass was selected as the final mode, and galaxies for which the modes were too wide or not well separated were discarded. To investigate the effects of these differences, we performed a second set of fits without physical priors and used the criteria from <cit.> to select the final models. The results are shown in Fig. <ref>. CVnIdwA, UGC 8508, NGC 3741, NGC 3109, IC 2574, and NGC 2976 were discarded because their posterior distributions showed multiple blended modes; this was not the case in the original fits because the priors suppressed the additional modes. The best-fit model for WLM has shifted significantly. It is still consistent with the mass-concentration relation but now lies considerably below the stellar mass-halo mass relation. For the remaining galaxies the new models are essentially similar to the previous models or consistent within the uncertainties. The agreement with the scaling relations is still remarkably good, and although the number of galaxies is limited, we find no evidence for the strong deviations that were reported by <cit.>.

§.§ coreNFW

For the coreNFW halo we limited the sample to the lower mass galaxies with M_halo ≲ 7 × 10^10 M_⊙. The individual coreNFW fits for these galaxies are shown in Figure <ref>. The models again provide a decent description of the data, with no clearly bad fits except for DDO 168. The fit results are compared to the cosmological scaling relations in Fig. <ref>.
The agreement is very good and our models again seem to prefer the mass-concentration relation from <cit.> over that from <cit.>. The coreNFW halo was also fitted to the same rotation curves by <cit.>. Their results for the individual galaxies sometimes differ significantly from what is found in this work: <cit.> have generally found somewhat lower halo and stellar masses, higher concentrations, and a better fit quality (lower χ^2_red). These differences are, however, not unexpected. Indeed, while we derived the stellar and gas contributions from the measured surface density profiles from <cit.>, <cit.> used smooth exponential profiles based on <cit.> and <cit.>. In addition, <cit.> used the M_200 formalism while we used the virial mass M_vir, so the halo parameters given in their Table 2 should not be compared directly to our values in Table <ref>. For an identical dark matter halo, our virial mass and concentration should be somewhat higher than the M_200 and c_200 values from <cit.>. The fact that we generally find lower concentrations probably comes from the fact that <cit.> did not use a mass-concentration prior, but instead set the boundaries for the concentration range based on the M_vir-c relation from <cit.> and the extremities of their M_200 range. The rotation curve of NGC 2366 that was reported by <cit.> also seems somewhat different from the curve that was used by (or at least shown in Fig. A2 of) <cit.>. Despite these individual differences, the main conclusions remain the same. Both works generally find acceptable fits to the data and a good agreement with the stellar mass-halo mass relation.

§.§ coreNFW versus DC14

The coreNFW halo model determines the strength of the dark matter core from the total time that the galaxy has been forming stars, while the radial extent of the core is related to the spatial distribution of the stars via the stellar half-mass radius. The stellar mass of the galaxy is not used. This is an important difference from the DC14 profile. On the one hand, DC14 uses M_* instead of t_SF as a measure of the amount of supernova feedback energy that has become available to form a core. On the other hand, this model also uses the additional gravitational potential due to M_* as a mechanism to counteract core formation. In addition, the coreNFW profile is essentially a pure NFW profile with its inner part flattened by feedback, while for DC14 the entire shape of the profile (i.e. γ, α, and β) changes as a function of the stellar-to-halo mass ratio. Given these rather different approaches, it is interesting to investigate how the dark matter haloes inferred by the coreNFW and DC14 halo models compare across our sample. Since the physical mechanism that drives core formation is essentially the same for both models (supernova feedback after bursts of star formation), they should in principle give similar results in the overlapping halo mass range where they are both appropriate.

§.§.§ Fit quality

From Table <ref> we see that the DC14 and coreNFW models generally give a similar fit quality. The DC14 model typically has a slightly lower χ^2_red, with only WLM and UGCA 442 breaking this trend, but in most cases both models represent the data well enough and it is not meaningful to classify one as better than the other. The latter also holds for DDO 168, which is poorly fitted by both models. As discussed in Section <ref> this is probably caused by a problem with the data. The only cases in which DC14 performs better than coreNFW are NGC 3741, DDO 87, and DDO 154.
For NGC 3741 and DDO 87 the rotation curves are fitted significantly better by the DC14 model than by the coreNFW model, although the latter is also still acceptable. For DDO 154 the fit quality is good in both cases, but the coreNFW halo needs an unphysically high concentration to achieve this. coreNFW models with a lower concentration do not fit the data well. In each of these cases the problem seems to be related to the connection between the coreNFW core radius and the stellar half-mass radius. For NGC 3741 the rotation curve suggests an extended dark matter core, while the coreNFW model has a cuspy NFW shape in all but the most central region because of the tiny half-mass radius of the stars. For the other two galaxies the rotation curves require a smaller core and a less `linear' dark matter contribution than derived from R_1/2. These issues can be resolved by making η a free parameter in the coreNFW fits, but this would break with the prescription of <cit.> and in a way make the comparison with the DC14 model unfair.

For NGC 3741 this might be justifiable. Indeed, while the stellar distribution of NGC 3741 is unusually compact, the DC14 model finds sufficient stellar mass (i.e. supernova feedback energy) to form a substantial core that is in agreement with the data. Furthermore, the distribution of the gas is much more extended than that of the stars. Since the gravity of the outflowing gas after a supernova explosion drives the formation of a dark matter core, we could interpret the extended gas distribution in NGC 3741 as a sign that the dark matter core radius is actually larger than that inferred from the half-mass radius of the stars. Therefore we performed a second coreNFW fit for NGC 3741 with η as an additional free parameter (using a flat prior such that 0 < r_c/kpc < 7). The decomposed rotation curve is given in Figure <ref>. With a larger core radius of 4.06 kpc (η = 11.2) the coreNFW model is now in excellent agreement with the data and with the DC14 model. This value of η is significantly above the upper limit η = 2.75 derived by <cit.>. However, this upper limit was derived under the assumption that R_1/2 ∼ 0.015 r_200, which is also significantly larger than the value R_1/2 = 0.341 kpc that is measured from the stellar distribution. For DDO 154 and DDO 87 the stellar distribution does not seem particularly unusual, although the inferred stellar half-mass radii are significantly larger, by factors of 3.9 and 1.8, respectively, than those reported by <cit.>. Nevertheless, this is also the case to some degree for CVnIdwA and DDO 52 (factors of 1.6 and 1.4), for which DC14 and coreNFW give similarly good fits. It is beyond the scope of this work to investigate whether this points to a problem with the <cit.> data, and we limit ourselves to the conclusion that, based on the surface brightness profiles from <cit.>, the DC14 model performs better than the coreNFW model for the rotation curves of DDO 154 and DDO 87.

§.§.§ Best-fit parameters

In Figure <ref> we compare the main parameters of the best-fit DC14 and coreNFW models for the galaxies in our sample with M_halo ≲ 7 × 10^10 M_⊙. With the exception of the unphysically high concentration of the coreNFW model of DDO 154, we find a fairly good agreement between the best-fit parameter values from both models. The stellar and virial masses follow the identity line, albeit with some scatter, and the two models are generally consistent within the error bars.
The DC14 model typically finds somewhat less extended cores (as can be seen by comparing the individual rotation curve decompositions) and more concentrated haloes, although, except for DDO 154, the concentrations are consistent within the errors. For the galaxies where the concentration difference is the highest, the DC14 model typically also finds somewhat lower stellar and halo masses than the coreNFW model, which is consistent with the cosmological scaling relations. The bottom right panel in Fig. <ref> shows the log slope of the dark matter density profiles measured at the innermost point of each rotation curve. This parameter strongly depends on the local shape of the density profile and should not be over-interpreted in the comparison between the two haloes. The main conclusion to be drawn is that both models find relatively cored dark matter haloes for all the galaxies, with inner log slopes that are generally not too far apart and often consistent within the uncertainties. Unlike what is claimed by <cit.>, we therefore find that the DC14 model can still show significant cusp-core transformations for galaxies with M_halo ≲ 10^10 M_⊙.

§.§ Problematic galaxies

For completeness we show the individual fits to the problematic rotation curves in appendix <ref>. Using the flexible (α, β, γ) profile, <cit.> find good fits to the rotation curves of NGC 925 and NGC 7793 and also report tight constraints on the inner log slope of NGC 3521, despite the poor fit of their model to the rising part of the rotation curve. In contrast, we find that the two physically motivated halo models investigated here cannot reproduce any of these rotation curves. For NGC 3521 we even performed additional fits at the much smaller distance of 7.7 Mpc reported by SPARC, but with the same result. Finally, the cuspy rotation curve of UGC 8490 is well reproduced by the DC14 model, but coreNFW strongly underestimates the rising part. However, for both of these models the parameters lie well outside the cosmological scaling relations.

§ CONCLUSIONS

For a sample of 13 galaxies, spanning the mass range M_halo ∼ 4 × 10^9 - 7 × 10^10 M_⊙, we have used MCMC to construct dynamical models of the rotation curves based on two recently proposed dark matter density profiles: the DC14 halo and the coreNFW halo. We further applied the DC14 halo to an additional set of higher mass galaxies with M_halo ∼ 10^11 - 9 × 10^11 M_⊙. The coreNFW and DC14 halo models both use supernova feedback after bursts of star formation to transform primordial dark matter cusps into flatter cores. Although they were derived from simulations in complementary mass ranges (M_halo ∼ 10^7 - 10^9 M_⊙ for coreNFW versus 10^10 - 8 × 10^11 M_⊙ for DC14), both models should be valid for halo masses of 5 × 10^9 M_⊙ ≲ M_halo ≲ 5 × 10^10 M_⊙. With this analysis we investigated whether the coreNFW and DC14 halo models converge to the same solutions in this overlapping mass range and whether their predictions agree with cosmological scaling relations. We found that both models are generally able to reproduce the rotation curves in our sample and find dark matter haloes that are in good agreement with the cosmological M_halo-c and M_*-M_halo relations, alleviating the cusp-core controversy. This confirms the results from <cit.> for coreNFW and from <cit.> for DC14.
On the other hand, we find no evidence of the huge scatter in concentrations or the disagreement of the DC14 predictions with the M_*-M_halo relation that were recently claimed by <cit.>, even if a similar modelling strategy is used. The two models generally give similarly good fits to the rotation curves, although the DC14 model does perform better in three cases. For NGC 3741 and DDO 87 the rotation curves are fitted significantly better by the DC14 model than by the coreNFW model, although the latter is also still acceptable. For DDO 154 the fit quality is good in both cases, but the coreNFW halo needs an unphysically high concentration to achieve this. In each of these cases the problem for coreNFW is related to the connection between the core size and the stellar half-mass radius. For NGC 3741, a galaxy with a very compact stellar distribution but a remarkably extended atomic gas disk, we argue that it is justifiable to relax this connection and use η as a free parameter in the coreNFW fit. With a larger core radius the coreNFW model is in excellent agreement with the data. The DC14 and coreNFW haloes generally converge to (approximately) the same solution, as they should. Both models find cored dark matter haloes, and while DC14 tends to predict somewhat less extended cores and more concentrated haloes, the stellar masses, halo masses, and concentrations from both models are generally comparable and agree within the errors.

We thank the anonymous referee and the language editor for the useful comments and suggestions that significantly improved this paper. We also wish to thank E. de Blok for sharing his rotation curves and surface density profiles of the THINGS galaxies and D. Hunter and S.H. Oh for sharing the stellar and gas surface brightness profiles of the Little THINGS galaxies. F.A., M.B., and G.G. acknowledge the support of the Flemish Fund for Scientific Research (FWO-Vlaanderen). This research has made use of NASA's Astrophysics Data System and the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

§ PROBLEMATIC ROTATION CURVES
Orbital variations and outbursts of the unusual variable star V1129 Centauri[Based partially on observations taken at the Observatório do Pico dos Dias / LNA]

Albert Bruch, Laboratório Nacional de Astrofísica, Rua Estados Unidos, 154, CEP 37504-364, Itajubá - MG, Brazil

(Published in: New Astronomy, Vol. 57, p. 51 – 58 (2017))

The variable star V1129 Cen is classified in the GCVS as being of β Lyr type. Unusual for such stars, it exhibits outbursts roughly once a year, lasting for ∼ 40 days. For this reason, a relationship to the dwarf novae has been suspected. Here, for the first time a detailed analysis of the light curve of the system is presented. Based on observations with high time resolution obtained at the Observatório do Pico dos Dias and on the long term ASAS light curve, the orbital variations of the system are studied. They are dominated by ellipsoidal variations and partial eclipses of a probably slightly evolved F2 star in a binary with an orbital period of 21 h 26 m. Comparison with the characteristics of dwarf novae shows that the observational properties of V1129 Cen can be explained if it is just another dwarf nova, albeit with an unusually bright and early-type mass donor which outshines the accretion disk and the mass gainer to a degree that many normal photometric and spectroscopic hallmarks of cataclysmic variables remain undetected.

Keywords: Stars: binaries: eclipsing – Stars: variables: general – Stars: novae, cataclysmic variables – Stars: dwarf novae – Stars: individual: V1129 Cen

§ INTRODUCTION

Cataclysmic variables (CVs) are binary stars where a Roche-lobe filling late-type component (the secondary) transfers matter via an accretion disk to a white dwarf primary. A particular subclass of CVs are the dwarf novae which occasionally exhibit outbursts with amplitudes of a few magnitudes, lasting from days to weeks. These are caused by a temporary increase of the brightness of the accretion disks in these systems. It may be surprising that even after decades of intense studies of CVs there are still an appreciable number of known or suspected systems, bright enough to be easily observed with comparatively small telescopes, which have not been studied sufficiently for basic parameters to be known with certainty. In some cases even their very class membership still requires confirmation. Therefore, I started a small observing project aimed at a better understanding of these stars. First results have been published by Bruch (2016, 2017a) and Bruch & Diaz (2017). Here, I present time-resolved photometry and a limited amount of spectroscopy of the unusual system V1129 Cen. To these data I add long term observations retrieved from the ASAS-3 data archive (Pojmanski 2002).

V1129 Cen is not a normal CV. In fact, the relationship of the star to the cataclysmic variables is quite unclear. In spite of its high brightness of ∼ 9.7 mag, not many details are known about the star. It is classified as a β Lyr type eclipsing binary in the 17th name list of variable stars (Kazarovets et al. 2008). β Lyr systems are binaries made up of stars in tight or even semi-detached orbits. Their evolutionary state may range from two main sequence stars to a pair with a highly evolved secondary component and a less evolved primary with mass transfer between them (Hoffman et al. 2008). Due to the proximity of the stellar components the light curves are dominated by ellipsoidal variations, often in combination with mutual eclipses.
In the particular case of V1129 Cen, however, apart from variations typical for such stars, recurring at a period of 0.893025 days, S. Otero[The internet links to the corresponding communications cited in the online version of the Ritter & Kolb catalogue (http://varsao.com.ar/NSV_19488.htm) or in Walter et al. (2006) (http://ar.geocities.com/varsao/NSV_19448.htm) appear not to be active any more.] found faint outbursts with a duration of ∼ 40 days recurring on the time scale of one year. The ASAS (Pojmanski 2002) long term light curve contains several such events which reach an amplitude of up to 0.6 mag (upper frame of Fig. <ref>). The spectral type of F2 V of V1129 Cen (Houk 1978) is later than that of the large majority of β Lyr stars but much earlier than that of the donor star in any CV. Unusual for a star of this type, Walter et al. (2006) observed emission of He II λ 4686 Å on 2006, Jan 16.2 UT which, however, was absent on 2006, Jan 19.3 UT. Both of these observations occurred during an outburst, as is shown by the insert in the figure, where the corresponding epochs are marked by vertical lines. The authors leave the question open whether the emission was transient or if the source was eclipsed during the second observation. It is not clear what causes this unusual (for a β Lyr star) behaviour. Ritter & Kolb (2003) have included the star as a possible U Gem type dwarf nova in the on-line version of their catalogue.

If the system indeed contains a dwarf nova or behaves like one, persistent mass transfer through an accretion disk should take place and thus flickering should be expected to be present. Whether this would be observable or not depends on the degree of modulation of the flickering light source and its relative contribution to the total light of this peculiar system. In order to verify the presence of flickering and to investigate the question whether or not the properties of V1129 Cen are compatible with a dwarf nova classification, I observed the star on several occasions in 2014, 2015 and 2016. Because of their superior quality I will concentrate here on the 2016 light curves. These data are complemented by observations retrieved from the ASAS data archive. Additionally, I obtained a few spectra in 2015 in order to verify the eventual presence of emission lines as observed by Walter et al. (2006).

This study is organized as follows: In Sect. <ref> the observations and data reduction techniques are briefly presented. Sect. <ref> then deals with the results of the observations and of model calculations. A discussion follows in Sect. <ref>. Finally, the conclusions are briefly summarized in Sect. <ref>.

§ OBSERVATIONS AND DATA REDUCTIONS

All photometric observations were obtained at the 0.6-m Zeiss and the 0.6-m Boller & Chivens telescopes of the Observatório do Pico dos Dias (OPD), operated by the Laboratório Nacional de Astrofísica, Brazil. Time series imaging of the field around the target star was performed using cameras of type Andor iKon-L936-B and iKon-L936-EX2 equipped with back illuminated, visually optimized CCDs. A summary of the observations is given in Table <ref>. Some light curves contain gaps caused by intermittent clouds or technical reasons. In order to resolve any rapid flickering variations the integration times were kept short. Together with the small readout times of the detectors this resulted in a time resolution of the order of 5 s. In contrast to observations of other targets within the observing project mentioned in Sect. <ref>, the high brightness of V1129 Cen, in spite of the short integration times, not only permitted but demanded (in order to avoid saturation) the use of a filter.
A B filter was chosen. Even so, I did not perform a rigorous photometric calibration but express the brightness as the magnitude difference between the target and the nearby comparison star UCAC4 223-607051 (B = 13.564; Zacharias et al. 2013), the constancy of which was verified through the observations of several check stars[It seems that the comparison star shows small variations on the time scale of months and years which, however, have no bearing on the results of this study.]. The average nightly B magnitude of the target is included in Table <ref>.

In addition to the photometric observations, eight spectra of 600 s exposure time were obtained on 2015, February 14, at the 1.6-m Perkin Elmer telescope of OPD. An Andor iKon-L936-BR-DD camera was employed. Exposures of a He-Ar lamp for wavelength calibration were taken after every second stellar exposure. From the FWHM of the lines in the comparison spectra a spectral resolution of ≈ 4 Å is estimated.

Basic data reduction (biasing, flat-fielding) was performed using IRAF. For the construction of light curves, aperture photometry routines implemented in the MIRA software system (Bruch 1993) were employed. The same system was used for all further data reductions and calculations. Throughout this paper time is expressed in UT. However, whenever observations taken in different nights were combined (e.g., to fold them on the orbital period), time was transformed into barycentric Julian Date on the Barycentric Dynamical Time (TDB) scale using the online tool provided by Eastman et al. (2010) in order to take into account variations of the light travel time within the solar system.

§ RESULTS

The light curves are dominated by variations on hourly time scales, reflecting the β Lyr type variations, but contain no obvious flickering. As examples, Fig. <ref> shows two light curves of 2016, April 7 and 8. The black dots represent the data points at the original time resolution, while the same data, binned in intervals of 2 minutes, are shown in red.

Before dealing in more detail with the issue of flickering, I first turn to the β Lyr type variations of V1129 Cen. To this end, the individual light curves were folded on the above quoted period, using as zero point of phase the epoch of primary minimum (as cited on the AAVSO International Variable Star Index webpage[https://www.aavso.org/vsx/]). In two nights a small magnitude adjustment was applied, calculated from the difference of the differential magnitudes in the respective phase intervals during the night in question and the other nights. This is probably due to slight variability of the quite red (B-V = 1.53 mag; Zacharias et al. 2013) primary comparison star, as revealed by a comparison with two check stars. The resulting β Lyr type light curve, shown in Fig. <ref>, binned in phase intervals of width 0.005, does not cover all phases. Moreover, a small shift of the primary minimum with respect to phase 0 (already corrected for in the figure) was observed. Its magnitude was determined to be 0.025 ± 0.001 by fitting polynomials of various degrees to the minimum. This means that the period communicated by Otero requires a slight correction.
The observed phase shift, the minimum epoch (referring to 2002) and the minimum epoch observed in 2016 then permit the calculation of updated ephemerides for V1129 Cen:

BJD_min = 2457483.584(1) + 0.8930290(2) × E,

where E is the cycle number. This does, of course, not take into account possible period variations such as that observed in the prototype star β Lyr (Harmanec & Scholz 1993), which occurs at a much higher rate (19 s/yr) than any possible variation implied by the difference between Otero's period and the present value. For comparison, the ASAS-3 data were also folded on the orbital period (rejecting the observations taken during outbursts; Fig. <ref>, bottom). While noisy, the β Lyr type variations are obvious. The lower amplitude compared to Fig. <ref> may be due to the different passband of the ASAS-3 data (V vs. B).

§.§ Spectrum

Having in mind the report of Walter et al. (2006) of transient He II λ 4686 Å emission in the spectrum of V1129 Cen, I obtained the spectroscopic observations mentioned in Sect. <ref>. The mean of eight individual exposures is shown in Fig. <ref> (black curve). Since no flux calibration of the spectra was performed it is shown here normalized to the continuum. For comparison, standard star spectra of spectral type F0 V and F3 V (i.e., close to the spectral type of V1129 Cen), taken from the compilation of Jacoby et al. (1984) and normalized in the same way, are also shown in the figure (shifted upward and downward for clarity). Their resolution was degraded to match that of V1129 Cen. No trace of λ 4686 Å emission is seen.

§.§ Flickering

I turn my attention now to the implications of the absence of detectable flickering in the light curves of V1129 Cen. I first determine the scatter of the data points of the binned versions of the light curves shown in Fig. <ref> (adding also the night of 2016, April 6) after subtraction of the orbital variations. To this end a Gaussian was fit to the distribution of the difference between data points of the binned light curves and a Fourier filtered version of the same data which removes variations on time scales longer than 30 min. In all nights it has a FWHM of about δm_0 = 0.01 mag. What must be the magnitude difference between a flickering light source and a brighter constant star in order to render the flickering unobservable? Assuming the presence of a light source in the system which flickers such that a light curve treated in the same way as above leads to a distribution of data points with a FWHM of δm, it is possible to calculate as a function of δm the magnitude difference Δm of that light source and of the entire system necessary for the observed FWHM not to exceed δm_0. This leads to the relationship shown in Fig. <ref>. Considering that the total amplitude of the flickering variations is significantly larger than the FWHM of the distribution of data points [e.g., in V504 Cen Bruch (2017b) observed a total amplitude of 0.62 mag, while the FWHM does not exceed 0.16 mag], the range of δm in the figure extends to extremely strong flickering.

According to Gaia DR1 (Brown et al. 2016), V1129 Cen has a parallax of 3.21 ± 0.40 mas. This translates into a distance of 312 ± 39 pc. The ASAS-3 long term light curve shows that the V magnitude varies between 9.47 and 9.96 mag (disregarding outbursts), with an average of 9.65 mag. While the transformation of ASAS magnitudes to a standard photometric system may not be particularly accurate, errors are expected not to exceed a typical value of 0.05 mag[http://www.astrouw.edu.pl/ gp/asas/explanations.html]. Moreover, the average magnitude of 9.65 mag is identical to the V magnitude cited in the Tycho-2 catalogue (Høg et al. 2000).
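The distance and luminosity arithmetic can be verified in a few lines; a minimal sketch using the numbers quoted above (absorption is neglected, as justified below):

```python
import numpy as np

plx = 3.21                                # Gaia DR1 parallax in mas
d = 1000.0 / plx                          # distance in pc, ~312 pc
m_v = 9.65                                # mean ASAS-3 V magnitude outside outburst
M_v = m_v - 5.0 * np.log10(d / 10.0)      # absolute magnitude, no absorption
print(f"d = {d:.0f} pc, M_V = {M_v:.2f}") # -> d = 312 pc, M_V = 2.18
```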
The interstellar absorption towards V1129 Cen appears to be small. The observed colours (B-V = 0.38, U-B = 0.00; Kilkenny & Laing 1990)[However, the authors marked these values as uncertain.] match quite well with those of an unreddened star of the same spectral type (F2 V; B-V = 0.35, U-B = 0.00; FitzGerald 1970). Thus neglecting absorption, the distance and the apparent average magnitude translate into an absolute magnitude of M_V = 2.18 ± 0.29, slightly (∼ 0.75 mag) brighter than an F2 type main sequence star (from interpolation in the tables of Allen 1973). This difference cannot be explained by the contribution of the binary companion to the total system light, since model calculations (Sect. <ref>) show this contribution to be much less than that of the F2 star. The difference rather indicates that the latter has slightly evolved off the main sequence.

The quiescent magnitudes of dwarf novae encompass a wide range. Fig. 3.5 of Warner (1995) suggests that ordinary U Gem stars have a brightness fainter than M_V = 7.2. While in long period systems it includes a non-negligible contribution of the mass donor, for the sake of a conservative upper limit I consider this value to be the magnitude of a possible accretion disk in V1129 Cen. The magnitude difference between the entire system and the accretion disk is thus at least Δm = -5.0 mag. Comparing this value with the graph in Fig. <ref>, it is obvious that the non-flickering light sources in V1129 Cen can easily hide any flickering even if the disk light were 100% modulated.

§.§ Model calculations

The complete phase coverage of the ASAS data warrants an attempt to model the light curve of V1129 Cen in the expectation that some system parameters can be delimited. To this end, I employ the Wilson-Devinney code (Wilson & Devinney 1971, Wilson 1979) as implemented in MIRA. Before proceeding, a word on nomenclature is in order to avoid confusion with the nomenclature usually used in CV research: I will refer to the optically dominating F2 star as the primary component, independent of whether it is the mass gainer or mass loser (if there is mass transfer in the system) or whether it is the more massive or the less massive star. It will be designated by the index 1 subsequently. Consequently its companion is the secondary star (index 2). The mass ratio is defined as q = M_2/M_1.

Considering the large number of model parameters required by the Wilson-Devinney code to calculate a light curve, it is appropriate to fix as many of them as possible before trying to adjust the model light curve to the observed data. It turns out that the atmospheric parameters albedo A, limb darkening coefficient u, and gravity darkening coefficient y have only a minor influence on the results.
Therefore, u[A simple linear law of the kind I(μ)/I(1) = 1 - u(1-μ) is used. Here, I(1) is the specific intensity at the centre of the stellar disk, and μ = cos γ, where γ is the angle between the line of sight and the emergent radiation.] and y are interpolated in the tables of Claret & Bloemen (2011) [using their results based on ATLAS models, least squares calculations, adopting solar chemical composition, the surface gravity of a normal F2 main sequence star and no microturbulence; for details, see Claret & Bloemen (2011)] at the temperature of 7040 K as determined by Kordopatis et al. (2013) for the primary star. Similarly, u and y for the secondary component refer to a temperature of 4500 K (determined from preliminary model fits) and a surface gravity calculated from the mass and radius of a main sequence star of that temperature. According to Rafert & Twigg (1980), hotter stars with radiative envelopes should have an albedo of A = 1.0, while for cooler stars with convective envelopes A = 0.5. I adopt the latter value for the secondary star of V1129 Cen. The temperature of the primary falls in the transition region between the two regimes. For simplicity, I adopt A_1 = 0.75. Furthermore, the primary temperature is fixed to the above mentioned value of T_1 = 7040 K. A phase shift to make up for a possible slight error of the epoch of primary eclipse was fixed to the value determined in preliminary calculations. Any contribution of an accretion disk and/or hot spots possibly present in the system was ignored.

The model parameters left free to be adjusted to the data were then the mass ratio q of the components, the orbital inclination i, the temperature T_2, and the dimensionless surface potential Ω_2 of the secondary star. β Lyr stars are binaries in a tight orbit, but it is not always evident if they are detached or semi-detached. Therefore, calculations for both cases were performed, choosing the corresponding mode of the Wilson-Devinney code. In the latter case the surface potential of the primary star is determined by the mass ratio, which defines the potential at the Roche surface. In the alternative case no limitations on the size of the components relative to their Roche lobes are assumed, and the surface potential of the primary was also left free to be adjusted. Finally, the normalization constant was also considered a free parameter. It turned out that the best fit parameters of the detached model were not significantly different from those of the semi-detached model. Therefore, to be definite, I will subsequently only regard the results derived from the latter.

The SIMPLEX algorithm (Caceci & Cacheris 1994) was adopted to find the optimal model parameters which lead to the minimal χ^2 between observations and calculations. Some outlying data points in the observed light curve (green dots in Fig. <ref>) were disregarded.
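A minimal sketch of such a simplex minimization with SciPy's Nelder-Mead implementation follows. Here wd_curve stands in for a call into the Wilson-Devinney code, and the data arrays and starting values are illustrative assumptions (the actual fits used the MIRA implementation):

```python
import numpy as np
from scipy.optimize import minimize

def chi2(p, phase, mag, err):
    """chi^2 between the phased light curve and a model light curve;
    wd_curve is an assumed stand-in for the Wilson-Devinney code with
    the free parameters named in the text."""
    q, incl, t2, omega2, norm = p
    model = norm * wd_curve(phase, q, incl, t2, omega2)
    return np.sum(((mag - model) / err) ** 2)

# downhill simplex search from an illustrative starting guess
res = minimize(chi2, x0=[0.6, 80.0, 4500.0, 3.0, 1.0],
               args=(phase, mag, err), method="Nelder-Mead")
```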
The best fit model is shown in the central frame of Fig. <ref> as a red curve. The lower frame contains the differences between the observed and calculated data. The broken red line indicates the zero level in order to better visualize systematic deviations of the O-C curve from zero. The fit parameters are summarized in Table <ref>. Here, the Roche lobe filling factor of the secondary is calculated from its surface potential and is thus not an independent quantity. The model fit is not completely satisfactory. There are systematic residuals between data and fit. In particular, the fit appears to slightly underestimate the brightness after the primary minimum (phase range 0.1 < ϕ < 0.2) and overestimates it after the secondary minimum (0.6 < ϕ < 0.7). The formal deficiency of the fit is also evident from the elevated value of the reduced χ^2_r,min = 2.9. Some ingredients are therefore probably missing in the model. If the outbursts of V1129 Cen are indeed related to dwarf nova outbursts this is not surprising, because an accretion disk (and possibly associated bright spots) are then expected to be present in the system. These cannot be modeled by the Wilson-Devinney code. Even so, considering that the orbital modulation is evidently dominated by the ellipsoidal variations of the primary component together with a substantial primary and a smaller secondary eclipse (all determined by the component temperatures, their relative sizes and the mass ratio), the missing model ingredients may demand small corrections, but the best fit parameters should at least approximately reflect reality.

Parameter correlations make it difficult to assign meaningful statistical errors to the parameter values. In order to investigate this issue, Fig. <ref> shows two-dimensional cuts through the χ^2_r hyperspace at the location of the best fit parameters. In order to facilitate comparison, the colour coding of all frames is such that purple corresponds to χ^2_r,min and dark red to twice that value or higher (see colour bar at the top of the figure). It is then seen that, for instance, the orbital inclination and the mass ratio are strongly correlated (central frame in the upper row of Fig. <ref>), which makes it impossible to determine either of them with any degree of precision. Assuming as criterion that solutions leading to χ^2_r > 2 χ^2_r,min are unacceptable, the diagram shows that the mass ratio can be anything from 0.39 up to a value beyond the limits of the explored parameter range. However, other cuts through the χ^2_r hyperspace permit better restriction of q (i.e., the q-T_2 plane; lower left frame of Fig. <ref>). Exploring the individual cuts in this way leads to permitted parameter ranges as quoted in the last columns of Table <ref>[The lower limit for the range of T_2 is ill defined because the Wilson-Devinney code issued warnings when T_2 < 4050 K. The corresponding model calculations were then ignored.]

As a check, the Wilson-Devinney model was also fit to the B light curve (red line in Fig. <ref>). Again, the fit is not perfect, exhibiting the same excess of observed light in the phase range after primary minimum already seen in the V-band data. With the exception of the secondary star temperature, which is higher by ≈ 700 K when the B band data are used, the best fit parameters listed in Table <ref> are practically identical to those derived from the V band. Even so, T_2 remains comfortably within the acceptable range. The agreement of the results obtained from data in different bands and using radically different observing procedures gives confidence that they are not corrupted by errors or systematics of the observations.

The results of the model calculations nicely fit in with independent knowledge about V1129 Cen. Assuming the mass M_1 of the primary not to be significantly different from that of a normal F2 V star (∼ 1.55 M_⊙; Allen 1973), the mass ratio and the orbital period together with Kepler's third law yield the component separation A. Using the approximation for the volume radius of the Roche lobe provided by Eggleton (1983), the assumption of a semi-detached configuration determines the radius R_1 of the primary in units of A as a function of q. For the best fit value of q this results in a primary star radius of R_1 = 2.2 R_⊙. This is 1.7 times the radius of an F2 V star (according to Allen 1973), confirming the conclusion drawn in Sect. <ref> that the star has evolved off the main sequence.
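This chain of arguments is easy to reproduce; in the following sketch the adopted mass ratio is an illustrative value standing in for the best-fit q of Table <ref>:

```python
import numpy as np

G, MSUN, RSUN, DAY = 6.674e-11, 1.989e30, 6.957e8, 86400.0

def roche_fraction(q):
    """Eggleton (1983) volume radius of the Roche lobe, R_L/A, for a
    star whose mass ratio to its companion is q."""
    return 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1.0 + q**(1/3)))

m1, q = 1.55, 0.6                      # F2 V primary mass; illustrative q = M2/M1
p = 0.8930290 * DAY                    # orbital period in seconds
a = (G * m1 * (1.0 + q) * MSUN * p**2 / (4.0 * np.pi**2))**(1/3)
r1 = a * roche_fraction(1.0 / q)       # the primary fills its Roche lobe
print(f"A = {a / RSUN:.2f} Rsun, R1 = {r1 / RSUN:.2f} Rsun")  # R1 ~ 2.2 Rsun
```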
Since the brightness scales with the square of the radius, the V1129 Cen primary should be 1.2 mag brighter than its main sequence equivalent. This is somewhat more than the values found in Sect. <ref> (0.75 mag), but not significantly so considering the uncertainty of M_V and q.

§ DISCUSSION

The main issue concerning V1129 Cen is the question about the nature of the semi-periodic outbursts. Are these genuine dwarf nova type eruptions? If so, how can they come about in a system that otherwise appears to have a configuration different from normal cataclysmic variables? Outbursts of dwarf novae are caused by an increase of matter transferred through an accretion disk and the corresponding release of energy. This may be due to a limit cycle in the disk, which during quiescence is in a cool and low viscosity state. Matter transferred from a donor star increases the disk mass and its temperature until the temperature for hydrogen ionization is reached, resulting in an increased viscosity which causes the disk matter to be dumped on the central star (the thermal-viscous instability model; Lasota 2001). While this is the commonly adopted mechanism for dwarf nova outbursts, at least in some systems it seems not to work, and there is evidence that instead an increased mass transfer from the companion star is responsible for the brightening of the accretion disk and thus the dwarf nova outburst [see the discussion in Baptista (2012)].

§.§ Outburst characteristics

What are the characteristics of the brightenings of V1129 Cen and how do they relate to those of normal dwarf nova outbursts? They last for about 40 days, which is significantly longer than observed in most dwarf novae. However, as Szkody & Mattei (1984) and Gicger (1987) showed, there is a clear correlation between orbital period and outburst duration. Extrapolating the relation of Gicger (1987) to the period of V1129 Cen leads to 40.7 days, in remarkable agreement with the average outburst duration measured in the ASAS-3 light curve. While most dwarf nova outbursts rise rapidly and decline more slowly, the V1129 Cen outbursts are more symmetrical. This may also be a consequence of the long orbital period, since outbursts of other long period CVs show similar shapes [BV Cen, P_orb = 0.611 days, Bateson (1974); GK Per[GK Per is well known to be a classical nova, but it also exhibits dwarf nova outbursts.], P_orb = 1.997 days, Pezzuto et al. (1996), Evans et al. (2009); V630 Cas, P_orb = 2.564 days, Shears & Poyner (2009)]. Similarly, using a mean outburst interval of 347 days (from Fig. <ref>, assuming one unobserved outburst close to JD 2452400), V1129 Cen fits very nicely into a linear relationship between the orbital period and the logarithm of the outburst intervals of BV Cen (interval: 150 days; Menzies et al. 1986), GK Per (∼1060 days; deduced from the AAVSO long term light curve, assuming some outbursts to have been missed) and V630 Cas (17 years; Shears & Poyner 2009). While not irrefutable proof, the similarity of the outburst properties of V1129 Cen with those of long period dwarf novae suggests that the nature of the eruptions is similar.

§.§ Scenarios

If the bright states in V1129 Cen are in fact dwarf nova type outbursts, there must be an accretion disk somewhere in the system.
In principle, at least three possible scenarios can be envisaged: (1) The source of the outbursts is accidentally in the line of sight to V1129 Cen but not physically related to it; (2) V1129 Cen is not a simple binary star but a quadruple formed by two pairs, i.e., the dominating β Lyr type component which has a normal dwarf nova as a companion at a distance where the evolution of either system does not interfere with the other one; (3) the mass gainer in V1129 Cen is surrounded by an accretion disk as modeled, e.g., in the case of the prototype β Lyr by Mennickent & Djurašević (2013). The first possibility can, of course, not be excluded, but it is quite unlikely considering the low space density of dwarf novae. Therefore, I will not consider it further.

§.§.§ A hierarchical quadruple system

Exploring the second scenario, I first remark that the considerations of Sect. <ref> about correlations between the outburst duration, shape and intervals, and the orbital period are irrelevant in this case, because the observed period of V1129 Cen is then not that of the dwarf nova. The question may be asked whether this scenario is compatible with basic evolutionary considerations. Are the evolutionary time scales of the β Lyr star and the dwarf nova compatible with their co-existence in a single multiple star system? This comes down to the question of whether the progenitor of the white dwarf in the dwarf nova system was more massive than the F2 star, because in that case the former had enough time to go through a common envelope phase and become a cataclysmic variable while the F star still remains on or close to the main sequence. Several initial – final mass relations for white dwarfs have been published in the literature. Let M_prog be the mass of the progenitor of a white dwarf of mass M_WD. Using any of the relations given by Zhao et al. (2012), Salaris et al. (2009) or Catalán et al. (2008), the requirement that M_prog > 1.55 M_⊙ [i.e., the mass of a F2 V star according to Allen (1973)] leads to M_WD > 0.60 M_⊙. This holds for single stars. As Ritter (2010) points out, in binaries the mass transfer sets a premature end to the nuclear evolution of the donor star. Therefore, the resulting white dwarf mass is smaller than in the case of single star evolution. The mean mass of white dwarfs in CVs is 0.83 M_⊙ (Zorotovic et al. 2011). Thus, there is ample space for the progenitor to have a mass high enough to evolve into a red giant and to initiate the common envelope phase which results in the formation of a CV before the main component, i.e., the F star, leaves the main sequence.

On the other hand, the observed outburst amplitude, while not rendering this scenario impossible, casts some doubt upon it. Fig. <ref> shows that the amplitudes of different outbursts range between 0.4 and 0.6 mag. To be definite, the mean of the extremes, 0.5 mag, will be adopted here. Remembering that the absolute magnitude of the dominating F2 star in V1129 Cen is M_V = 2.18 (see Sect. <ref>) and assuming that the secondary of the β Lyr type system and the quiescent dwarf nova contribute negligibly to the total light, the magnitude of the outbursting light source should then be ∼0.6 mag fainter than the F star. Thus, its absolute magnitude is ∼2.8 mag. Together with the conservative limit for the magnitude difference between the entire system and the accretion disk derived in Sec. <ref>, this means that the dwarf nova should have an outburst amplitude of at least 4.4 mag.
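The magnitude bookkeeping in the preceding paragraph is elementary but easy to get wrong; the following sketch reproduces it with the values quoted in the text (the quiescence limit Δ is not quoted explicitly here and is a rough assumption):

```python
import numpy as np

M_V_F = 2.18          # absolute magnitude of the F2 star
amp = 0.5             # adopted mean outburst amplitude of the whole system

# flux of the extra (outbursting) source in units of the F-star flux
extra = 10.0**(0.4 * amp) - 1.0              # ~0.58
M_V_source = M_V_F - 2.5 * np.log10(extra)   # ~2.8: outbursting light source

# if the quiescent dwarf nova is at least Delta mag fainter than the whole
# system (the conservative flickering limit of the text), its outburst
# amplitude is (M_V_F + Delta) - M_V_source; Delta ~ 5 reproduces >= 4.4 mag
Delta = 5.0
print((M_V_F + Delta) - M_V_source)          # ~4.4
```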
This is an uncomfortably high value. Amplitudes as large as this are more typical for superoutbursts of SU UMa stars than for normal dwarf nova outbursts. But the ASAS long term light curve does not show evidence of the dichotomy between normal and superoutbursts characterizing those systems. Moreover, the absolute magnitude of the outbursting light source must also be quite high in this scenario. In normal dwarf novae the absolute V band outburst magnitude M_max increases with the orbital period P_orb, reflecting the larger size of the accretion disk in systems with longer periods. The relationship given in Eq. 13 of Warner (1987) results in a range of 4.96 ≥ M_max ≥ 3.15 for P_orb between 1.5 h and 10 h. This holds for an average inclination of 57.7° of the accretion disk. Assuming i = 74.7° (Sect. <ref>) and the inclination correction of Paczyński & Schwarzenberg-Czerny (1980), the disk is expected to be even 1.08 mag less luminous and thus much fainter than the lower brightness limit estimated above. Therefore, the dwarf nova companion to the β Lyr type binary in the V1129 Cen system must have rather extreme properties compared to an average dwarf nova for this scenario to be viable.

§.§.§ Outbursts within the β Lyr type binary

Turning to the third scenario, I assume that V1129 Cen consists of only two stellar components, one of which is surrounded by an accretion disk. In fact, similar models have successfully been adjusted to the light curves of several β Lyr type systems: AU Mon (Djurašević et al. 2010), V393 Cen (Mennickent et al. 2012), V455 Cyg (Djurašević et al. 2012), OGLE 05155332-6925581 (Garrido et al. 2013) and the prototype β Lyr itself (Mennickent & Djurašević 2013). In all of these the disk revolves around the optically dominating primary star, which is thus the mass gainer, receiving matter from a Roche lobe filling secondary star of lower mass. However, in the quoted examples the binary is always much hotter and more massive than in the case of V1129 Cen, harbouring primary stars of spectral type O and B and masses ranging from 7 M_⊙ to 13 M_⊙. Moreover, the accretion disks are all extremely massive, geometrically and optically thick, and hot. The usual disk instability type mechanism for dwarf nova outbursts cannot work in such disks. But what about the alternative case where the optically dominating star is the mass donor and the accretion disk revolves around the companion? This configuration would be similar to that of a normal cataclysmic variable, with the difference that the donor would have a much earlier spectral type than any other CV. In order to evaluate the consequences of this picture I regard the results of the model calculations of Sect. <ref>. Although they provide formal values for Ω_2 and T_2, these cannot be used to draw conclusions on the nature of the secondary star (here: the mass gainer). The Wilson-Devinney code assumes the secondary to be a spherical object (distorted by the Roche potential). If the companion to the F2 star is in reality a star surrounded by an accretion disk, the fit parameters referring to the secondary will therefore represent an ill-defined mixture of stellar and accretion disk parameters. However, this does not affect the brightness of the components. Adopting the best fit parameters, the model calculations show that the primary component (here: the mass loser) is ∼160 times brighter than the secondary at phase 0.25.
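Two of the numbers used here can be checked with a few lines of Python. The linear M_max(P_orb) relation below is simply anchored to the two endpoint values quoted above; it is a stand-in for Warner's original Eq. 13, not the equation itself:

```python
import numpy as np

def m_max(period_h):
    # straight line through the two endpoints quoted in the text:
    # M_max = 4.96 at P = 1.5 h and M_max = 3.15 at P = 10 h
    return 4.96 + (3.15 - 4.96) / (10.0 - 1.5) * (period_h - 1.5)

# even at the long-period end, a disk seen at i = 74.7 deg is fainter than
# M_V ~ 3.15 + 1.08 = 4.2, i.e. well below the ~2.8 required above
M_disk = m_max(10.0) + 1.08

# flux ratio of the two components -> magnitude difference at phase 0.25
dm = 2.5 * np.log10(160.0)    # ~5.5 mag
```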
This corresponds to a magnitude difference of -5.5 mag, compatible with the minimum magnitude difference derived from the absence of observable flickering (see Sect. <ref>). The stark brightness contrast between the components also demands a high S/N ratio in order to detect the contribution of the expected emission lines from the accretion disk in the spectrum (provided that the disk is in a low viscosity state; emission lines in the bright, high viscosity state tend to be weak or even replaced by absorptions). Measuring the ratio of the flux at the top of the Hβ emission line to the flux of the surrounding continuum in the spectra of CVs reproduced by Zwitter & Munari (1995, 1996) yields a maximum of ∼5. Taking this as an upper limit for the corresponding ratio in the supposed accretion disk in V1129 Cen, a S/N ratio of at least 32 is then required for a spectrum to exhibit a trace of a Hβ emission (a line peaking at 5 times the disk continuum rises only by about 5/160 ≈ 3% above the total continuum, which is dominated by the F star). Thus, the absence of an emission core in the Hβ absorption line in Fig. <ref> is not incompatible with the idea of a dwarf nova-like accretion disk around the secondary component.

While the large outburst amplitude derived in Sect. <ref> may still represent a certain problem, this is different for the absolute magnitude of ∼2.8 mag of the outbursting light source. Provided that an extrapolation of Eq. 13 of Warner (1987) [Warner (1995) restricts its validity to orbital periods ≤ 15 h] to the period of V1129 Cen (21.4 h) does not lead to an excessive error, the accretion disk in outburst may even be as bright as 1.3 mag.

The fact that in this scenario the mass donor is of significantly earlier spectral type than in any other known CV does not invalidate it. In recent population synthesis calculations for CVs, Goliasch & Nelson (2015) explicitly took into account the nuclear evolution of high mass donor stars. They show that CVs with donor star masses corresponding to early F stars can form, in particular if the donor has already evolved off the main sequence. At first glance, the fact that all values of q within the acceptable range quoted in Tab. <ref> lead to a donor star mass significantly in excess of that of the mass gainer appears problematic, since this is in contrast to normal CVs where the mass donor is always less massive than the mass gainer. But also in this case the calculations of Goliasch & Nelson (2015) indicate that the donor can be much more massive than the gainer, in particular if it is evolved. However, the stability and the rate of mass transfer may then become an issue. Can it be kept low enough for the disk to remain in a quiescent low state prone to dwarf nova outbursts? Using the same stellar evolution code as Goliasch & Nelson (2015), Kalomeni et al. (2016) have calculated a dense grid of evolutionary tracks for binaries with white dwarf primaries. In their Fig. 16 they plot, as a function of the donor star mass and the orbital period, the ratio Ṁ/Ṁ_crit of the mass transfer rate Ṁ and the critical transfer rate Ṁ_crit above which the accretion disk is stable against the thermal-viscous instability. The latter is based on the stability criterion of Lasota (2001). The figure shows that at the orbital period of V1129 Cen and a donor star mass close to that of an unevolved (or only slightly evolved) early F type star, configurations with Ṁ/Ṁ_crit < 1 occur. Thus, dwarf nova type outbursts are possible.
However, a problem arises from the large dimensions of the system, which imply a high critical disk mass transfer rate Ṁ_crit for outbursts to occur due to a thermal-viscous disk instability and, in consequence, a high outburst luminosity. Smak (1983) provides an expression for Ṁ_crit. I neglect the small correction factor involving the ratio between the white dwarf radius and the disk radius R_d, and follow Osaki (1996) adopting the expression log T_eff,crit = 3.9 - 0.1 log (R_d/10^10 cm) for the critical disk temperature and R_d = 0.35 A, where A is the component separation. A is calculated from Kepler's third law, using the typical mass of an F0 V star and the mass ratio as quoted in Table <ref>. The critical mass transfer rate for a disk instability to occur is then ∼1.8 × 10^-7 M_⊙/y. This leads to an approximate lower limit of the outbursting disk luminosity of L_d,o = G ṀM_WD/R_WD = 637 L_⊙, where G is the gravitational constant and the white dwarf radius R_WD has been calculated from its mass and the mass-radius relation of Nauenberg (1972). On the other hand, interpolation in the tables of Allen (1973) and allowing for the larger radius due to evolution leads to a luminosity of 12.6 L_⊙ for the F-star in V1129 Cen. Since the bolometric and the visual magnitude difference between the two system components will not be grossly different, the outbursting accretion disk should outshine the F-star by more than 4 magnitudes in visual light, in contrast to what is observed. This appears to be a serious problem if the outbursts are expected to be due to a thermal-viscous instability. Assuming the alternative, a temporarily enhanced mass transfer from the donor star, it obviously vanishes. Moreover, within the scenario of a hierarchical quadruple system it is, of course, also nonexistent.

§ CONCLUSIONS

Based on its optical light curve V1129 Cen has been classified as a β Lyr type system. It distinguishes itself from other members of this class by quasi-periodic eruptions, suggesting a relationship of the star with dwarf novae. Here, I investigated its light curve in some detail in order to either substantiate or reject this relationship. Based on model calculations and comparisons with cataclysmic variables it is concluded that the properties of V1129 Cen are not in contradiction with the hypothesis that the system either contains, or that it constitutes, a dwarf nova, albeit with rather extreme characteristics. In the first case V1129 Cen would be a hierarchical quadruple system, formed of two pairs, the optically dominating one being a normal β Lyr type variable. The second pair would be an ordinary dwarf nova. However, while this scenario cannot a priori be discarded, such a configuration appears to be rather artificial. Alternatively, V1129 Cen may consist of a Roche-lobe filling, slightly evolved F2 star which loses mass via an accretion disk to a companion star; i.e., a cataclysmic variable with an unusually early type mass donor. The high brightness of the F star is able to completely outshine the accretion disk and the mass gainer (except during the occasional outbursts) such that the normal photometric or spectroscopic hallmarks of CVs are not detected.
A possible problem with this scenario arises if the outbursts are due to a thermal-viscous instability (as opposed to a temporarily increased mass transfer from the donor star), because then the accretion disk should become much brighter than observed. As a caveat I stress that this does not mean that the nature of V1129 Cen is elucidated beyond doubt. I have shown that its properties are compatible with the hypothesis that the system is a dwarf nova with a very early type mass donor, but it would be premature to reject, just for this reason, another configuration for the star and alternative explanations for the outbursts.

§ ACKNOWLEDGEMENTS

I gratefully acknowledge the use of the ASAS-3 database which provided valuable supportive information for this study.

§ REFERENCES

* Allen, C.W. 1973, Astrophysical Quantities, third edition (Athlone Press: London)
* Baptista, R. 2012, Mem. S.A.It., 83, 530
* Bateson, F.M. 1974, Publ. Var. Star Sect., RASNZ, 2, 1
* Brown, A.G.A., Vallenari, A., Prusti, T., et al. 2016, A&A, 595, A2
* Bruch, A. 1993, MIRA: A Reference Guide (Astron. Inst. Univ. Münster)
* Bruch, A. 2016, New Astr., 46, 90
* Bruch, A. 2017a, New Astr., 52, 112
* Bruch, A. 2017b, New Astr., in press
* Bruch, A., & Diaz, M.P. 2017, New Astr., 50, 109
* Caceci, M.S., & Cacheris, W.P. 1984, Byte, May 1984, 340
* Catalán, S., Isern, J., García-Berro, E., & Ribas, I. 2008, MNRAS, 387, 1693
* Claret, A., & Bloemen, S. 2011, A&A, 529, A75
* Djurašević, G., Latković, O., Vince, I., & Cséki, A. 2010, MNRAS, 409, 329
* Djurašević, G., Vince, I., Antokhin, I.I., et al. 2012, MNRAS, 420, 3081
* Eastman, J., Siverd, R., & Gaudi, B.S. 2010, PASP, 122, 935
* Eggleton, P.P. 1983, ApJ, 268, 368
* Evans, P.A., Beardmore, A.P., Osborne, J.P., & Wynn, G.A. 2009, MNRAS, 399, 1167
* FitzGerald, M.P. 1970, A&A, 4, 234
* Garrido, H.E., Mennickent, R.E., Djurašević, G., et al. 2013, MNRAS, 428, 1594
* Gicger, A. 1987, Acta Astron., 37, 29
* Goliasch, J., & Nelson, L. 2015, ApJ, 809, 80
* Harmanec, P., & Scholz, G. 1993, A&A, 279, 131
* Hoffman, D.J., Harrison, T.E., Coughlin, J.L., et al. 2008, AJ, 136, 1067
* Høg, E., Fabricius, C., Makarov, V.V., et al. 2000, A&A, 355, L27
* Houk, N. 1978, Catalogue of two dimensional spectral types for the HD stars, Vol. 2, University of Michigan
* Jacoby, G.H., Hunter, D.A., & Christian, C.A. 1984, ApJS, 56, 257
* Kalomeni, B., Nelson, L., Rappaport, S., et al. 2016, ApJ, 833, 83
* Kazarovets, E.V., Samus, N.N., Durlevich, O.V., Kireeva, N.N., & Pastukhova, E.N. 2008, IBVS, 5863
* Kilkenny, D., & Laing, J.D. 1990, SAAO Circ., 14, 11
* Kordopatis, G., Gilmore, G., & Steinmetz, M. 2013, AJ, 146, 134
* Lasota, J.-P. 2001, New Astron. Rev., 45, 449
* Mennickent, R.E., & Djurašević, G. 2013, MNRAS, 432, 799
* Mennickent, R.E., Djurašević, G., Kołaczkowski, Z., & Michalska, G. 2012, MNRAS, 421, 862
* Menzies, J.W., O'Donoghue, D., & Warner, B. 1986, ApSS, 122, 73
* Nauenberg, M. 1972, ApJ, 175, 417
* Osaki, Y. 1996, PASP, 108, 39
* Paczyński, B., & Schwarzenberg-Czerny, A. 1980, Acta Astron., 30, 127
* Pezzuto, S., Bianchini, A., & Stagni, R. 1996, A&A, 312, 865
* Pojmanski, G. 2002, Acta Astron., 52, 397
* Rafert, J.B., & Twigg, L.W. 1980, MNRAS, 193, 79
* Ritter, H. 2010, Mem. S.A.It., 81, 849
* Ritter, H., & Kolb, U. 2003, A&A, 404, 301
* Salaris, M., Serenelli, A., Weiss, A., & Miller Bertolami, M. 2009, ApJ, 692, 1013
* Shears, J., & Poyner, G. 2009, JBAA, 120, 169
* Smak, J. 1983, ApJ, 272, 234
* Szkody, P., & Mattei, J.A. 1984, PASP, 96, 988
* Walter, F., Bond, H.E., & Pasten, A. 2006, IAU Circ., 8663
* Warner, B. 1987, MNRAS, 227, 23
* Warner, B. 1995, Cataclysmic Variable Stars, Cambridge University Press, Cambridge
* Wilson, R.E. 1979, ApJ, 234, 1054
* Wilson, R.E., & Devinney, E.J. 1971, ApJ, 166, 605
* Zacharias, N., Finch, C.T., Girard, T.M., et al. 2013, AJ, 145, 44
* Zhao, J.K., Oswalt, T.D., Willson, L.A., Wang, Q., & Zhao, G. 2012, ApJ, 746, 144
* Zorotovic, M., Schreiber, M.R., & Gänsicke, B.T. 2011, A&A, 536, A42
* Zwitter, T., & Munari, U. 1995, A&AS, 114, 575
* Zwitter, T., & Munari, U. 1996, A&AS, 117, 449
Division of Materials theory, Department of Physics and Astronomy, Uppsala University, Box 516, 75121 Uppsala, Sweden
Division of Materials theory, Department of Physics and Astronomy, Uppsala University, Box 516, 75121 Uppsala, Sweden
School of Science and Technology, Örebro University, SE-70182 Örebro, Sweden
Division of Materials theory, Department of Physics and Astronomy, Uppsala University, Box 516, 75121 Uppsala, Sweden
Multiscale Materials Modelling, Department of Materials Science and Engineering, Royal Institute of Technology, SE-10044 Stockholm, Sweden
Division of Materials theory, Department of Physics and Astronomy, Uppsala University, Box 516, 75121 Uppsala, Sweden

Multipolaron solutions were studied in the framework of the Holstein one-dimensional molecular crystal model. The study was performed in the continuum limit, where the crystal model maps into the nonlinear Schrödinger equation, for which a new periodic dnoidal solution was found for the multipolaron system. In addition, the stability of the multi-polaron solutions was examined, and it was found that cnoidal and dnoidal solutions stabilize in different ranges of the parameter space. Moreover, the model was studied under the influence of nonlocal effects, and the polaronic dynamics was described in terms of internal solitonic modes.

Multi-polaron solutions, nonlocal effects and internal modes in a nonlinear chain
M. Pereiro
December 30, 2023
==================================================================================

Nonlinear phenomena are ubiquitous in our everyday experience; among them are hydro- or magnetohydrodynamics, plasma physics, oceanography and meteorology, as well as newer areas such as nonlinear optics, elementary particle physics and condensed matter physics <cit.>. During the last decades, it has widely been recognized in many areas of physics that nonlinearity can give rise to fundamentally new phenomena which cannot be constructed via perturbation theory, and new mathematical tools are required. The polaron concept, which describes a carrier interacting with the lattice vibrational degrees of freedom, is undoubtedly a good illustration of this principle <cit.>. This fascinating object is of fundamental interest for both physics and mathematics, indeed fortifying the basal relationship of the two disciplines <cit.>. In earlier works, it has been shown that the tight-binding polaron Hamiltonian can be mapped into a Nonlinear Schrödinger Equation (NLSE) <cit.>. The tree of NLSE solutions ranges from plane waves to Jacobi elliptic functions <cit.>. Numerical simulations of the NLSE reported complex behavior of the system, which at certain parameters stabilizes as periodic cnoidal-like waves <cit.>. Remarkably, the chaotic and stochastic behaviour reported for this class of systems <cit.> refers to the famous Fermi-Pasta-Ulam <cit.> problem. In contrast to the majority of early theoretical models considering a local character of the electron-phonon effects, there have recently been studies reporting on 1D single polaron motion <cit.> that exhibit a high impact of the nonlocal nonlinear effects within Holstein's molecular crystal model. Thus, in the case of strong nonlocal nonlinearity, new states can emerge as, for example, chaoticons, which exhibit both chaotic and soliton-like properties <cit.>. Moreover, external perturbations induced by nonlocal effects also destroy the complete integrability of the NLSE and consequently, the elastic nature of the interaction among solitons breaks down.
The inelastic collision, as described in Klein-Gordon type models, can be explained in terms of internal solitonic mode excitations <cit.>. In condensed matter physics, the internal modes can be attributed to phonons coupled to the localized electron state <cit.>.

In this letter, we present an analysis of the solution hierarchy of the continuous NLSE in the case of the 1D polaron model. We report on the modulation instability of periodic solutions and analyze the families of solutions obtained using the G'/G expansion method <cit.>. Modelling the system behaviour with a finite, extended nonlocality term highlights the behaviour of the nonintegrable model and its novel dynamics based on the excitation of internal modes of the solitonic solutions.

We start from the electron-lattice Hamiltonian in the frame of the Holstein molecular-crystal model <cit.>:

H=-j ∑_n a_n^† (a_n+1 +a_n-1) +∑_n 1/2 M ω^2_0 x_n^2-g∑_n x_n a_n^† a_n +∑_n W_n a_n^† a_n

where a_n^† and a_n denote electronic creation and annihilation operators of the n-th site, respectively. The first term stands for electrons hopping between lattice sites, in a tight-binding description, with the nearest-neighbor overlap integral j. The second term describes the lattice part of the Hamiltonian in the adiabatic limit. In this description each nucleus with mass M harmonically oscillates around the stationary mass center with Einstein frequency ω_0 and deviation x_n, estimated with respect to the equilibrium interatomic separation. The third term describes the electron-lattice interaction with the characteristic coupling constant g. Finally, the nonlocal term W_n(x_1,..., x_n) is assumed to be taken in the form of the Pöschl-Teller potential <cit.>.

Following Holstein's seminal paper <cit.>, the full electronic wave function is expressed as |Ψ_e⟩=∑_n a_n|n⟩ and hence the electronic amplitude is defined as a_n=⟨n|Ψ_e⟩. Reformulated in the electronic amplitudes and minimised with respect to the ionic displacements near the equilibrium point for a nonlocal model as described by Eq. <ref>, the vibration coordinates are 𝒳_n=Υ_n |a_n|^2 with Υ_n=(g-W'_n)/(Mω^2_0) <cit.> (W'_n stands for the partial derivative of W_n over n). In the continuum limit, the Schrödinger-type eigenvalue problem in Eq. (<ref>) can be mapped into an NLSE-type equation (for further details see Supplementary Note S1 <cit.>):

j∂^2 𝔞_n/∂ n^2+gΥ_n | 𝔞_n|^2𝔞_n-(ε +W_n) 𝔞_n=0

where the energy of the localized electron, ε =-ℰ+1/2∑_n Mω_0^2𝒳_n^2-2j, is defined in terms of the minimized total energy ℰ of the 1D chain. Notice that 𝔞_n represents the continuous extension of the electronic amplitude a_n; the index n in this case indicates that 𝔞_n is a function of the continuous variable n. In the absence of nonlocality (W_n=0), Eq. <ref> has several hierarchies of solutions, like the self-trapped solitonic solution in the case of electronic states decaying at infinity (𝔞_n→ 0 as n→∞) <cit.>, non-decaying Bloch-like solutions <cit.> and other periodic and solitonic solutions that are derived in Supplementary Note S2 by using the G'/G method <cit.>. Among the periodic solutions, it is worth emphasizing the solutions given by Jacobi elliptic functions, such as the already reported cnoidal solution <cit.> and also a previously not discussed dnoidal solution:

a^(cn)_n =(𝔪^1/2ζ^(cn)/(2σ)^1/2) cn[ ζ^(cn)n,𝔪 ]; ζ^(cn)=(ε/j)^1/2 (1/|2𝔪-1|^1/2)

a^(dn)_n =(ζ^(dn)/(2σ)^1/2) dn[ ζ^(dn)n,𝔪 ]; ζ^(dn)=(ε/j)^1/2 (1/(2-𝔪)^1/2)

where σ=g^2/(4Mω_0^2j).
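Both profiles are straightforward to evaluate with standard libraries. A minimal sketch follows; the parameter values are arbitrary illustrations, and scipy's parameter m coincides with 𝔪, the square of the elliptic modulus:

```python
import numpy as np
from scipy.special import ellipj

def cnoidal(n, eps, j_hop, sigma, m):
    # cnoidal multipolaron profile a_n^(cn) of the continuous NLSE
    zeta = np.sqrt(eps / j_hop) / np.sqrt(abs(2.0 * m - 1.0))
    sn, cn, dn, ph = ellipj(zeta * n, m)
    return np.sqrt(m) * zeta / np.sqrt(2.0 * sigma) * cn

def dnoidal(n, eps, j_hop, sigma, m):
    # dnoidal multipolaron profile a_n^(dn)
    zeta = np.sqrt(eps / j_hop) / np.sqrt(2.0 - m)
    sn, cn, dn, ph = ellipj(zeta * n, m)
    return zeta / np.sqrt(2.0 * sigma) * dn

n = np.linspace(-40.0, 40.0, 4001)
a_cn = cnoidal(n, eps=0.1, j_hop=1.0, sigma=0.25, m=0.9)
a_dn = dnoidal(n, eps=0.1, j_hop=1.0, sigma=0.25, m=0.9)
```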
The parameter 𝔪∈ [0,1] is equal to the square of the modulus of the elliptic function (superscripts (cn) and (dn), hereafter, denote quantities describing cnoidal and dnoidal solutions, respectively). Additional information about the derivation of the new, dnoidal solution is included in Supplementary Note S3. By using the normalisation condition ∫|a_n|^2dn=1 and introducing σ̃=g^2 K/(8Mω_0^2 η), where η stands for half of the chain length, the energy of the localised electron for the two solutions results in (Fig. <ref>a),b)):

ε^(cn) = σ̃ (2𝔪-1)/(E-𝔪'K); ε^(dn) = σ̃ (2-𝔪)/E.

Here K and E represent the complete elliptic integrals of the first and second kind, respectively, while 𝔪^'=1-𝔪. Both solutions have limits where the main function collapses either into a harmonic function (Jacobi cn(u,0)) and a constant (Jacobi dn(u,0)) or into a solitonic solution (𝔪=1). Moreover, both solutions converge to the multi-noninteracting soliton solution at 𝔪→ 1 <cit.>, over which the charge carrier has been spread.

In order to further clarify the behavior of the multipolaronic system we suggest the following stability analysis of the cnoidal and the new, dnoidal solution. The starting point is the local, time-dependent analogue of Eq. <ref>:

iħ∂𝔞_n/∂ t+j ∂^2 𝔞_n/∂ n^2+gΥ | 𝔞_n|^2𝔞_n-𝒲𝔞_n=0.

The parameter Υ= g/(Mω_0^2), and the last term on the left hand side of Eq. <ref> stands for an external homogeneous potential (see Supplementary Note S4). According to Lyapunov's Direct Method, the asymptotic stability of a dynamical system can be examined by applying a weak perturbation to the linearized system near its equilibrium. Guided by this method, we consider the perturbed solution of Eq. <ref> in the form of a traveling wave function A(𝔣(ξ)+ϕ_1(ξ,τ)+iϕ_2(ξ,τ))e^i(A^2-k^2)τ+ikx. In this description A is the wave amplitude, 2k has the meaning of the wave velocity, and ξ=A(x-2kτ) and τ=(j/ħ)t are new spatial and time variables. In the ansatz, 𝔣(ξ) stands for an unperturbed, periodic kernel which solves the stationary NLSE, Eq. <ref>. The perturbation is exponentially factorized in time as ϕ_1,2(ξ,τ)=ϕ_1,2(ξ)e^A^2θτ, where θ denotes the instability increment of the system. To calculate the instability increment we have employed a methodology previously suggested in plasma physics to study the modulation instability of periodic waves <cit.>. The stationary part of the perturbation is considered in the form of the Bloch waves ϕ_1,2(ξ)= ∑_q𝔣(ξ)e^iqξ. Moreover, the periodic kernel in this description is given in the form 𝔣(ξ) = ∑_nC_ne^inq_0ξ, where C_n are the coefficients of the Fourier expansion. We have also introduced the parameter Q=q/q_0, where q and q_0 are the wave numbers of the Bloch envelope and of the periodic solution 𝔣(ξ), respectively. The stability analysis in the framework of this method has been performed in terms of the infinite dimensional matrix Θ_mn <cit.>, which satisfies the following eigenvalue problem: ∑_nC_n Θ_mn≡ -θ^2 C_m. Therefore, -θ^2 as a function of the 𝔪 and Q parameters forms a band structure. The condition of stability in terms of the matrix eigenvalues is satisfied when -θ^2 ∈ℝ^+. In case of -θ^2 ∈ℝ^- or -θ^2 ∈ℂ, the system instability exponentially diverges with time (see also Supplementary Note S4).

Based on the obtained -θ^2(cn) and -θ^2(dn), the stability of both solutions is described as follows: in the case of 𝔪≲ 0.24 only the cnoidal solution remains stable (see Fig. <ref>c) and Fig. <ref>e)). In this region all branches of -θ^2(cn) remain real and positive; however, lower -θ^2(dn) branches lie in the region of negative values (Fig. <ref>f)).
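The diagonalisation behind these statements can be reproduced schematically as follows. The sketch assembles a truncated Θ_mn for the cnoidal kernel from numerical Fourier integrals; it uses rescaled units in which one period of 𝔣 maps onto ξ∈[0,1] and ϖ=1, so the prefactors should be checked against Supplementary Note S4 before any quantitative use:

```python
import numpy as np
from scipy.special import ellipj, ellipk

def theta_eigenvalues(m, Q, nmax=7, w=1.0, npts=4096):
    # eigenvalues -theta^2 of a truncated stability matrix Theta_mn for
    # the cnoidal kernel; stability requires them all real and positive
    K = ellipk(m)
    xi = np.arange(npts) / npts                       # one period of f
    f = np.sqrt(2.0*m/abs(2.0*m - 1.0)) * ellipj(4.0*K*xi, m)[1]
    idx = np.arange(-nmax, nmax + 1)
    q = 2.0*np.pi*(idx + Q)                           # q_n = q0 (n + Q)
    T = np.empty((idx.size, idx.size))
    for a in range(idx.size):
        for b in range(idx.size):
            c = np.cos(2.0*np.pi*(idx[b] - idx[a])*xi)
            T[a, b] = (3.0*np.mean(f**4 * c)
                       - (4.0*w + 3.0*q[a]**2 + q[b]**2)*np.mean(f**2 * c))
        T[a, a] += (w + q[a]**2)**2
    return np.linalg.eigvals(T)

ev = theta_eigenvalues(m=0.9, Q=0.1)
print(np.max(np.abs(ev.imag)) < 1e-8 and np.min(ev.real) > 0.0)
```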
Moreover, the energies of a localized electron for the cnoidal solution in this region of 𝔪 are negative (Fig. <ref>a)) and significantly lower than those for dnoidal solutions (Fig. <ref>b)). In the range of 𝔪≳ 0.24, the imaginary part of -θ^2(cn) diverges at both values of the parameter Q (at Q=0.1 there are two branches and at Q=0.5 only one branch; more detailed dispersion curves for -θ^2(cn) and -θ^2(dn) are presented in Supplementary Note S5). In this parameter region, the cnoidal solution is unstable. Around the pole at 𝔪 =0.5 the instability exhibits exponential growth. At the same time, at the critical 𝔪=0.5 the electron localization energies ε^(cn) change sign and pass into the region of positive values. It is notable that the dnoidal solution has no singular points throughout the whole range of 𝔪 parameters and can be considered as a universal solution. The next notable region is 𝔪≳ 0.75, where the real part of -θ^2(dn) (Fig. <ref>d) and Fig. <ref>f)) becomes positive and the dnoidal waves stabilize. Further, at 𝔪→ 1 the instability of the cnoidal solution also monotonically reduces, and hence the periodic solutions converge to the robust, stable soliton solution.

Now, gradually increasing the complexity of the problem, we consider a nonlocal form for the overlap integral and an inhomogeneous nonlocality. Using a single-site diatomic potential taken in the form of the Pöschl-Teller potential (Supplementary Note S1), the nonlocal term and hopping integral can be recast in the form

W_n=-∑_p≠nγ_n^2 V_p ∫_-η^η sech^4((x-x_n)/(βa)) sech^2((x-x_p)/(βa)) dx

j_nm=-γ_n^2 V_n∫_-η^η sech^2((x-x_m)/(βa))/cosh^4((x-x_n)/(βa)) dx

where V_p is the height of the potential at site p, a is the lattice constant and β is the parameter accounting for the number of neighbouring shells over which the potential is spread. Moreover, γ_n represents the maximum of the single-site electron wave function. Since the nonlocal term represents a small perturbation with respect to the rest of the energy terms described in the Hamiltonian of Eq. <ref>, it is reasonable to define a unique hopping constant for the whole system as j̃=⟨∑_δj_nδδ ^2⟩_n, where j_nδ stands for the hopping to the arbitrary δ-th nearest neighbour with respect to site n and ⟨...⟩_n denotes the average value over n-sites. Consequently, the extended time-dependent continuous nonlocal NLSE corresponding to Eq. <ref> reads as

iħ∂𝔞_n/∂ t + j̃∂^2 𝔞_n/∂ n^2+gΥ_n | 𝔞_n|^2𝔞_n-(ε +W_n) 𝔞_n=0.

The standard NLSE belongs to the class of completely integrable differential equations, for which an infinity of invariants or conservation laws can be obtained by using, for example, the inverse scattering method <cit.> or Lax theory <cit.>. Full integrability is a necessary condition to apply any of these methods to Eq. <ref>. A simple way to determine the integrability of Eq. <ref> is based on the Painlevé test <cit.>. Passing the Painlevé test is a necessary, but not sufficient, condition for having the Painlevé property, which is defined as the absence of movable critical points or singularities of the solutions of any ordinary differential equation (ODE). It was conjectured that any ODE satisfying the Painlevé property is also fully integrable <cit.>. A necessary condition for Eq. <ref> to pass the Painlevé test is that j(gΥ_n)^2=𝒜(t), where 𝒜(t) is a time-dependent function <cit.>. Equation <ref> does not meet this requirement because Υ_n is not only a time-dependent but also a spatially dependent function via n, and consequently Eq. <ref> is nonintegrable.
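The quadratures for W_n (and, analogously, j_nm) are simple to carry out numerically. A sketch follows; all parameter values are illustrative placeholders:

```python
import numpy as np
from scipy.integrate import quad

def sech(x):
    return 1.0 / np.cosh(x)

def w_n(i, sites, gamma, V, beta, a, eta):
    # nonlocal on-site term W_n: sech^4 * sech^2 overlap with all other
    # sites p, integrated over the finite chain [-eta, eta]
    xn, total = sites[i], 0.0
    for p, xp in enumerate(sites):
        if p == i:
            continue
        g = lambda x: sech((x - xn)/(beta*a))**4 * sech((x - xp)/(beta*a))**2
        total -= gamma**2 * V * quad(g, -eta, eta)[0]
    return total

sites = np.arange(-10, 11, dtype=float)      # equilibrium positions x_p
W = np.array([w_n(i, sites, gamma=0.5, V=0.05, beta=1.5, a=1.0, eta=10.0)
              for i in range(sites.size)])
```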
When the non-integrable perturbation W_n is small, the equation becomes nearly integrable and can still be solved analytically in a perturbative fashion <cit.>. In general, if the nonlocal term is big enough and perturbation theory is not applicable, a numerical method can be used instead to obtain the solution of Eq. <ref>, and this is the choice adopted here.

After numerically solving the extended time-dependent continuous NLSE for a 1D chain with periodic boundary conditions, we obtain the time evolution of two localised Gaussian perturbations located at positions n=5 and n=-5 (Fig. <ref>). Thus, the initial condition for solutions to Eq. <ref> reads as 𝔞_n(t=0)=1/2(e^-(n-5)^2+e^-(n+5)^2). With the aim to emphasize the influence of the nonlocal term, in Fig. <ref>a)-c) we plot solitonic and unfocused solutions for a moderately weak nonlocal term, respectively, while in Fig. <ref>b)-d) we plot the same solutions but with W_n=0. We denote hereafter solutions with W_n=0 as standard solutions. Notice that we used the same material parameters for calculating both kinds of solutions of Eq. <ref>, with the only exception that for the unfocused solution the parameter √(M)ω_0 was kept bigger (√(M)ω_0=3.87· 10^-2 eV^0.5Å^-1) than the one for the solitonic solution (√(M)ω_0=1.29· 10^-2 eV^0.5Å^-1). It is also worthwhile to mention that the localized electron energy ε had the same numerical value for the solutions with and without the nonlocal term, as indicated in Table <ref>. Mathematically this approximation is valid when the nonlocal perturbation W_n is slowly changing in space, so that the derivative W^'_n is almost a constant.

As shown in Fig. <ref>, the consequence of the nonlocal term is an asymmetry of the solution with respect to the spatial dimension. Moreover, the nonlocal solution clearly deviates from the standard solution as time evolves. For example, when nonlocal effects are included, the collision of solitons ceases at about 50 fs in Fig. <ref>a), while it takes a longer time for the standard solution, around 300 fs. This demonstrates that even for moderately small values of the nonlocal term, its effects are considerable, particularly for longer periods of time. This is also clearly shown in Fig. <ref>c)-d), where in the time range from 700 to 800 fs the nonlocal solution gets more asymmetric than the standard one (see also Supplementary Note S6, Fig. S7). The asymmetry is corroborated through the Υ_n term in Eq. <ref>, which has a functional dependence on W'_n, an odd function with respect to the spatial coordinate (Supplementary Note S7, Fig. S8). The time evolution, and consequently the asymmetry of the nonlocal solution, depends strongly on minute variations of the γ_n and V_p parameters (see Supplementary Note S8, Fig. S9).

Further, analyzing the time evolution of both excitations in Fig. <ref>a)-b), one can see that both polarons oscillate with respect to the common center of mass and after some time repel each other. In general, the polaron-polaron interaction depends on the strength of the Pöschl-Teller potential and the distance between the polarons (Supplementary Notes S9, S10). In our case, in the region with atomic positions |n|>5 (see Fig. <ref>a)-b)) the excitations have an attractive interaction, while in the inner region (|n|<5) the interaction between the excitations becomes repulsive. Thus, the excitations turn into a bipolaron-like oscillating bound pair. The behaviour is typical for the biquadratic trinomial type of potential provided by the first integral of the NLSE equation.
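The text does not specify the integrator; a standard choice for equations of this type is a split-step Fourier scheme, sketched below for the quoted initial condition (ħ=1 units; the W_n profile and the coefficients are placeholders, and a first-order Lie splitting is used for brevity):

```python
import numpy as np

N, L = 512, 80.0
n = np.linspace(-L/2.0, L/2.0, N, endpoint=False)
k = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)

a = 0.5*(np.exp(-(n - 5.0)**2) + np.exp(-(n + 5.0)**2))  # a_n(t=0)

j_hop, gY, eps = 1.0, 2.0, 0.0
W = 0.05*np.tanh(n/10.0)          # placeholder smooth nonlocal profile
dt, nsteps = 1e-3, 20000
for _ in range(nsteps):
    # linear (kinetic) step in Fourier space: phase factor exp(-i j k^2 dt)
    a = np.fft.ifft(np.exp(-1j*j_hop*k**2*dt) * np.fft.fft(a))
    # local step: nonlinearity and the (eps + W_n) potential
    a *= np.exp(1j*(gY*np.abs(a)**2 - eps - W)*dt)
```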
In the case of the extended time-dependent continuous NLSE, the time evolution of the system is more complex. In this case, we suggest that the system admits internal modes (Supplementary Note S11). If the velocity or the energy of the polaronic excitation is large enough, it may happen that the energy during the collision is transferred to the internal mode. This situation gives rise to an inelastic collision and, consequently, the two excitations escape from their attractive potential well. Moreover, nonlocality adds additional effects to the behaviour of the system. Its presence, interpreted as an external perturbation, shortens the lifetime of the bound state, as shown in Fig. <ref>a)-b). Since the internal modes refer to a localised solution of Eq. (<ref>), the energy in the polaron-polaron collision is preserved. Thus, if the collision excites the internal mode, a subsequent collision can de-excite it. The NLSE equation is time-reversal invariant <cit.>. Consequently, if both excitations come to a second collision in which the phases of the internal modes of both polarons are coherent with the phases in a previous collision, then the second collision will be the time reversal of the first collision. The second collision will cancel out the excitation of the internal modes so that both polarons will recover enough kinetic energy to escape from the attractive potential and break apart the bound state.

In summary, we have studied multi-polaron solutions in the framework of the 1D Holstein model. We found that the periodic solutions can stabilize in a certain range of the parameters. We emphasize the importance of the universal, dnoidal solution, which previously has not been discussed. Moreover, the Holstein molecular crystal model in the continuum limit was extended and studied under the influence of nonlocal effects. In particular, we have observed that nonlocal effects influence polaron-polaron collisions by inducing an inelastic scattering via the excitation of internal modes. We show that nonlocality forces the polaron dynamics to develop a spatial asymmetry and delocalizes bound polaronic states earlier in time than the standard solution.

N.V.S. acknowledges financial support of the Swedish Research Council (VR) (project 2014-5993). O.E. acknowledges support from the Swedish Research Council (VR) and the Knut and Alice Wallenberg (KAW) Foundation (grants 2013.0020 and 2012.0031).

A.D. Boardman, M. Bertolotti and T. Twardowski, eds., Nonlinear Waves in Solid-State Physics, Plenum, New York (1986).
A. Barone and G. Paterno, Physics and Applications of the Josephson Effect, Wiley, New York (1982).
S.T. Chui and J.D. Weeks, Phys. Rev. Lett. 40, 733 (1978).
B. Neudecker, Z. Phys. B: Condens. Matter 49, 57 (1982).
L.D. Landau and S.I. Pekar, Zh. Eksp. Teor. Fiz. 18, 419 (1948).
A.S. Alexandrov, ed., Polarons in Advanced Materials, Springer, Dordrecht, The Netherlands (2007).
N.N. Bogolubov and N.N. Bogolubov Jr., Some Aspects of Polaron Theory, World Scientific, Singapore (1988).
A.S. Davydov and N.I. Kislukha, Sov. Phys. JETP 44, 571 (1976).
T. Holstein, Ann. Phys. 8, 325 (1959); ibid. 8, 343 (1959).
L.A. Turkevich and T.D. Holstein, Phys. Rev. B 35, 7474 (1987).
G. Kopidakis, C.M. Soukoulis and E.N. Economou, Phys. Rev. B 51, 15038 (1995).
H.S. Dhillon, F.V. Kusmartsev and K.E. Kürten, J. Nonlinear Math. Phys. 8, 38 (2001).
E. Fermi, J. Pasta and S. Ulam, Studies of Nonlinear Problems, Document LA-1940 (1955).
Z. Vosika, Z. Przulj, L. Hadzievski and Z. Ivic, J. Phys.: Condens. Matter 21, 275404 (2009).
C.A. Perroni, E. Piegari, M. Capone and V. Cataudella, Phys. Rev. B 69, 174301 (2004).
L. Zhong, Y. Li, Y. Chen, W. Hong, W. Hu and Q. Guo, Sci. Rep. 7, 41438 (2017).
D.K. Campbell, J.F. Schonfeld and C.A. Wingate, Physica D 9, 1 (1983).
O.M. Braun and Y.S. Kivshar, Phys. Rep. 306, 1 (1998).
M. Wang, X. Li and J. Zhang, Phys. Lett. A 372, 417 (2008).
S. Cruz y Cruz, S. Kuru and J. Negro, Phys. Lett. A 372, 1391 (2008); G. Pöschl and E. Teller, Z. Phys. 83, 143 (1933).
Supplementary Material.
V. Pavlenko and V. Petviashvili, Journal de Physique Colloques 40 (C7), 621-622 (1979).
In practice, to perform a numerical diagonalisation, a square matrix with a finite size is considered. In order to examine matrix size effects we have diagonalised 7×7, 15×15 and 33×33 matrices and noticed qualitatively similar results. The presented results were obtained with a 15×15 matrix diagonalisation.
C.S. Gardner, J.M. Greene, M.D. Kruskal and R.M. Miura, Commun. Pure Appl. Math. 27, 97 (1974).
P.D. Lax, Commun. Pure Appl. Math. 21, 467 (1968).
M.J. Ablowitz, A. Ramani and H. Segur, Lett. Nuovo Cim. 23, 333 (1978).
P.G. Drazin and R.S. Johnson, Solitons: An Introduction (Cambridge University Press, Cambridge, 1996).
C. Özemir and F. Güngör, Rev. Math. Phys. 24, 1250015 (2012).
A. Chabchoub and M. Fink, Phys. Rev. Lett. 112, 124101 (2014).

Multi-polaron solutions, nonlocal effects and internal modes in a nonlinear chain
—Supplementary Material—

Supplementary Note S1: Nonlocal extensions of the Holstein molecular crystal model in the continuum limit.

Following Holstein's seminal paper <cit.>, we reformulate the model starting from the site-diagonal Hamiltonian:

H=H_el+H_lat+H_el-lat+H_n-loc,

where

H_el = -j ∑_n a_n^† (a_n+1 +a_n-1),
H_lat = ∑_n ( p^2_n/2M+ 1/2 M ω^2_0 x_n^2),
H_el-lat = -g∑_n x_n a_n^† a_n,
H_n-loc = ∑_n W_n a_n^† a_n.

The first term H_el describes tight-binding electrons with the nearest-neighbor overlap integral j. The second term of the Hamiltonian, H_lat, describes a 1D lattice of N identical diatomic molecules with mass M and momentum operator p_n≡(ħ/i)∂/ ∂ x_n. Nuclei harmonically oscillate around the stationary mass center with frequency ω_0 and deviation x_n, with respect to the equilibrium interatomic separation. In the zero-order adiabatic approach, assumed in the present work, only the vibrational term remains to be considered. The next term, H_el-lat, stands for the electron-lattice interaction with the characteristic coupling constant g. Finally, the nonlocal term W_n(x_1,..., x_n), in a simple picture, is assumed to be taken in the form of the Pöschl-Teller potential <cit.>. This term represents the perturbation on site n due to the presence of the other atomic sites. It can be calculated, in the continuum limit, as the following Coulomb integral <cit.>:

W_n(x_1, ...,x_n)=∫|ϕ(x-na,x_n)|^2∑_p≠ n U(x-pa,x_p)dx

where ϕ_n≡ϕ(x-na,x_n) are the "single-site" atomic electron wave functions, U is the single-site atomic potential and a is the lattice parameter.
As commented above, the atomic potential can be modelled by using the Pöschl-Teller potential given by:

U(x-pa,x_p)=- V_p/cosh^2((x-x_p)/(β a))

where V_p is the height of the potential, a is the lattice constant and β is the parameter accounting for the potential overlapping with nearest neighbours. For the single-site electronic wave function, we used a localised function as:

ϕ(x-na,x_n)=γ_n sech^2((x-x_n)/(β a))

Here, γ_n represents the maximum of the wave function. For simplicity, we used the same β parameter for both U and ϕ since they are related to the overlapping of the electron wave function, and this is precisely what the W_n term is meant for. Consequently, it is expected that W_n will be proportional to β. By inserting Eqs. (<ref>)-(<ref>) in Eq. (<ref>), the nonlocal term W_n can be recast in the form:

W_n=-∑_p≠ nγ_n^2 V_p ∫_-η^η sech^4((x-x_n)/(β a)) sech^2((x-x_p)/(β a)) dx

where η represents half of the size of the 1D system, i.e. half of the number of diatomic molecules. The overlap integral between neighbouring diatomic molecules is defined, in the continuum limit, as:

j(x_n,x_m)≡∫ϕ^*(x-na,x_n) U(x-na,x_n) ϕ(x-ma,x_m) dx

Using the same picture as described above and assuming for simplicity that γ_n=γ_m, the hopping integral can be recast in the following form:

j(x_n,x_m)=-γ_n^2 V_n∫_-η^η sech^2((x-x_m)/(β a))/cosh^4((x-x_n)/(β a)) dx

In order to ensure that the boundary conditions of the chain of diatomic molecules are periodic, we take a finite chain in the range [-η,η] but still allow interaction of the edge molecules with neighbouring atoms outside of the chain. The general Hamiltonian, as defined in Eq. (<ref>), projected onto a single-electron state solves the following eigenvalue problem:

ℰa_n=1/2∑_m Mω_0^2x_m^2a_n-gx_na_n+W_na_n -j(a_n-1+a_n+1).

We multiply Eq. (<ref>) by the complex-conjugated amplitude a_n^* and sum over all sites (here we employ the normalisation condition ∑_n| a_n|^2=1). The procedure leads to an expression for the total energy:

ℰ=1/2∑_m Mω_0^2x_m^2-∑_n gx_n | a_n|^2 +∑_n W_n | a_n|^2-∑_n j(a_n+1+a_n-1)a_n^*.

A further differentiation over a given atomic site position, x_n, neglecting the nearest-neighbour electronic terms:

∂ℰ/∂ x_n=Mω_0^2x_n-(g-W_n^') | a_n|^2,

near the equilibrium point leads to an important analytical relation expressing the dependency between the electronic and lattice degrees of freedom:

𝒳_n=((g-W_n^')/(Mω_0^2)) | 𝔞_n|^2,

where 𝒳_n is the atomic position and 𝔞_n is the solution of Eq. (<ref>) for the minimum energy ℰ. Substituting Eq. (<ref>) into Eq. (<ref>), we obtain an electronic discrete Schrödinger-type equation:

ℰ𝔞_n=1/2∑_m Mω_0^2𝒳_m^2𝔞_n-(g(g-W_n^')/(Mω_0^2)) |𝔞_n |^2𝔞_n+W_n𝔞_n -j(𝔞_n-1+𝔞_n+1).

After introducing the convenient substitution:

ε =-ℰ+1/2∑_n Mω_0^2𝒳_n^2-2j,

Eq. (<ref>) takes the following form:

j(𝔞_n-1-2𝔞_n+𝔞_n+1)+(g(g-W_n^')/(Mω_0^2)) | 𝔞_n|^2𝔞_n-(ε +W_n) 𝔞_n=0.

In the continuum limit, 𝔞_n is assumed to be a differentiable function of the continuous position variable n:

𝔞_n ± 1=𝔞_n±∂𝔞_n/∂ n+1/2∂^2 𝔞_n/∂ n^2.

In the case of the strongly localised wave function (W_n=0 as β→ 0), this approach turns Eq. (<ref>) into the so-called classical continuous nonlinear Schrödinger equation (CNLSE) <cit.>:

j∂^2 𝔞_n/∂ n^2+(g^2/(Mω_0^2)) | 𝔞_n|^2𝔞_n-ε𝔞_n=0.

Interestingly, the first term in Eq. (<ref>) can be generalised for the case of higher order overlap integrals. We found that, in the continuum limit, for the case of hopping to the arbitrary δ-th nearest neighbour:

j_δ𝔞_n+δ+j_δ𝔞_n-δ=j_δ(2𝔞_n+δ ^2∂^2 𝔞_n/∂ n^2).

It is easy to prove that in the case of the first-nearest neighbour (δ=1), this relation converges to Eq.
(<ref>):

j𝔞_n+1 +j𝔞_n-1=j(2𝔞_n+∂^2 𝔞_n/∂ n^2).

Thus, accounting for δ-nearest neighbours, we finally reformulate the problem in terms of the extended CNLSE with variable coefficients:

∑_δj_δδ ^2∂^2 𝔞_n/∂ n^2+(g(g-W_n^')/(Mω_0^2)) | 𝔞_n|^2𝔞_n-(ε +W_n) 𝔞_n=0

where we have disregarded the functional dependence of the functions for the sake of simplicity. Finally, the extended time-dependent CNLSE with variable coefficients already discussed in the main text of the manuscript is obtained after adding the time derivative:

iħ∂𝔞_n/∂ t+∑_δj_δδ ^2∂^2 𝔞_n/∂ n^2+(g(g-W_n^')/(Mω_0^2)) | 𝔞_n|^2𝔞_n-(ε +W_n)𝔞_n=0.

Supplementary Note S2: Exact solutions of the one-dimensional extended time-dependent nonlinear Schrödinger equation by using the G'/G expansion method.

The G'/G-expansion method was first introduced in Ref. <cit.> and is extensively used to search for exact solutions of time-dependent nonlinear equations <cit.>. The method is based on linearising the solution in the travelling wave ansatz. Mathematically, the method maps the nonlinear equation into a second order differential equation with constant coefficients, and the problem is reduced to a simple algebraic computation. Further details about the method can be found in Ref. <cit.>.

Let us then start by considering the one-dimensional extended time-dependent nonlinear Schrödinger equation (NLS) with constant coefficients

i ∂ϕ(x,t)/∂ t + R ∂^2ϕ(x,t)/∂ x^2+S |ϕ(x,t)|^2ϕ(x,t) - T ϕ(x,t) =0

where R, S and T are real constant coefficients and ϕ(x,t) represents the electron wave function at position x and time t. As the NLS equation is complex, we look for a solution factorised as:

ϕ(x,t)=A𝒰(x,t) e^i[(A^2-k^2)t+kx]

where A is the amplitude of the wave function, k represents the wave vector and 𝒰(x,t) is a complex function. Now, inserting Eq. (<ref>) in Eq. (<ref>), taking the appropriate derivatives of ϕ and after some algebra, Eq. (<ref>) can be recast in the form:

i ∂𝒰(x,t)/∂ t + R (∂^2 𝒰(x,t)/∂ x^2+2 ∂𝒰(x,t)/∂ x ik) + S𝒰^3(x,t) +(k^2-T-A^2-R k^2) 𝒰(x,t)= 0

Now, by using the travelling wave ansatz, we define ξ=x/R-2kt so that 𝒰(x,t)=𝒰(ξ). Consequently, Eq. (<ref>) can be written in terms of the new variable ξ as:

(1/R) d^2𝒰(ξ)/dξ^2+S 𝒰^3(ξ)+(k^2-T-A^2-R k^2) 𝒰(ξ)=0

Finally, the equation can be simplified as:

d^2𝒰(ξ)/dξ^2+R S 𝒰^3(ξ)+ν 𝒰(ξ)=0

where ν=Rk^2-RT-RA^2-R^2 k^2. Applying now the G'/G-expansion method to Eq. (<ref>), we first have to consider the homogeneous balance between the highest order nonlinear term and the highest order derivative of 𝒰(ξ) in Eq. (<ref>), so that here n=3-2=1. In the G'/G-expansion method, the travelling wave solution 𝒰(ξ) can be linearised and expressed by a polynomial in powers of G'/G as:

𝒰(ξ)=∑_i=0^n a_i(G'(ξ)/G(ξ))^i (n=1) = a_0+a_1 χ(ξ)

Here we use the notation χ(ξ)=G'(ξ)/G(ξ), and G(ξ) satisfies the second order ordinary differential equation:

G”(ξ)+λ G'(ξ) + μ G(ξ) =0

where a_0, a_1, λ and μ are constants. After calculating the first and second derivatives of 𝒰 with respect to ξ and by inserting Eq. (<ref>) in the derivatives, the second derivative of 𝒰 turns into:

d^2𝒰(ξ)/dξ^2=a_1(λμ+(λ^2+2μ)χ(ξ)+3λχ^2(ξ)+2χ^3(ξ))

By using Eq. (<ref>) and Eq. (<ref>) in Eq.
(<ref>), we obtain the following polynomial sorted in terms of χ:

a_1λμ + R S a_0^3+ν a_0+(a_1(λ^2+2μ)+3RSa_0^2a_1+ν a_1)χ +(3 a_1λ +3RS a_0 a_1^2)χ^2+(2a_1+RS a_1^3)χ^3 =0

Equation (<ref>) is fulfilled when the coefficients of the polynomial are set to zero; thus we end up with a system of 4 equations:

a_1λμ+RS a_0^3+ν a_0 =0
a_1(λ^2 +2μ)+3RSa_0^2a_1+ν a_1 =0
3a_1λ+3RSa_0a_1^2=0
2a_1+RS a_1^3=0

The corresponding set of solutions is:

Solution 1) Trivial solution: a_1=0=a_0, or a_1=0 for any constant a_0.

All remaining solutions are non-trivial with a_1≠0.

Solution 2) a_1=± i√(2/RS); a_0=-(6^1/3ν+(∓ 9 iλμ+√(6ν^3-81λ^2μ^2))^2/3)/(2^1/6 3^2/3√(RS)(∓ 9 iλμ+√(6ν^3-81λ^2μ^2))^1/3)

Solution 3) a_1=± i√(2/RS); a_0=-(4· 2^1/3ν+(-2)^2/3(∓ 9 iλμ+√(6ν^3-81λ^2μ^2))^2/3)/(2^5/6 3^2/3√(RS)(∓ 9 iλμ+√(6ν^3-81λ^2μ^2))^1/3)

Solution 4) a_1=± i√(2/RS); a_0=-(6^1/3ν+(∓ 9 iλμ+√(6ν^3-81λ^2μ^2))^2/3)/(2^1/6 3^2/3√(RS)(∓ 9 iλμ+√(6ν^3-81λ^2μ^2))^1/3)

Solution 5) a_1=± i√(2/RS); a_0=±√((-ν-λ^2-2μ)/(3RS))

Solution 6) a_1=± i√(2/RS); a_0=± iλ/√(2RS)

The general solution of Eq. (<ref>) is:

a) Case 1: Self-focusing solution (λ^2-4μ>0)

χ(ξ)=(√(λ^2-4μ)/2)(c_1 cosh((√(λ^2-4μ)/2)ξ)+c_2 sinh((√(λ^2-4μ)/2)ξ))/(c_2 cosh((√(λ^2-4μ)/2)ξ)+c_1 sinh((√(λ^2-4μ)/2)ξ))

b) Case 2: Periodic solution (λ^2-4μ<0)

χ(ξ)=(√(4μ-λ^2)/2)(c_1 cos((√(4μ-λ^2)/2)ξ)-c_2 sin((√(4μ-λ^2)/2)ξ))/(c_2 cos((√(4μ-λ^2)/2)ξ)-c_1 sin((√(4μ-λ^2)/2)ξ))

where c_1 and c_2 are arbitrary constants. Finally, the family of solutions for the one-dimensional extended time-dependent NLS equation (see Eq. (<ref>)) is obtained after substituting Eqs. (<ref>)-(<ref>) in Eq. (<ref>) with the constants a_0 and a_1 already calculated in Eqs. (<ref>)-(<ref>). Some representative solutions of Eq. (<ref>) are plotted in Supplementary Figs. <ref>-<ref> for different sets of parameters. Thus, in Supplementary Figs. <ref>-<ref> we plot the solitonic solutions for λ^2-4μ>0. The profile of the solution depends on the numerical values of the parameters, but the whole set of solutions can be classified in three different groups, i.e., kink, anti-kink and kink–anti-kink pairs. Notice the different solitonic profile shown in Supplementary Fig. <ref> with respect to Supplementary Fig. <ref>c). In the case of λ^2-4μ<0, we basically obtain the harmonic or periodic solutions. In particular, in Supplementary Figs. <ref>b)-<ref>b) the difference between the two periodic profiles can be appreciated.

Supplementary Note S3: On the periodic solutions of the one-dimensional polaronic model.

In this section, we describe the derivation of the periodic solutions in more detail. As shown in Supplementary Note S1, the initial electron-lattice Hamiltonian in the absence of the nonlocal term can be mapped into the continuous NLSE:

j∂^2 𝔞_n/∂ n^2+(g^2/(Mω_0^2)) | 𝔞_n|^2𝔞_n-ε𝔞_n=0.

Hereby it is convenient to introduce the following notation: 𝔣^2=(g^2/(ε Mω_0^2)) | 𝔞_n|^2 and n=(j/ε)^1/2n', which leads to:

𝔣^”_n'n'+𝔣^3-𝔣=0.

Periodic solutions of Eq. (<ref>) are sought in the form of the Jacobi elliptic functions ζ_0 cn[ζ n,𝔪] and ζ_0 dn[ζ n,𝔪], where ζ_0 and ζ are coefficients that can be expressed as functions of 𝔪, the square of the elliptic function modulus. After some algebra we obtain:

𝔣^(cn) = ( 2𝔪/ | 2𝔪-1| )^1/2 cn[ (1/ | 2𝔪-1|^1/2) n^',𝔪 ],
𝔣^(dn) = ( 2/(2-𝔪) )^1/2 dn[ (1/ ( 2-𝔪 )^1/2) n^',𝔪 ].

By using the notation σ=g^2/(4Mω_0^2j), we can rewrite the periodic cnoidal and the previously not discussed dnoidal solutions of Eq. (<ref>) as:

(𝔪^1/2ζ^(cn)/(2σ)^1/2) cn[ ζ^(cn)n,𝔪 ]; ζ^(cn)=(ε^(cn)/j)^1/2 (1/|2𝔪-1|^1/2)
(ζ^(dn)/(2σ)^1/2) dn[ ζ^(dn)n,𝔪 ]; ζ^(dn)=(ε^(dn)/j)^1/2 (1/(2-𝔪)^1/2).
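As a quick numerical sanity check (not part of the original derivation), one can verify by finite differences that the dnoidal kernel indeed satisfies 𝔣''+𝔣^3-𝔣=0:

```python
import numpy as np
from scipy.special import ellipj

m = 0.8
x = np.linspace(-10.0, 10.0, 20001)
h = x[1] - x[0]
# f^(dn)(n') = (2/(2-m))^(1/2) dn[n'/(2-m)^(1/2), m]
f = np.sqrt(2.0/(2.0 - m)) * ellipj(x/np.sqrt(2.0 - m), m)[2]
resid = (f[2:] - 2.0*f[1:-1] + f[:-2])/h**2 + f[1:-1]**3 - f[1:-1]
print(np.max(np.abs(resid)))   # ~0 up to discretisation error
```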
The normalisation condition of 𝔞_n in the case of 𝔑-well solutions leads to:

(𝔑/σ)𝔪ζ^(cn)∫_0^K cn^2 [ζ^(cn)n,𝔪 ]dn=(𝔑/σ)ζ^(cn)(E-𝔪'K)= 1
(𝔑/σ)ζ^(dn)∫_0^K dn^2[ ζ^(dn)n,𝔪 ] dn=(𝔑/σ)ζ^(dn)E= 1,

where K is the complete elliptic integral of the first kind, E is the complete elliptic integral of the second kind and 𝔪' is the parameter complementary to 𝔪 <cit.>. After some algebra, the relations shown in Eq. (<ref>) lead us to the following expressions for the energy of the localised electron, represented by the parameter ε already introduced in Supplementary Note S1:

ε^(cn) =j ( σ/𝔑 )^2 (2𝔪-1)/(E-𝔪'K)^2,
ε^(dn) =j ( σ/𝔑 )^2 (2-𝔪)/E^2.

The length of the chain 2η and the number of wells along the chain 𝔑 are related as 2ηζ=𝔑K. Considering this relation, we find it convenient to present the energy of the localised electron in the following form:

ε^(cn) =( g^2/(4Mω_0^2) )( K/2η ) (2𝔪-1)/(E-𝔪'K),
ε^(dn) = ( g^2/(4Mω_0^2) )( K/2η ) (2-𝔪)/E.

Supplementary Note S4: On the modulation instability of the periodic solutions.

Investigating the stability problem is the key method to clarify the system behaviour in the case of a non-degenerate tree of solutions. We focus on the cnoidal and the dnoidal solutions as the most probable candidates describing the behaviour of the multi-polaron chain (see Supplementary Note S3) and examine their modulation instability <cit.> against small perturbations. We start by considering the time dependent CNLSE (Eq. (<ref>)) in the form:

iħ∂𝔞_n/∂ t+j ∂^2 𝔞_n/∂ n^2+(g^2/(Mω_0^2)) | 𝔞_n|^2𝔞_n-𝒲𝔞_n=0,

where 𝒲 plays the role of an external potential, which we assume to be constant in order to study the perturbation near the manifold of the analytically obtained periodic solutions (it is easy to prove that the conditional relation for the flat profile of the non-local term W_n in Eq. (<ref>) reads V_p ≪ γ_n and 2η>β a). Hereby we also find it convenient to introduce the following substitutions: ϕ^2=(g^2/(j Mω_0^2)) | 𝔞_n|^2 and t=(ħ/j)τ. That leads to the following equation:

iϕ^'_τ+ϕ^”_nn+ | ϕ |^2ϕ -(𝒲/j)ϕ =0

In order to perform the study, we use the following ansatz in the form of a travelling wave function (Supplementary Note S1):

ϕ (ξ,τ)=A(𝔣(ξ)+ϕ_1(ξ,τ)+iϕ_2(ξ,τ))e^i(A^2-k^2)τ+ikx

where we redefine ξ=A(n-2kτ). In this notation 𝔣(ξ) represents the stationary part of the solution; ϕ_1(ξ,τ) and iϕ_2(ξ,τ) are assumed to be the lower-order terms with respect to the unperturbed solution. They play the role of small perturbations in the system. Substituting Eq. (<ref>) into Eq. (<ref>) leads us to the following system of equations:

𝔣^”_ξξ(ξ)+𝔣^3(ξ)-ϖ𝔣(ξ)=0
∂ϕ_1(ξ,τ)/∂τ=A^2(ϖ-𝔣^2(ξ )-∂^2/∂ξ^2 )ϕ_2(ξ,τ)=A^2 𝒪̂_1ϕ_2(ξ,τ)
i∂ϕ_2(ξ,τ)/∂τ=-iA^2(ϖ-3𝔣^2(ξ )-∂^2/∂ξ^2 )ϕ_1(ξ,τ)=-iA^2 𝒪̂_2ϕ_1(ξ,τ).

Here we find it convenient to introduce the dimensionless parameter ϖ=1+𝒲/(jA^2). Moreover, 𝒪̂_1 and 𝒪̂_2 are the operators acting in the real and complex space, respectively. After some simple algebra, we obtain the following relation:

∂^2 ϕ_1(ξ,τ)/∂τ^2=-A^4𝒪̂_1𝒪̂_2ϕ_1(ξ,τ).

Further, we introduce the time factorisation of ϕ_1(ξ,τ) in the form:

ϕ_1(ξ,τ)=ϕ_1(ξ)e^A^2θτ,

where the θ parameter is the so-called instability increment. Substituting the factorised function into Eq. (<ref>) leads us to the following relation:

𝒪̂_1𝒪̂_2ϕ_1(ξ)=-θ^2 ϕ_1(ξ)

We substitute a small, periodic perturbation in the form of the Bloch-Floquet set: ϕ_1(ξ)=∑_q𝔣(ξ)e^iqξ. A Fourier series expansion of 𝔣(ξ) at a given q leads us to ϕ_1^(q)=∑_nC_ne^inq_0ξe^iqξ= ∑_nC_ne^iq_nξ, which we substitute into Eq. (<ref>). C_n are constant coefficients.
Supplementary Note S4: On the modulation instability of the periodic solutions.

Investigating the stability problem is the key method to clarify the system behaviour in the case of a non-degenerate tree of solutions. We focus on the cnoidal and dnoidal solutions as the most probable candidates describing the behaviour of the multi-polaron chain (see Supplementary Note S3) and examine their modulation instability <cit.> against small perturbations. We start by considering the time-dependent CNLSE (Eq. (<ref>)) in the form

iħ ∂𝔞_n/∂t + j ∂^2𝔞_n/∂n^2 + (g^2/(Mω_0^2)) |𝔞_n|^2 𝔞_n − 𝒲𝔞_n = 0,

where 𝒲 plays the role of an external potential, which we assume to be constant in order to study perturbations near the manifold of the analytically obtained periodic solutions (it is easy to prove that the condition for a flat profile of the non-local term W_n in Eq. (<ref>) reads V_p ≪ γ_n and 2η > βa). We also find it convenient to introduce the substitutions ϕ^2 = (g^2/(jMω_0^2)) |𝔞_n|^2 and t = (ħ/j)τ. This leads to the equation

iϕ'_τ + ϕ''_{nn} + |ϕ|^2ϕ − (𝒲/j)ϕ = 0.

In order to perform the study, we use an ansatz in the form of a travelling wave (Supplementary Note S1):

ϕ(ξ,τ) = A(𝔣(ξ) + ϕ_1(ξ,τ) + iϕ_2(ξ,τ)) e^{i(A^2−k^2)τ + ikn},

where we redefine ξ = A(n − 2kτ). In this notation 𝔣(ξ) represents the stationary part of the solution, while ϕ_1(ξ,τ) and iϕ_2(ξ,τ) are assumed to be lower-order terms with respect to the unperturbed solution; they play the role of small perturbations of the system. Substituting Eq. (<ref>) into Eq. (<ref>) leads to the following system of equations:

𝔣''_{ξξ}(ξ) + 𝔣^3(ξ) − ϖ𝔣(ξ) = 0,
∂ϕ_1(ξ,τ)/∂τ = A^2 (ϖ − 𝔣^2(ξ) − ∂^2/∂ξ^2) ϕ_2(ξ,τ) = A^2 𝒪̂_1 ϕ_2(ξ,τ),
i∂ϕ_2(ξ,τ)/∂τ = −iA^2 (ϖ − 3𝔣^2(ξ) − ∂^2/∂ξ^2) ϕ_1(ξ,τ) = −iA^2 𝒪̂_2 ϕ_1(ξ,τ).

Here we find it convenient to introduce the dimensionless parameter ϖ = 1 + 𝒲/(jA^2). Moreover, 𝒪̂_1 and 𝒪̂_2 are operators acting in the real and complex space, respectively. After some simple algebra, we obtain the relation

∂^2ϕ_1(ξ,τ)/∂τ^2 = −A^4 𝒪̂_1𝒪̂_2 ϕ_1(ξ,τ).

Further, we factorise the time dependence of ϕ_1(ξ,τ) in the form ϕ_1(ξ,τ) = ϕ_1(ξ) e^{A^2θτ}, where the parameter θ is the so-called instability increment. Substituting the factorised function into Eq. (<ref>) leads to

𝒪̂_1𝒪̂_2 ϕ_1(ξ) = −θ^2 ϕ_1(ξ).

We substitute a small, periodic perturbation in the form of the Bloch-Floquet set ϕ_1(ξ) = ∑_q 𝔣(ξ) e^{iqξ}. A Fourier series expansion of 𝔣(ξ) at given q leads to ϕ_1^{(q)} = ∑_n C_n e^{inq_0ξ} e^{iqξ} = ∑_n C_n e^{iq_nξ}, which we substitute into Eq. (<ref>); C_n are constant coefficients. Furthermore, multiplying the obtained relation by e^{−iq_mξ} and integrating over l, the period of the function 𝔣(ξ), we end up with

(1/l) ∑_n C_n ∫_0^l e^{−iq_mξ} 𝒪̂_1𝒪̂_2 e^{iq_nξ} dξ = ∑_n Θ_mn C_n ≡ −θ^2 C_m.

Thus, the analysis of the system's stability is reformulated in terms of an eigenvalue problem for the matrix Θ_mn. It is easy to see that −θ^2 ∈ ℝ^+ is the condition for the system to be stable with respect to the small perturbation; if instead −θ^2 ∈ ℝ^− or −θ^2 ∈ ℂ, the perturbation diverges exponentially with time. Replacing the operators in Eq. (<ref>) using Eq. (<ref>), we finally find the matrix Θ_mn. The relation is very similar to the one obtained previously in studies of nonlinear waves in plasma physics <cit.>:

Θ_mn = (ϖ + q_n^2)^2 δ_mn + 3 (1/l) ∫_0^l 𝔣(ξ)^4 cos[(q_n−q_m)ξ] dξ − (1/l) ∫_0^l (4ϖ + 3q_m^2 + q_n^2) 𝔣(ξ)^2 cos[(q_n−q_m)ξ] dξ.

In order to solve the eigenvalue problem for Θ_mn, we substitute the periodic solutions obtained in Supplementary Note S3 into Eq. (<ref>). We find it convenient to introduce q_n = nq_0 + q = q_0(n+Q), where Q and n are numerical parameters (n an integer), and to normalise the integrals with respect to l. Then for the cnoidal solution we find

Θ_mn^(cn) = ϖ^2 { (1 + (π(Q+n)/(2K(2𝔪−1)^{1/2}))^2)^2 δ_mn + 3 (2𝔪/(2𝔪−1))^2 ∫_0^1 cn^4[4Kξ, 𝔪] cos[2π(n−m)ξ] dξ − (2𝔪/(2𝔪−1)) ∫_0^1 (4 + 3(π(Q+m)/(2K(2𝔪−1)^{1/2}))^2 + (π(Q+n)/(2K(2𝔪−1)^{1/2}))^2) cn^2[4Kξ, 𝔪] cos[2π(n−m)ξ] dξ },

and for the dnoidal one

Θ_mn^(dn) = ϖ^2 { (1 + (π(Q+n)/(K(2−𝔪)^{1/2}))^2)^2 δ_mn + 3 (2/(2−𝔪))^2 ∫_0^1 dn^4[2Kξ, 𝔪] cos[2π(n−m)ξ] dξ − (2/(2−𝔪)) ∫_0^1 (4 + 3(π(Q+m)/(K(2−𝔪)^{1/2}))^2 + (π(Q+n)/(K(2−𝔪)^{1/2}))^2) dn^2[2Kξ, 𝔪] cos[2π(n−m)ξ] dξ }.

Supplementary Note S5: Dispersion law for the −θ^2 parameter in the small and large 𝔪 limits.
Supplementary Note S6: Time evolution of two localised Gaussian perturbations described by the extended time-dependent continuous NLSE.
Supplementary Note S7: Derivative of the non-local term W_n.
Supplementary Note S8: Asymmetry of the nonlocal solution for a minute change of the potential.
Supplementary Note S9: Time evolution of two localised Gaussian perturbations for two different strengths of the Pöschl-Teller potential.
Supplementary Note S10: Time evolution of two localised Gaussian perturbations placed at different sites.

Supplementary Note S11: Internal modes in the extended time-dependent continuous NLSE.

In this section, we demonstrate that the extended time-dependent continuous NLSE described by Eq. (<ref>) admits internal modes. The demonstration is based partially on Ref. <cit.>. In order to determine the internal mode, we first analyse small linear perturbations of the soliton solution, ϕ_0(x,ω), of Eq. (<ref>). We then linearise the extended time-dependent continuous NLSE around the soliton solution, applying the ansatz

ϕ(x,t) = (ϕ_0 + (Y(x,ω,Ω) + Z(x,ω,Ω)) e^{−iΩt} + (Y^*(x,ω,Ω) − Z^*(x,ω,Ω)) e^{iΩt}) e^{iωt},

where Y and Z are complex functions, Ω is an eigenvalue, and ω represents the frequency of the fundamental wave; the asterisk stands for complex conjugation. Substituting Eq. (<ref>) into Eq. (<ref>), neglecting nonlinear terms, and taking into account that the soliton solution ϕ_0(x,ω) satisfies the extended time-dependent continuous NLSE, the problem described in Eq. (<ref>) can be reduced to the linear eigenvalue problem

[ 0 ℒ̂_0; ℒ̂_1 0 ] [ Y; Z ] = Ω [ Y; Z ],

where ℒ̂_0 = −R∂^2/∂x^2 + ω − 2S|ϕ_0|^2 + Sϕ_0^2 + T and ℒ̂_1 = −R∂^2/∂x^2 + ω − 2S|ϕ_0|^2 − Sϕ_0^2 + T.
By using the ansatz ϕ = ϕ̃ e^{iωt}, Eq. (<ref>) can be recast in the form

R ∂^2ϕ̃/∂x^2 − ωϕ̃ + (S|ϕ̃|^2 − T)ϕ̃ = 0,

where ϕ̃ represents a general localised solution. Now, assuming the case T ≪ 1 in Eq. (<ref>), we can perturbatively expand the soliton solution as

ϕ_0 = φ_0 + Tφ_1,

with φ_0, φ_1 ∈ ℝ. Performing the expansion of Eq. (<ref>) in the operators ℒ̂_0 and ℒ̂_1 and neglecting terms of order T^2 yields

ℒ̂_0 = −R∂^2/∂x^2 + ω − Sφ_0^2 − 2STφ_0φ_1 + T = ℒ̂_0^0 + Tℒ̂_0^1,
ℒ̂_1 = −R∂^2/∂x^2 + ω − 3Sφ_0^2 − 6STφ_0φ_1 + T = ℒ̂_1^0 + Tℒ̂_1^1,

where φ_0 satisfies Eq. (<ref>) with T = 0, or equivalently the equation ℒ̂_0^0 φ_0 = 0, while φ_1 is governed by the linear inhomogeneous equation

ℒ̂_1^0 φ_1 = −Tφ_0.

As already indicated in Refs. <cit.>, the internal mode is a localised solution of the perturbed eigenvalue problem

[ 0 ℒ̂_0^0 + Tℒ̂_0^1; ℒ̂_1^0 + Tℒ̂_1^1 0 ] Φ_in = (ω − T^2κ^2) Φ_in,

with κ a real parameter. The solution of the eigenvalue problem in Eq. (<ref>) is given by a combination of the ψ^± functions,

Φ_in(x) = ∫ dκ [f^+(κ) ψ^+(x,κ) + f^−(κ) ψ^−(x,κ)],

where the coefficients f^+, f^− are given by orthogonality relations (see Ref. <cit.>) and the functions ψ^± = [ψ_1^±; ψ_2^±] are eigenvectors of the eigenvalue problem

[ 0 ℒ̂_0^0; ℒ̂_1^0 0 ] ψ^± = ±(ω + κ^2) ψ^±.

As reported in Ref. <cit.>, the condition for the existence of the internal mode is κ > 0, where κ is calculated as

κ = −(1/4) ∫ dx [ψ_1^+(−x,0) ℒ̂_1^1 ψ_1^+(x,0) + ψ_2^+(−x,0) ℒ̂_0^1 ψ_2^+(x,0)].

For a symmetric solution ψ^± and the conditions

2Sφ_0φ_1 > 1,  6Sφ_0φ_1 > 1,

the parameter κ becomes positive, and the extended time-dependent NLSE given by Eq. (<ref>) then admits internal modes.

[Holstein] T. Holstein, Studies of polaron motion. Part I. The molecular-crystal model, Ann. Phys. 8, 325-342 (1959); ibid. 8, 343-389 (1959).
[teller] S. Cruz y Cruz, S. Kuru and J. Negro, Classical motion and coherent states for Pöschl-Teller potentials, Phys. Lett. A 372, 1391-1405 (2008); G. Pöschl and E. Teller, Bemerkungen zur Quantenmechanik des anharmonischen Oszillators, Z. Phys. 83, 143-151 (1933).
[Turkevich] L. A. Turkevich and T. D. Holstein, Small-oscillation theory of the one-dimensional large optic polaron, Phys. Rev. B 35, 7474-7484 (1987).
[Kopidakis] G. Kopidakis, C. M. Soukoulis and E. N. Economou, Electron-phonon interaction, localization, and polaron formation in one-dimensional systems, Phys. Rev. B 51, 15038-15052 (1995).
[wang] M. Wang, X. Li and J. Zhang, The (G'/G)-expansion method and travelling wave solutions of nonlinear evolution equations in mathematical physics, Phys. Lett. A 372, 417-423 (2008).
[Kudryashov] N. A. Kudryashov, A note on the (G'/G)-expansion method, Appl. Math. Comput. 217, 1755-1758 (2010).
[li] X. Li and M. Wang, The (G'/G)-expansion method and travelling wave solutions for a higher-order nonlinear Schrödinger equation, Appl. Math. Comput. 208, 440-445 (2009).
[Abramowitz] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover Publications, New York, 1965; reprinted 1992).
[Zakharov] V. E. Zakharov and L. A. Ostrovsky, Modulation instability: The beginning, Physica D 238, 540-548 (2009).
[Pavlenko] V. Pavlenko and V. Petviashvili, Stability of the nonlinear periodic waves in plasma, Journal de Physique Colloques 40 (C7), 621-622 (1979).
[pelinovsky] D. E. Pelinovsky, Y. S. Kivshar and V. V. Afanasjev, Internal modes of envelope solitons, Physica D 116, 121-142 (1998).
[doktorov] E. V. Doktorov, Internal vibrations of nonlocal nonlinear Schrödinger soliton, Phys. Lett. A 374, 247-251 (2009).
New Boundary-Driven Twist States in Systems with Broken Spatial Inversion Symmetry

Kjetil M. D. Hals and Karin Everschor-Sitte
Institute of Physics, Johannes Gutenberg University, 55128 Mainz, Germany; Faculty of Engineering and Science, Western Norway University of Applied Sciences, NO-6803 Førde, Norway

A full description of a magnetic sample includes a correct treatment of the boundary conditions (BCs). This is in particular important in thin film systems, where even bulk properties might be modified by the properties of the boundary of the sample. We study generic ferromagnets with broken spatial inversion symmetry and derive the general micromagnetic BCs of a system with Dzyaloshinskii-Moriya interaction (DMI). We demonstrate that the BCs require the full tensorial structure of the third-rank DMI tensor and not just the antisymmetric part, which is usually taken into account. Specifically, we study systems with C_∞v symmetry and explore the consequences of the DMI. Interestingly, we find that already in the simplest case of a ferromagnetic thin film the DMI leads to a purely boundary-driven magnetic twist state at the edges of the sample. The twist state represents a new type of DMI-induced spin structure, which is completely independent of the internal DMI field. We estimate the size of the texture-induced magnetoresistance effect to be in the range of that of domain walls.

Over the past few years, there has been increasing interest in magnets where interface-induced phenomena play a major role <cit.>. This includes the topics of magnetic heterostructures as well as thin films, where the main effects arise from the sample's boundary. Therefore, a rigorous understanding of the physical boundary conditions (BCs) is needed. The strong spin-orbit coupling (SOC) and broken spatial inversion symmetry of these nanostructures lead to an intricate interplay between spin, charge, and orbital degrees of freedom, which affects the magnetic equilibrium state as well as current-driven spin phenomena. Important examples of the SOC effects include current-driven spin-orbit torques <cit.>, charge pumping via magnetization precession <cit.>, and the formation of topologically nontrivial skyrmion textures <cit.> and chiral domain walls <cit.>, as well as the multiferroic behaviour of chiral magnets <cit.> and the ferroelectricity of magnetic textures <cit.>. The underlying mechanism responsible for chiral skyrmions and domain walls is the Dzyaloshinskii-Moriya interaction (DMI) <cit.>. The DMI is a relativistic magnetic exchange interaction that originates from broken spatial inversion symmetry. Phenomenologically, the DMI is modeled by a free-energy density term, which is linear in the spatial variations of the magnetization. In its most general form, the term can be written as

ℱ_D = D_ijk m_i ∂_j m_k,

as discussed explicitly in, for example, Landau & Lifshitz, Ref. <cit.>. Here, m is a unit vector pointing along the magnetization M = M_s m, and D_ijk is the DMI tensor, which is linear in the relativistic interactions. The particular form of the DMI tensor is determined by the point group of the system. Here, and in what follows, we use the summation convention over repeated indices.
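To make the tensor contraction concrete, the following short sketch (our addition; the tensor entries and grid are arbitrary placeholders) evaluates the energy density ℱ_D = D_ijk m_i ∂_j m_k on a discretized magnetization field with a single einsum call:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized unit magnetization field m_i(r) on a 16^3 grid; axis 0 is i.
m = rng.normal(size=(3, 16, 16, 16))
m /= np.linalg.norm(m, axis=0)                    # enforce |m| = 1 pointwise

# grad[j, k, ...] approximates d_j m_k via central differences.
grad = np.stack(np.gradient(m, 1.0, axis=(1, 2, 3)), axis=0)

D = rng.normal(size=(3, 3, 3))                    # placeholder DMI tensor D_ijk

# Local DMI energy density F_D(r) = D_ijk m_i d_j m_k at every grid point.
F_D = np.einsum('ijk,i...,jk...->...', D, m, grad)
print(F_D.shape, F_D.mean())
```

The same contraction pattern applies unchanged once a concrete, symmetry-constrained D_ijk (such as the C_∞v form derived below) is substituted.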
To avoid confusion with the frequently used terminology of denoting Bloch (Néel) DMI as bulk (surface)-induced DMI, we will denote the bulk part of the sample as "internal" and the surface as "boundary". In the present work, we investigate how the DMI affects the magnetic equilibrium state at the boundary of the sample. Contrary to the internal DMI field, which depends only on the antisymmetric part ∼ D_ijk − D_kji of the DMI tensor and which was already discussed in previous works <cit.>, we find that a correct treatment of the micromagnetic BCs requires the full tensorial structure of the DMI. Our general boundary conditions comprise novel spin phenomena, such as purely boundary-driven twist states along the high symmetry axis, which we discuss below. Because boundary effects can dominate the physics of thin samples <cit.>, we expect our results to be important for thin films, in which the new BCs might even exceed the effects of the internal DMI field. As an important example we show the result for C_∞v systems, for the following reasons: i) they describe well the physics of ferromagnetic heterostructures such as polycrystalline Pt/Co/AlO_x systems <cit.>, and ii) they show that even in such high-symmetry systems, where many DMI tensor elements vanish, the symmetric DMI leads to an effect. Importantly, we find that three independent tensor elements determine the BCs in C_∞v systems, whereas only a single parameter is required to model the internal DMI field. The consequences of the novel BCs become apparent already in a very simple example of a thin-film ferromagnet with C_∞v symmetry subject to an out-of-plane magnetic field. Here, the DMI-induced BCs lead to a non-negligible twist state at the edges. We give an estimate for the twist-state-induced magnetoresistance, showing that it could be observed via standard magnetoresistance measurements. The boundary-driven spin phenomena become of particular importance in magnetic nanostructures, in which the boundary effects strongly influence the internal magnetic structure of the system. Implementing the full DMI-induced BCs will therefore be of crucial importance for a correct micromagnetic modeling of magnetic nanostructures, which represents an essential tool for exploring future spintronic devices.

We consider a ferromagnet with broken spatial inversion symmetry and SOC, which covers a finite region. The magnetic system is assumed to be far below the Curie temperature, so that longitudinal variations of the magnetization can be disregarded. In this case, the local magnetization M(r) = M_s m(r) is fully determined by the unit vector m(r), which represents the local direction of the magnetization. Phenomenologically, the magnetic system is described by the free energy functional, which up to second order in the magnetization gradients is given by <cit.>

F[m] = ∫ dr [ℱ_e + ℱ_D + ℱ_h + ℱ_a].

Here, ℱ_e is the symmetric magnetic exchange interaction, ℱ_e = J_ij ∂_i m · ∂_j m, where J_ij is a symmetric positive definite matrix parameterizing the spin stiffness; ℱ_h = −M_s m·H_h describes the coupling to an external magnetic field H_h; and ℱ_a represents the anisotropy energy, including dipolar interactions.
The tensorial forms of the exchange and DMI are determined by the symmetry relations <cit.>

J_ij = ℛ^(α)_il ℛ^(α)_jm J_lm,
D_ijk = ℛ^(α)_il ℛ^(α)_jm ℛ^(α)_kn D_lmn,

where {ℛ^(α) | α = 1, 2, ...} are the generators of the system's point group. Note that the tensor coefficients D_ijk vanish for symmetry groups containing the inversion operator ℛ_ij = −δ_ij, as DMI only exists in systems with spatially asymmetric SOC. In the literature, different notations are also used for the DMI, which we summarize and relate to our notation in the Supplemental Material. The equilibrium state of the magnetic system is found by a variational minimization of the free energy functional (<ref>) with respect to small variations δm(r) of the local magnetization direction. Due to the normalization m·m = 1, the variation is constrained by the condition δm(r)·m(r) = 0. Consequently, the local perturbation can be written as δm(r) = m(r)×δφ(r), where δφ ∈ ℝ^3 with |δφ| ≪ 1. The equilibrium condition is determined by δF[m(r)]/δφ(r) = 0, resulting in

0 = m×[2J_ij ∂_i∂_j m + H_D + H_a + H_h],
0 = m×[2J_ij n_i ∂_j m + Γ_D],

one equation for the inner part of the sample (Eq. (<ref>)) and one for its boundary (Eq. (<ref>)). The latter equation originates from the partial derivatives in the free energy functional, which lead to surface integrals when the functional is varied with respect to m. Consequently, the magnetic anisotropy and external magnetic field do not contribute to Eq. (<ref>), but only enter the equation for the inner part of the sample via the effective fields H_h and (H_a)_k = −∂ℱ_a/∂m_k, respectively. In contrast, the exchange interaction and the DMI enter both equations. In particular, the DMI enters via the internal DMI field H_D and the boundary-induced DMI field Γ_D:

(H_D)_k = (D_ijk − D_kji) ∂_j m_i,
(Γ_D)_k = m_i n_j D_ijk.

Here, n is the outer surface normal of the boundary. Eq. (<ref>) is satisfied when 2J_ij n_i ∂_j m cancels the components of Γ_D that are perpendicular to m. This leads to the main result of this paper, the general condition for the magnetization at the boundary of the sample:

2J_ij n_i ∂_j m = −m×(Γ_D×m).

Note that for isotropic spin stiffness J_ij = Jδ_ij and without DMI, Eq. (<ref>) reduces to the standard Neumann BC n_i ∂_i m = 0, which forces the magnetization to have no spatial gradients across the boundary. Remarkably, the internal DMI field H_D in Eq. (<ref>) is determined only by the antisymmetric part D_ijk − D_kji of the DMI tensor, whereas Γ_D depends on the full tensorial structure of the DMI. This means that the BCs are in general parametrized by more DMI parameters than the internal field. Below, we show this explicitly for C_∞v systems, and demonstrate that the DMI can produce purely boundary-induced spin textures that are independent of the internal DMI field. Systems with C_∞v symmetry are invariant under any proper or improper rotation about the high symmetry axis, which we take in the present work to be along the z direction. The C_∞v symmetry implies that the spin-stiffness matrix is determined by two independent tensor coefficients, J_xx = J_yy and J_zz, whereas four independent tensor coefficients govern the DMI tensor: D_xxz = D_yyz, D_xzx = D_yzy, D_zxx = D_zyy, and D_zzz. The remaining tensor coefficients vanish by symmetry.
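This counting can be verified numerically. The sketch below (our addition) imposes the symmetry relation above for two assumed generators of C_∞v (a rotation about z through a generic angle, and the mirror x → −x) and computes the dimension of the invariant subspace of third-rank tensors; a generic angle suffices because the linear constraint it imposes already enforces invariance under all rotations about z:

```python
import numpy as np

theta = 0.7                          # generic rotation angle about z
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Mx = np.diag([-1.0, 1.0, 1.0])       # vertical mirror plane x -> -x

# Constraint (M(R) - I) vec(D) = 0 with M(R)_{ijk,lmn} = R_il R_jm R_kn.
rows = []
for R in (Rz, Mx):
    M = np.einsum('il,jm,kn->ijklmn', R, R, R).reshape(27, 27)
    rows.append(M - np.eye(27))
A = np.vstack(rows)

_, sing, Vt = np.linalg.svd(A)
null_dim = int(np.sum(sing < 1e-10))
print("independent DMI coefficients:", null_dim)   # expected: 4

# Components allowed to be nonzero: D_xxz=D_yyz, D_xzx=D_yzy, D_zxx=D_zyy, D_zzz.
nz = np.any(np.abs(Vt[-null_dim:]) > 1e-8, axis=0).reshape(3, 3, 3)
for i, j, k in zip(*np.nonzero(nz)):
    print("D_" + "xyz"[i] + "xyz"[j] + "xyz"[k])
```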
Here, the superscript S (A) labels the part of D_ijk that is symmetric (antisymmetric) with respect to the magnetic indices i and k. Hence, the DMI free energy density ℱ_D, the internal DMI field H_D, and the boundary-induced DMI field Γ_D are:

ℱ_D = D_1^A [m_z(∇·m) − m·∇m_z] + (D_23 − 2D_1^S) m_z ∂_z m_z + D_1^S ∇·(m_z m),
H_D = 2D_1^A [∇m_z − (∇·m)ẑ],
Γ_D = (D_1^S + D_1^A) m×(n×ẑ) + D_2^S n_z m + [2D_1^S (n_x m_x + n_y m_y) + D_23 n_z m_z] ẑ.

Here, we have introduced D_23 = D_3^S − D_2^S, because ℱ_D is governed by only three of the four independent tensor parameters due to |m| = 1. Note that only the first term of ℱ_D, representing the standard Néel DMI <cit.>, yields the internal DMI field H_D. The latter is controlled solely by the antisymmetric tensor element D_1^A and agrees with the effective DMI field considered in previous studies of C_∞v ferromagnets <cit.>. The last two terms of ℱ_D can be rewritten into boundary terms via the divergence theorem. The boundary terms do not contribute to the internal DMI field H_D, but they do contribute to the BC, which is characterized by all three independent parameters. Projecting out the component of Γ_D that is perpendicular to m yields the following BC for the magnetization:

2J_ij n_i ∂_j m = (D_1^S + D_1^A) m×(ẑ×n) − [2D_1^S (n_x m_x + n_y m_y) + D_23 n_z m_z] m×(ẑ×m).

Usually, the DMI in C_∞v ferromagnets is parameterized by only a single parameter (D_1^A in our notation). While the antisymmetric parameter D_1^A does give the correct internal DMI field [cf. Eq. (<ref>)], this is not the case for the BC in Eq. (<ref>). D_1^A only produces parts of the term ∼ m×(n×ẑ) in the surface field Γ_D, whereas the component of Γ_D proportional to m×(ẑ×m) is solely an effect of the tensor coefficients D_1^S and D_23. To conclude, we find that for C_∞v ferromagnets only one DMI parameter is necessary to capture the physics of the internal field, whereas three independent parameters are required to provide a correct micromagnetic description at the sample edges.

We demonstrate below that the terms proportional to the symmetric DMI tensor coefficients in Eq. (<ref>) are not negligible and can lead to magnetic twist states that cannot be described phenomenologically without taking into account the full tensorial form of D_ijk. In particular, we show that there is a simple homogeneous ferromagnetic phase where the usually exclusively considered antisymmetric part of the DMI vanishes in the BC, but the last term of Eq. (<ref>) induces a twist state. We estimate that the typical decay length of this twist state is about 6 nm, which produces a magnetization gradient and an experimentally observable magnetoresistance effect that is larger than what is typically observed for magnetic domain walls.

To study this new twist state, we consider an infinitely large thin film of thickness 2d with the surface normal parallel to the high symmetry axis; see Fig. <ref>a. The magnet has an easy-plane anisotropy described by the free energy density ℱ_a = K_u m_z^2, where K_u > 0. Additionally, it is subject to an externally applied magnetic field along z, which produces the Zeeman energy ℱ_h = h_z m_z. The DMI is a relativistic effect and thus much smaller than the spin stiffness produced by the Coulomb interaction. In the following, we assume that the DMI is small enough that the magnetic system remains mainly in a ferromagnetic phase, with the magnetization tilted by an angle ψ out of the easy plane (see Fig. <ref>a).
The system is translationally invariant in the xy-plane and rotationally symmetric about z. Therefore, we can choose a coordinate system in which m lies in the xz-plane:

m(z) = (cos ψ(z), 0, sin ψ(z)),

so that the magnetization is fully determined by the tilt angle ψ = ψ(z), which may still vary along the high symmetry axis. Because we consider an infinite film, the surfaces along the x and y axes can be disregarded, and Eq. (<ref>) reduces to

2J_zz ∂_z m = D_23 m_z [m×(ẑ×m)].

Substituting Eq. (<ref>) into the free energy and minimizing the resulting energy functional with respect to ψ results in the following equations for the interior and boundary of the sample:

∂_z̃^2 ψ = K̃_u sin(2ψ) − h̃_z cos(ψ),
∂_z̃ ψ|_z̃=±1 = D̃ sin(2ψ).

Here, we have rescaled the z-axis by z̃ = z/d, and introduced the dimensionless parameters K̃_u ≡ K_u d^2/(2J_zz), h̃_z ≡ h_z d^2/(2J_zz), and D̃ ≡ D_23 d/(4J_zz), controlling the strengths of the anisotropy, external field, and boundary DMI, respectively. Note that in our specific example H_D vanishes. Consequently, no magnetic textures are produced by the equation for the inner part of the sample. The solution of the full boundary value problem (<ref>) is, however, determined by the three dimensionless parameters K̃_u, h̃_z and D̃, and does depend on the boundary DMI field and the thickness of the film. In general, the thinner the film, the more the system is governed by the BC. In the following, we show that even for larger thicknesses the influence of the BC is not negligible.

To treat the problem analytically, we assume in the following that D̃ is a small parameter and solve Eq. (<ref>) to first order in D̃. To this end, we consider a solution of the form

ψ(z̃) = ψ_0 + D̃ψ_1(z̃),

where ψ_0 is the constant solution for D̃ = 0, whereas ψ_1 represents the spatially varying DMI-induced correction to this solution. Substituting this ansatz into Eq. (<ref>), we find that ψ_0 is given by sin(ψ_0) = h_z/(2K_u) for |h_z/2K_u| < 1, while ψ_1 is determined by the boundary value problem

∂_z̃^2 ψ_1 = κ_1 ψ_1,
∂_z̃ ψ_1|_z̃=±1 = sin(2ψ_0),

where the dimensionless parameter κ_1 = κ_1(ψ_0) ≡ 2K̃_u cos(2ψ_0) + h̃_z sin(ψ_0) depends on the zeroth-order approximation of the tilting and on the thickness of the film, κ_1 ∼ d^2. The analytical solution of the boundary value problem is

ψ_1(z̃) = sech(√κ_1) [sin(2ψ_0)/√κ_1] sinh(√κ_1 z̃).

As expected, the modulation along the z direction, ψ_1(z), vanishes for ψ_0 = 0 and |ψ_0| = π/2, as m_z m×(ẑ×m) is zero in both cases. The functional form of D̃ψ_1(z̃) is shown in Fig. <ref> for different values of the film thickness 2d. Here, we have assumed the parameter values J_zz = 10^{-11} J/m, K_u = 5.1×10^5 J/m^3, D_23 = 1.9×10^{-3} J/m^2, and h_z = √2 K_u, which are characteristic of ferromagnetic heterostructures <cit.>. In this case, the constant tilt angle induced by the zeroth-order approximation for the inner part of the sample is ψ_0 = π/4. As the internal magnetic state is unaffected by the BCs in the limit of large film thickness, d → ∞, the boundary-induced correction D̃ψ_1(z̃) vanishes in the inner part of the sample, as expected. However, at the boundaries, D̃ψ_1(±1) still gives a correction and approaches the constant value ±D̃ sin(2ψ_0)/√κ_1 ≈ 0.19×(π/2) for d → ∞; see Fig. <ref>b. Note that the parameter D̃/√κ_1 is independent of the thickness, because both D̃ and √κ_1 are linear in d.
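As a cross-check of the perturbative solution (our addition, not part of the paper), the full nonlinear boundary value problem can be integrated directly; the sketch below uses SciPy's collocation solver with the parameter values quoted above (SI units assumed) and an assumed half-thickness of d = 10 nm:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Material parameters (SI units as quoted in the text).
Jzz, Ku, D23 = 1e-11, 5.1e5, 1.9e-3
hz = np.sqrt(2.0) * Ku
d = 10e-9                               # half thickness d (film thickness 2d)

Kt = Ku * d**2 / (2.0 * Jzz)            # dimensionless anisotropy
ht = hz * d**2 / (2.0 * Jzz)            # dimensionless field
Dt = D23 * d / (4.0 * Jzz)              # dimensionless boundary DMI

def rhs(z, y):                          # y = (psi, psi')
    return np.vstack([y[1], Kt * np.sin(2.0 * y[0]) - ht * np.cos(y[0])])

def bc(ya, yb):                         # psi'(+-1) = Dt * sin(2 psi)
    return np.array([ya[1] - Dt * np.sin(2.0 * ya[0]),
                     yb[1] - Dt * np.sin(2.0 * yb[0])])

z = np.linspace(-1.0, 1.0, 101)
psi0 = np.pi / 4.0                      # zeroth-order tilt for hz = sqrt(2) Ku
sol = solve_bvp(rhs, bc, z, np.vstack([np.full_like(z, psi0), np.zeros_like(z)]))

# First-order analytic solution: psi0 + Dt * sech(sqrt(k1)) sin(2 psi0)/sqrt(k1) sinh(sqrt(k1) z).
k1 = 2.0 * Kt * np.cos(2.0 * psi0) + ht * np.sin(psi0)
psi_pert = psi0 + Dt * np.sin(2.0 * psi0) / (np.cosh(np.sqrt(k1)) * np.sqrt(k1)) \
           * np.sinh(np.sqrt(k1) * z)
print("max deviation:", np.max(np.abs(sol.sol(z)[0] - psi_pert)))   # O(Dt^2)
```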
The typical decay length of the twist state is d/√κ_1 ∼ 6 nm, implying that the interface gives a significant correction to the internal spin structure in systems with thicknesses of 12 nm or less. For such thicknesses, the tilt angle approaches the solution ψ ≈ ψ_0 + D̃ sin(2ψ_0) z̃, which represents a Néel helix (see Fig. <ref>b). The parameter λ = π/|D̃∂_z̃ψ_1| (evaluated at z̃ = ±1) characterizes the strength of the magnetization gradient and corresponds to the width of a magnetic domain wall. For the above parameter values, we find λ ∼ 7×10^{-8} m. This yields a magnetization gradient that is comparable to the typical spatial variations of magnetic domain walls (see Ref.  for a review). Since domain walls give observable corrections on the order of 1% to the resistance of ferromagnetic systems <cit.>, we expect this also to be the case for the boundary-driven twist states discovered in this work.

The texture-induced resistance has been extensively studied in the context of domain walls, where the spatially varying magnetization has been shown to produce a correction to the local resistivity proportional to (∂_i m)^2 in the diffusive regime <cit.>. In addition, there are corrections from the anisotropic magnetoresistance (AMR) effect <cit.>. This contribution can be disregarded for currents applied perpendicular to the magnetic texture (i.e., along the y-axis in our case), because the AMR depends only on the relative angle between the applied current and the magnetization. Therefore, for currents along the y direction we can phenomenologically model the local resistivity by

ρ(z̃) = ρ_0 + δρ_t (∂_z̃ψ(z̃))^2.

Here, ρ_0 is the resistance for currents perpendicular to the magnetization in the absence of any twist state, δρ_t parameterizes the resistivity caused by scattering off the magnetic texture <cit.>, and (∂_z̃ψ(z̃))^2 determines the magnitude of the magnetization gradient (∂_i m)^2. The total conductance of a system with lateral dimensions L_⊥ × L_∥ × 2d is found by integrating dG = [L_⊥ d/(L_∥ ρ(z̃))] dz̃ over the thin film, z̃ ∈ [−1, 1]. Here, L_∥ (L_⊥) denotes the length of the sample parallel (perpendicular) to the current. By substituting the perturbative solution (<ref>) into the resistivity (<ref>) and assuming |δρ_t/ρ_0| ≪ 1, one finds to linear order in δρ_t/ρ_0 the total conductance

G = (1/R_0) [1 − (δρ_t/ρ_0)(D̃^2/2) f(ψ_0)],

where R_0 = L_∥ ρ_0/(2dL_⊥). The second term represents the texture-induced correction, with f(ψ_0) ≡ sin^2(2ψ_0) sech^2(√κ_1) (1 + sinh(2√κ_1)/(2√κ_1)), where κ_1 is a function of ψ_0 as before. The magnetoresistance can be controlled by changing the tilt angle ψ_0, which can be manipulated experimentally by varying the strength of the applied magnetic field h_z. The ψ_0-dependence of the function f is illustrated in the inset of Fig. <ref>b. The f(ψ_0) modulation represents a clear signature of the texture-induced magnetoresistance effect, and can be used to distinguish this effect from other resistance phenomena in transport measurements. Based on previous works on domain wall resistance, the twist state is expected to produce a small but observable correction to the total resistance on the order of 1%.
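The f(ψ_0) modulation is easy to tabulate. This sketch (our addition) evaluates the relative texture-induced conductance correction per unit δρ_t/ρ_0, using that the applied field fixes sin ψ_0 = h_z/(2K_u), and assuming d = 10 nm:

```python
import numpy as np

# Parameters as above (SI units); d is the film half thickness.
Jzz, Ku, D23, d = 1e-11, 5.1e5, 1.9e-3, 10e-9
Kt = Ku * d**2 / (2.0 * Jzz)
Dt = D23 * d / (4.0 * Jzz)

def f(psi0):
    # f(psi0) = sin^2(2 psi0) sech^2(sqrt(k1)) (1 + sinh(2 sqrt(k1))/(2 sqrt(k1)))
    ht = 2.0 * Kt * np.sin(psi0)        # field tuned so that sin(psi0) = hz/(2 Ku)
    k1 = 2.0 * Kt * np.cos(2.0 * psi0) + ht * np.sin(psi0)
    sk = np.sqrt(k1)
    return np.sin(2.0 * psi0)**2 / np.cosh(sk)**2 * (1.0 + np.sinh(2.0 * sk) / (2.0 * sk))

psi0 = np.linspace(0.01, np.pi / 2 - 0.01, 200)   # avoid k1 = 0 at psi0 = pi/2
dG_over_G = -0.5 * Dt**2 * f(psi0)      # relative conductance change per (drho_t/rho_0)
print("largest relative correction:", dG_over_G.min())
```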
To conclude, we have derived the boundary value problem for generic ferromagnets lacking spatial inversion symmetry. We have shown that the BCs require the full tensorial structure of the DMI tensor and not just the antisymmetric part. We have specified the boundary value problem for ordinary systems with C_∞v symmetry. Moreover, we have given an explicit example of a simple ferromagnetic thin film to demonstrate the importance of the correct BCs we have derived. Here, we have shown that the DMI leads to a purely boundary-driven magnetic twist state at the edges of the sample, which is completely independent of the internal DMI field. We have shown that such a twist state could be observed by conductance measurements upon varying the magnetic field. Overall, already our simple example highlights the importance of the correct treatment of the BCs, which is notably important for the predictive power of micromagnetic simulations in confined geometries, specifically in those where boundary-induced effects might influence or even dominate the bulk properties.

We are grateful to M. Sitte, A. Bogdanov, A. Leonov, M. Garst and J. Sinova for discussions. We further thank M. Sitte for careful reading of the manuscript. We acknowledge funding from the German Research Foundation (DFG) under Project No. EV 196/2-1.

§ SUPPLEMENT
The most general form of the DMI interaction is given by

ℱ_d = D_ijk m_i ∂_j m_k,

where the form of the DMI tensor is determined by the system's point group. In the literature, however, different notations are also used, which we briefly review here. For example, it is common to symmetrize the DMI tensor with respect to the magnetic indices:

ℱ_d ≡ (D_(ik)j/2) ∂_j(m_i m_k) + (D_[ik]j/2) [m_i ∂_j m_k − m_k ∂_j m_i],

where D_(ik)j = (D_ijk + D_kji)/2 and D_[ik]j = (D_ijk − D_kji)/2 represent the symmetric and antisymmetric parts of the tensor, respectively. Other common representations of the DMI term are given by

ℱ_d ≡ (D_(ik)j/2) ∂_j(m_i m_k) + D_j·(∂_j m×m) ≡ (D_(ik)j/2) ∂_j(m_i m_k) + D_ij ℒ_ij(m),

where D_j, D_ij = (D_j)_i, and ℒ_ij = (∂_j m×m)_i are the Dzyaloshinskii-Moriya (DM) vectors, the spiralization tensor, and the chirality tensor, respectively. Furthermore, the relationship between the DM vectors and the DMI tensor is given by (D_j)_μ = −ϵ_μik D_[ik]j/2. Note that upon spatial integration, the symmetric part of the DMI interaction reduces to a surface term. Therefore, it is common to neglect D_(ik)j and keep only the antisymmetric part (described, for example, in terms of the DM vectors or the spiralization tensor), which governs the DMI in the internal part of the sample. However, a correct description of the magnetization at the boundaries requires both the symmetric and antisymmetric parts.

[Hellman2016] F. Hellman et al., Rev. Mod. Phys. 89, 025006 (2017).
[Brataas2014] A. Brataas and K. M. D. Hals, Nat. Nanotechnol. 9, 86 (2014).
[Gambardella2011] P. Gambardella and I. M. Miron, Philos. Trans. R. Soc. A 369, 3175 (2011).
[Bernevig2005c] B. A. Bernevig and O. Vafek, Phys. Rev. B 72, 033203 (2005).
[Manchon2008] A. Manchon and S. Zhang, Phys. Rev. B 78, 212405 (2008).
[Chernyshov2009] A. Chernyshov, M. Overby, X. Liu, J. K. Furdyna, Y. Lyanda-Geller, and L. P. Rokhinson, Nat. Phys. 5, 656 (2009).
[Garate2009] I. Garate and A. H. MacDonald, Phys. Rev. B 80, 134403 (2009).
[Hals2010] K. M. D. Hals, A. Brataas, and Y. Tserkovnyak, Europhys. Lett. 90, 47002 (2010).
[Miron2010] I. M. Miron et al., Nat. Mater. 9, 230 (2010).
[Miron2011b] I. M. Miron et al., Nature 476, 189 (2011).
[Miron2011] I. M. Miron et al., Nat. Mater. 10, 419 (2011).
[Fang2011] D. Fang et al., Nat. Nanotechnol. 6, 413 (2011).
[Liu2012c] L. Liu, O. J. Lee, T. J. Gudmundsen, D. C. Ralph, and R. A. Buhrman, Phys. Rev. Lett. 109, 096602 (2012).
[Liu2012] L. Liu, C.-F. Pai, Y. Li, H. W. Tseng, D. C. Ralph, and R. A. Buhrman, Science 336, 555 (2012).
[Emori2013] S. Emori, U. Bauer, S.-M. Ahn, E. Martinez, and G. S. D. Beach, Nat. Mater. 12, 611 (2013).
[Ryu2013] K.-S. Ryu, L. Thomas, S.-H. Yang, and S. Parkin, Nat. Nanotechnol. 8, 527 (2013).
[Garello2013] K. Garello et al., Nat. Nanotechnol. 8, 587 (2013).
[Haazen2013] P. P. J. Haazen, E. Murè, J. H. Franken, R. Lavrijsen, H. J. M. Swagten, and B. Koopmans, Nat. Mater. 12, 299 (2013).
[Fan2013] X. Fan, J. Wu, Y. Chen, M. J. Jerry, H. Zhang, and J. Q. Xiao, Nat. Commun. 4, 1799 (2013).
[Kurebayashi2014] H. Kurebayashi et al., Nat. Nanotechnol. 9, 211 (2014).
[Emori2014] S. Emori et al., Phys. Rev. B 90, 184427 (2014).
[Tatara2013] G. Tatara, N. Nakabayashi, and K.-J. Lee, Phys. Rev. B 87, 054403 (2013).
[Ciccarelli2014] C. Ciccarelli et al., Nat. Nanotechnol. 10, 50 (2014).
[Hals2015] K. M. D. Hals and A. Brataas, Phys. Rev. B 91, 214401 (2015).
[Freimuth2015] F. Freimuth, S. Blügel, and Y. Mokrousov, Phys. Rev. B 92, 064415 (2015).
[Verba2015] R. Verba, V. Tiberkevich, and A. Slavin, Appl. Phys. Lett. 107, 112402 (2015).
[Muhlbauer2009a] S. Mühlbauer et al., Science 323, 915 (2009).
[Rossler2006] U. K. Rößler, A. N. Bogdanov, and C. Pfleiderer, Nature 442, 797 (2006).
[Yu2010] X. Z. Yu et al., Nature 465, 901 (2010).
[Jonietz2010] F. Jonietz et al., Science 330, 1648 (2010).
[Adams2011] T. Adams et al., Phys. Rev. Lett. 107, 217206 (2011).
[Heinze2011] S. Heinze et al., Nat. Phys. 7, 713 (2011).
[Yu2011b] X. Z. Yu et al., Nat. Mater. 10, 106 (2011).
[Tonomura2012] A. Tonomura et al., Nano Lett. 12, 1673 (2012).
[Huang2012] S. X. Huang and C. L. Chien, Phys. Rev. Lett. 108, 267201 (2012).
[Kanazawa2012] N. Kanazawa et al., Phys. Rev. B 86, 134425 (2012).
[Romming2013] N. Romming et al., Science 341, 636 (2013).
[Muller2016a] J. Müller, A. Rosch, and M. Garst, New J. Phys. 18, 065006 (2016).
[Thiaville2012] A. Thiaville, S. Rohart, É. Jué, V. Cros, and A. Fert, Europhys. Lett. 100, 57002 (2012).
[Rohart2013] S. Rohart and A. Thiaville, Phys. Rev. B 88, 184422 (2013).
[Pyatakov2015] A. Pyatakov, A. Sergeev, F. Mikailzade, and A. Zvezdin, J. Magn. Magn. Mater. 383, 255 (2015).
[Sergienko2006] I. A. Sergienko and E. Dagotto, Phys. Rev. B 73, 094434 (2006).
[Seki2012] S. Seki, X. Z. Yu, S. Ishiwata, and Y. Tokura, Science 336, 198 (2012).
[Rojac2016] T. Rojac et al., Nat. Mater. 16, 322 (2016).
[Dzyaloshinsky1958] I. Dzyaloshinsky, J. Phys. Chem. Solids 4, 241 (1958).
[Moriya1960a] T. Moriya, Phys. Rev. Lett. 4, 228 (1960).
[Landau1984] L. D. Landau, L. P. Pitaevskii, and E. M. Lifshitz, Electrodynamics of Continuous Media, Course of Theoretical Physics Vol. 8 (Pergamon, Oxford, 1984).
[Meynell2014] S. A. Meynell, M. N. Wilson, H. Fritzsche, A. N. Bogdanov, and T. L. Monchesky, Phys. Rev. B 90, 014406 (2014).
[Rybakov2013] F. N. Rybakov, A. B. Borisov, and A. N. Bogdanov, Phys. Rev. B 87, 094424 (2013).
[Wilson2013] M. N. Wilson et al., Phys. Rev. B 88, 214420 (2013).
[Leonov2015] A. O. Leonov et al., Phys. Rev. Lett. 117, 087202 (2016).
[Leonov2016c] A. O. Leonov and M. Mostovoy, Nat. Commun. 8, 14394 (2017).
[Birss1964] R. R. Birss, Symmetry and Magnetism (North-Holland, Amsterdam, 1964).
[Moreau-Luchaire2016] C. Moreau-Luchaire et al., Nat. Nanotechnol. 11, 444 (2016).
[Kent1999] A. D. Kent, U. Rüdiger, J. Yu, L. Thomas, and S. S. P. Parkin, J. Appl. Phys. 85, 5243 (1999).
[Brataas1999a] A. Brataas, G. Tatara, and G. E. W. Bauer, Phys. Rev. B 60, 3406 (1999).
[Chiba2006] D. Chiba, M. Yamanouchi, F. Matsukura, T. Dietl, and H. Ohno, Phys. Rev. Lett. 96, 096602 (2006).
Submitted to IEEE Transactions on Automatic Control in May 2017

Energy-Preserving and Passivity-Consistent Numerical Discretization of Port-Hamiltonian Systems

Elena Celledoni and Eirik Hoel Høiseth

This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 691070, and from The Research Council of Norway. Elena Celledoni is with the Department of Mathematical Sciences at the Norwegian University of Science and Technology, Trondheim, 7491, Norway (e-mail: [email protected]). Eirik H. Høiseth (corresponding author) is with the Department of Mathematical Sciences at the Norwegian University of Science and Technology, Trondheim, 7491, Norway (e-mail: [email protected]).

In this paper we design discrete port-Hamiltonian systems systematically in two different ways, by applying discrete gradient methods and splitting methods, respectively. The discrete port-Hamiltonian systems we obtain satisfy a discrete notion of passivity, which lets us, by choosing the input appropriately, make them globally asymptotically stable with respect to an equilibrium point. We test methods designed using the discrete gradient approach in numerical experiments, and the results are encouraging when compared to relevant existing integrators of identical order.

Asymptotic stability, discrete gradient methods, discrete port-Hamiltonian systems, energy balance, geometric numerical integration, interconnection, numerical integration methods, passivity, splitting methods, structure preserving algorithms.

§ INTRODUCTION
Port-Hamiltonian systems are a recent and increasingly popular approach to modelling complex physical and engineering systems. This approach merges network theory with geometry and control. From network theory comes the concept of port-based modelling, which allows for the modelling of complex systems spanning multiple physical domains. This is done by viewing the full system as a set of a (possibly large) number of simple ideal subsystems that are interconnected and communicate through the exchange of energy. Paynter pioneered this approach in <cit.>.

From geometric mechanics comes a focus on the underlying geometric structure of the system; see <cit.><cit.><cit.>. Port-Hamiltonian systems represent a generalization of traditional Hamiltonian mechanics. Unlike traditional Hamiltonian mechanics, where the key geometry is that the phase space is endowed with a symplectic structure, the geometry of port-Hamiltonian systems comes from the interconnection structure of the system. The appropriate structure then appears to be a Dirac structure, a generalization of both symplectic and Poisson structures, first introduced in <cit.>. Its use in port-based modelling was first explored in <cit.><cit.>.
An essential property of Dirac structures is that their appropriate composition again constitutes a Dirac structure. This ensures that interconnecting multiple port-Hamiltonian systems into a larger such system preserves this geometry. Port-Hamiltonian systems can interact with their environment, and consequently the theory of control systems features prominently. For our purposes the relevant example is interaction through inputs and outputs. Port-Hamiltonian systems can also be viewed as a technique for control design <cit.><cit.><cit.>, e.g. by shaping the system energy or viewing controllers as virtual system components. A thorough introduction to port-Hamiltonian systems can be found in <cit.>.

In this paper we are concerned with the preservation of the remarkable properties of port-Hamiltonian systems under numerical discretisation. We focus in particular on the energy balance and on the stability under interconnection. We will see that these properties are not automatically satisfied when replacing a continuous port-Hamiltonian system with its discrete counterpart obtained by applying a numerical discretisation method, and we propose two numerical approaches that guarantee this preservation. In geometric numerical integration, one seeks numerical integration methods preserving the structure of the flow one wishes to integrate <cit.>. For Hamiltonian mechanics, particularly for the unconstrained case where the configuration space is linear, there is a rich theory of structure preserving integrators: notably symplectic integrators <cit.><cit.><cit.><cit.><cit.> and energy-preserving, symmetric integrators <cit.><cit.><cit.><cit.>.

For port-Hamiltonian systems, structure preserving integration is far less explored. We restrict ourselves to the class of input-state-output port-Hamiltonian systems, and propose two approaches to construct discrete port-Hamiltonian systems. Our discrete models arise from the structure-preserving integration of their continuous counterparts. We analyse these methods, focusing in particular on discrete energy-preserving and passivity-preserving interconnection of simpler systems. The structure-preserving (and in particular passivity-preserving) integration of these systems is of interest both from a theoretical perspective and in engineering applications. See <cit.> for an application of passivity preserving splitting methods to the control of marine vessels.

The structure of the paper is as follows. In Section <ref> we give relevant background theory on continuous input-state-output port-Hamiltonian systems, and how they can be interconnected. In Section <ref> we consider the problem of numerically discretizing such port-Hamiltonian systems while preserving a discrete analogue of passivity. This reduces to energy preservation when the input is zero. Section <ref> is devoted to numerical experiments. Finally we make some concluding remarks in Section <ref>. A higher order generalization of the method given in Section <ref> is derived in the Appendix.

§ BACKGROUND THEORY
From the perspective of geometric mechanics, an input-state-output port-Hamiltonian system may be naturally introduced as a generalization of a traditional Hamiltonian mechanical system. In the absence of dissipative elements the following system of ordinary differential equations (ODEs) constitutes an input-state-output port-Hamiltonian system:

ẋ = B(x)∇H(x) + G(x)u,  x(0) = x_0,
y = G(x)^T ∇H(x),

where x ∈ ℝ^n is the state, u ∈ ℝ^m the input and y ∈ ℝ^m the output.
Furthermore, B(x) is a skew-symmetric matrix (often, but not always, B(x) defines a Poisson bracket), H is the Hamiltonian function, and ∇H(x) is the gradient of H with respect to x. The input u is given as a function of t, x or y. We will usually take u = u(y), reflecting the intuitive notion that the input often can only depend on the observable part of the system. The uncontrolled system, ẋ = B(x)∇H(x), is assumed to have an isolated equilibrium point x = x^*. Since the change of coordinates x ↦ x − x^* will always move this equilibrium point to the origin, there is no loss of generality in taking x^* = 0.

§.§ Passivity
Consider initially the general system of differential equations

ẋ = f(x,u),  y = h(x),

with state x, input u and output y. Here f: ℝ^n × ℝ^m → ℝ^n is assumed to be locally Lipschitz with f(0,0) = 0, and h: ℝ^n → ℝ^m continuous with h(0) = 0. A common definition of passivity for such a system is the following from <cit.>:

The system (<ref>) is passive if there exists a continuously differentiable positive semidefinite function V(x), called the storage function, such that

u^T y ≥ V̇ = ∇V(x)^T f(x,u),  ∀(x,u) ∈ ℝ^n × ℝ^m.

The system is said to be lossless if u^T y = V̇. Integrating (<ref>) we get the integral version of this passivity inequality:

⟨y,u⟩_{L^2} := ∫_0^t y^T u ds ≥ V(t) − V(0),  ∀t ≥ 0, x(0) ∈ ℝ^n, u(s): [0,t] → ℝ^m.

We return to our port-Hamiltonian system (<ref>), which is of the form (<ref>). Differentiating the energy H with respect to time, we obtain a differential equation for H,

Ḣ = y^T u,

which states that the change in energy is equal to the work done by the external forces. This implies that the system (<ref>) is passive, specifically lossless, with H as the storage function. From the integral inequality (<ref>), a system is passive with respect to the energy H if it satisfies the inequality

⟨y,u⟩_{L^2} ≥ H(t) − H(0).

This means that such passive systems may consume and store energy, but are incapable of producing energy. For literature on the theory of passive systems see e.g. <cit.><cit.><cit.>. An important consequence of this property is that if a system is passive it is possible to achieve asymptotic stability of the system by adding appropriate damping. We will need the following two definitions from <cit.>:

A system of the form (<ref>) is zero-state observable if no solution of ẋ = B(x)∇H(x) can stay identically in the set {y(x) = 0} other than the trivial solution x(t) ≡ 0.

A function H: ℝ^n → ℝ is radially unbounded if H(x) → ∞ as ‖x‖ → ∞.

Asymptotic stability, through the addition of appropriate damping, is given by the following theorem:

If the passive system (<ref>) has a radially unbounded positive definite Hamiltonian H, and is zero-state observable, then the origin, x = 0, can be globally stabilized by the input choice u = −ϕ(y), where ϕ: ℝ^m → ℝ^m is a locally Lipschitz function with the additional properties ϕ(0) = 0 and y^T ϕ(y) > 0 for all y ≠ 0. See <cit.>.
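As a simple illustration of this damping injection (our sketch, not an example from the paper), take the port-Hamiltonian oscillator with H(q,p) = (q^2 + p^2)/2 + q^4/4, canonical B, and G = (0,1)^T, so that y = p; the input u = −ky then drives the state to the origin:

```python
import numpy as np

# Port-Hamiltonian oscillator: x = (q, p), H = (q^2 + p^2)/2 + q^4/4,
# B = [[0, 1], [-1, 0]], G = (0, 1)^T, so y = G^T grad H = p.
def gradH(x):
    q, p = x
    return np.array([q + q**3, p])

def H(x):
    q, p = x
    return 0.5 * (q**2 + p**2) + 0.25 * q**4

B = np.array([[0.0, 1.0], [-1.0, 0.0]])
G = np.array([0.0, 1.0])
k = 0.5                                    # damping gain, phi(y) = k*y

def f(x):
    y = G @ gradH(x)
    return B @ gradH(x) + G * (-k * y)     # damping injection u = -phi(y)

# Simple RK4 time stepping.
x, h = np.array([1.5, 0.0]), 0.01
for n in range(5000):
    k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
    x = x + h/6.0 * (k1 + 2*k2 + 2*k3 + k4)

print(x, H(x))    # both decay toward 0 as t grows
```

Note that the RK4 stepper above is for illustration of the continuous theory only; it does not itself satisfy a discrete energy balance, which is exactly the gap the discretizations proposed below are designed to close.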
A simple way to satisfy (<ref>) is to take u=-y_c, u_c=y. Using (<ref>) we obtain the larger system [[ẋ; γ̇]] = C(x,γ)[[∇H(x); ∇H_c(γ)]], C(x,γ) := [[B(x), -G(x)G_c(γ)^T; G_c(γ)G(x)^T, B_c(γ)]], with Hamiltonian H(x)+H_c(γ). The obtained system is port-Hamiltonian. Because of this property one says that port-Hamiltonian systems are stable under interconnection. Usually the first system is given, and the second system is designed to control the first one. This means that one should design H_c(γ) so that the new system is driven to the desired equilibrium state x^*, here again taken to be 0 without loss of generality. We note that the Casimirs of the larger system are also of importance for the purpose of control design. See for example <cit.> for details. We also mention that if the skew-symmetric structure matrix C(x,γ) also satisfies the Jacobi identity, then C(x,γ) defines a Poisson bracket. §.§ Interconnection and Generalized Dirac Structures Let 𝒬 be a smooth manifold and let T𝒬 and T^*𝒬 be its tangent and cotangent bundle respectively. Consider the smooth vector bundle over 𝒬, T𝒬⊕ T^*𝒬, with fibres T_x𝒬× T_x^*𝒬. A generalised Dirac structure on 𝒬 is a vector subbundle 𝒟⊂ T𝒬⊕ T^*𝒬 such that 𝒟=𝒟^⊥, where 𝒟^⊥ := {(e,f)∈ T𝒬⊕ T^*𝒬 | ⟨e,f'⟩ + ⟨e',f⟩=0, ∀(e',f')∈𝒟}, and ⟨·,·⟩ is the duality pairing between T_x𝒬 and T_x^*𝒬. If ℱ is an n-dimensional vector space, it can be shown that 𝒟⊂ℱ⊕ℱ^* is a (constant) Dirac structure on ℱ if and only if * ⟨e,f⟩=0 for all (e,f)∈𝒟, and * dim(𝒟)=n. Symplectic structures induce Dirac structures on 𝒬. Let us denote by 𝒳_H the Hamiltonian vector field with Hamiltonian H with respect to an almost-symplectic structure Ω on 𝒬 (Ω is a nondegenerate two-form on 𝒬 which is not necessarily closed), and let 𝒟_Ω := {(𝒳_H,dH)∈ T𝒬⊕ T^*𝒬 | Ω(𝒳_H,·)=dH, H:𝒬→ℝ}; then 𝒟_Ω is a generalized Dirac structure. In this sense Dirac structures are generalisations of symplectic structures. Let Ω̃ be the two-form associated with the skew-symmetric matrix C in (<ref>). The interconnection of the two port-Hamiltonian systems (<ref>) and (<ref>) under the condition (<ref>) gives a larger system with state variables X=(x,γ), with energy H̃=H+H_c. The couple (Ẋ,dH̃) such that Ẋ=C(X)∇H̃(X) belongs to the Dirac structure 𝒟_Ω̃ induced by Ω̃. § DISCRETE PORT-HAMILTONIAN SYSTEMS AND DISCRETE PASSIVITY BASED CONTROL In this section we propose a definition of discrete port-Hamiltonian systems, see also <cit.>. For the numerical discretization of port-Hamiltonian systems we will focus on two important aspects: the preservation of a discrete energy balance equation and the stability under interconnection. §.§ Discrete Energy Balance To start we consider a general discrete system x_n+1=Φ(x_n), with the given initial state x_0 ∈ℝ^n. A function V: ℝ^n ↦ℝ is called a (discrete) Lyapunov function for (<ref>) on a set S ⊂ℝ^n if it is continuous on ℝ^n and Δ V(x) := V(Φ(x))-V(x) ≤ 0 for all x ∈ S. We require the following discrete Invariance Principle from <cit.>: If V is a Lyapunov function for (<ref>) in S ⊂ℝ^n and the solution x_n of (<ref>) is in S and bounded, then there is a number c such that x_n → M ∩ V^-1(c) ≠∅, where M is the largest positively invariant set contained in the set E = {x ∈ℝ^n: Δ V= 0}∩S̅. Now let us consider a consistent numerical integration method x_n+1=Φ_h(x_n,u) of (<ref>) producing the approximations x_n≈ x(t_n) for t_n=nh, with h the step size of integration.
Clearly the solution will depend on the choice of input function u in (<ref>), which we here assume is given as a function of the output y, i.e. u = u(y). Assume the method x_n+1=Φ_h(x_n,u) produces m intermediate approximations of the output and of the input, Y_n := [y_n1,…,y_nm] and U_n := [u_n1,…,u_nm], with u_nj:=u(y_nj) and lim_h→ 0 y_nj = y(x(t_n)), lim_h→ 0 u_nj = u(y(x(t_n))), j = 1,…,m. We say the method satisfies a discrete energy balance equation if there exist positive weights b_j with ∑_j b_j = 1 such that ΔH(x_n):=H(x_n+1)-H(x_n) = ⟨Y_n,U_n⟩_L^2_h := h∑_j=1^m b_j y_nj^Tu_nj holds for arbitrary n and h. Note that lim_h → 0⟨Y_n,U_n⟩_L^2_h = ⟨y,u⟩_L^2. This property will be used to prove a discrete analogue of Theorem <ref>. Suppose the continuous system (<ref>) has a radially unbounded positive definite storage function H, and that the consistent numerical method x_n+1=Φ_h(x_n,u) for this system satisfies a discrete energy balance (<ref>). Furthermore assume that no solution sequence of x_n+1=Φ_h(x_n,0) gives zero output, i.e. {y_nj=0} for all n, j, except the trivial solution, x_n = 0, for all n (discrete zero-state observability). The origin, x=0, can then be globally stabilised with the choice of an appropriate control input u=-ϕ(y), where ϕ is a locally Lipschitz function such that ϕ(0)=0 and y^Tϕ(y)>0 for all y≠ 0. From the discrete energy balance, ΔH(x_n)=⟨Y_n,U_n⟩_L^2_h := h∑_j=1^m b_j y_nj^Tu_nj = -h∑_j=1^m b_j y_nj^Tϕ(y_nj) ≤ 0, where the last inequality follows from the properties of ϕ and the positivity of the weights b_j. Since H is continuous on ℝ^n it is a (discrete) Lyapunov function on ℝ^n for the discrete method x_n+1=Φ_h(x_n,u). In addition, because H is radially unbounded, it follows that all solutions of this discrete system are bounded. Consequently, from Proposition <ref>, x_n → M ≠∅, where M is the largest positively invariant set contained in the set E = {x ∈ℝ^n: Δ H(x) = 0}. Thus if M = {0} the origin will be globally asymptotically stable. Now, from the above calculations Δ H(x_n) = 0 implies y_nj = 0 for all j, which from the properties of ϕ implies that u_nj = 0 for all j. The zero-state observability requirement yields x_n = 0 for n ∈ℕ, and consequently M = {0}. §.§ Discrete Gradient Methods A discrete gradient ∇̅H: ℝ^n×ℝ^n →ℝ^n is an approximation of the gradient of a function H:ℝ^n→ℝ satisfying the following two properties: * ∇̅H(x,x')^T(x'-x)=H(x')-H(x), * ∇̅H(x,x)= ∇H(x). We consider the following consistent numerical discretization of (<ref>): (x_n+1-x_n)/h = B̃(x_n,x_n+1)∇̅H(x_n,x_n+1) + G̃(x_n,x_n+1)ũ_n, where we define ũ_n := u(ỹ_n), and the discrete output ỹ_n is defined to be ỹ_n := G̃(x_n,x_n+1)^T∇̅H(x_n,x_n+1). Here ∇̅H(x_n,x_n+1) is a discrete gradient, B̃ and G̃ depend on x_n and x_n+1 continuously, and are consistent discretizations, e.g. lim_h → 0 B̃(x_n,x_n+1) = B(x_n) in the case of B; in addition, B̃(x_n,x_n+1) is assumed to be skew-symmetric. From the first property of discrete gradients one easily verifies that a discrete energy balance equation is satisfied. In fact, H(x_n+1)-H(x_n) = h∇̅H(x_n,x_n+1)^TB̃(x_n,x_n+1)∇̅H(x_n,x_n+1) + h∇̅H(x_n,x_n+1)^TG̃(x_n,x_n+1)ũ_n = hỹ_n^Tũ_n, where the first term vanishes by the skew-symmetry of B̃.
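The scheme (<ref>)-(<ref>) is implicit in x_n+1. As a concrete illustration, the following Python sketch performs one step by fixed-point iteration; the discrete gradient ∇̅H and the consistent approximations B̃, G̃ are supplied by the user as functions of (x_n, x_n+1), and the tolerance and iteration cap are arbitrary choices of ours rather than prescriptions of the method.

```python
import numpy as np

def discrete_gradient_step(x, h, dH_bar, B_tilde, G_tilde, u, tol=1e-12, max_iter=100):
    """One step of the implicit scheme
        (x_{n+1} - x_n)/h = B~(x_n,x_{n+1}) dH_bar(x_n,x_{n+1}) + G~(x_n,x_{n+1}) u(y~),
        y~ = G~(x_n,x_{n+1})^T dH_bar(x_n,x_{n+1}),
    solved by fixed-point iteration."""
    x_next = x.copy()                       # initial guess for x_{n+1}
    for _ in range(max_iter):
        g = dH_bar(x, x_next)               # discrete gradient
        y = G_tilde(x, x_next).T @ g        # discrete output y~
        x_new = x + h * (B_tilde(x, x_next) @ g + G_tilde(x, x_next) @ u(y))
        if np.linalg.norm(x_new - x_next) < tol:
            return x_new
        x_next = x_new
    return x_next
```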
It is easy to verify that the hypotheses of Theorem <ref> hold for the discrete passive systems of the form (<ref>)-(<ref>), with m=1, y_n1 := ỹ_n, u_n1 := ũ_n and b_1 = 1. One choice for the discrete gradient is the averaged vector field (AVF) discrete gradient ∇̅H(x_n,x_n+1) ≡ ∫_0^1 ∇H(ρ(α)) dα, ρ(τ)=x_n(1-τ)+x_n+1τ. Note that we have H(x_n+1)-H(x_n) = ∫_0^1 ∇H(ρ(α))^Tρ̇(α) dα = ∫_0^1 ∇H(ρ(α))^T dα (x_n+1-x_n) = ∇̅H(x_n,x_n+1)^T(x_n+1-x_n), and ∇̅H(x_n,x_n) = ∫_0^1 ∇H(x_n) dα = ∇H(x_n), so this is a discrete gradient. If in addition we use the value at the midpoint x_n+1/2=(x_n+x_n+1)/2 to approximate B and G, i.e. B̃(x_n,x_n+1) = B(x_n+1/2), G̃(x_n,x_n+1) = G(x_n+1/2), we get a second order method for (<ref>) of the format (<ref>): (x_n+1-x_n)/h = B(x_n+1/2)∫_0^1 ∇H(ρ(α)) dα + G(x_n+1/2)ũ_n. If H is a polynomial of the components of x, then the integral in (<ref>) can be explicitly computed. In particular for quadratic H and linear ∇H, one finds that (<ref>) coincides with the midpoint method. This explains the behaviour observed in <cit.>, where the authors show that the midpoint method is energy-preserving for linear port-Hamiltonian systems. This property of the midpoint rule ceases to hold if H is a polynomial function of higher degree than quadratic; see the numerical experiments in Section <ref>. Generalisations of (<ref>) to higher order can be easily obtained using the ideas of <cit.>. See the Appendix for details. §.§ Interconnection and preservation of the generalised Dirac structure We consider the interconnection of two discrete port-Hamiltonian systems of the form (<ref>)-(<ref>), (x_n+1-x_n)/h = B(x_n+1/2)∇̅H(x_n,x_n+1) + G(x_n+1/2)u_n+1/2, y_n+1/2 = G(x_n+1/2)^T∇̅H(x_n,x_n+1), and (γ_n+1-γ_n)/h = B_c(γ_n+1/2)∇̅H_c(γ_n,γ_n+1) + G_c(γ_n+1/2)u_c,n+1/2, y_c,n+1/2 = G_c(γ_n+1/2)^T∇̅H_c(γ_n,γ_n+1), under the interconnection condition y_n+1/2^Tu_n+1/2 + y_c,n+1/2^Tu_c,n+1/2 = 0, which we satisfy by imposing u_n+1/2 := -y_c,n+1/2, u_c,n+1/2 := y_n+1/2. We obtain a larger discrete system [[(x_n+1-x_n)/h; (γ_n+1-γ_n)/h]] = C(x_n+1/2,γ_n+1/2)[[∇̅H(x_n,x_n+1); ∇̅H_c(γ_n,γ_n+1)]], with C(x,γ) given in (<ref>). By the skew-symmetry of C(x,γ) and the properties of discrete gradients, the obtained discrete system preserves the energy H̃(x,γ)=H(x)+H_c(γ). In fact, H̃(x_n+1,γ_n+1)-H̃(x_n,γ_n) = ∇̅H(x_n,x_n+1)^T(x_n+1-x_n) + ∇̅H_c(γ_n,γ_n+1)^T(γ_n+1-γ_n) = 0. Using the definition (<ref>), we consider the constant Dirac structures 𝒟_Ω̃_n, where Ω̃_n is the two-form associated with the skew-symmetric matrix C(x_n+1/2,γ_n+1/2). The couples of (time) discrete vector field and discrete gradient obtained by interconnection of the two discrete port-Hamiltonian systems and given by ([[(x_n+1-x_n)/h; (γ_n+1-γ_n)/h]], [[∇̅H(x_n,x_n+1); ∇̅H_c(γ_n,γ_n+1)]]), n=0,1,2,…, belong to 𝒟_Ω̃_n for all n. We can view 𝒟_Ω̃_n for n=0,1,2,… as a time-discrete approximation of the Dirac structure 𝒟_Ω̃ considered at the end of Section <ref>. §.§ Splitting Methods We can also consider a splitting method. Assume the skew-symmetric matrix B in (<ref>) permits the splitting B(x) = B_1(x) + B_2(x), where B_1(x) and B_2(x) are again both skew-symmetric. Using this matrix splitting to split the vector field of (<ref>), pushing the control part into the second system, we have S_1: ẋ = B_1(x)∇H(x), and S_2: ẋ = B_2(x)∇H(x) + G(x)u(y), with the usual output y=G(x)^T∇H(x). Let the flow maps that advance the system some time t forward along S_1 and S_2 be denoted Φ_t^[S_1] and Φ_t^[S_2] respectively.
Now suppose we apply a splitting method x_n+1 = Φ_a_1h^[S_2]∘Φ_b_1h^[S_1]∘Φ_a_2h^[S_2]∘⋯∘Φ_a_m+1h^[S_2]∘⋯∘Φ_b_1h^[S_1]∘Φ_a_1h^[S_2](x_n) to the full system (<ref>). Here we assume that all coefficients a_i and b_i are non-negative, and that the method is consistent, i.e. 2∑_i=1^m a_i + a_m+1 = 2∑_i=1^m b_i = 1. This implies that the method has a well-defined numerical flow Φ_τ with the property Φ_2h(x_n) = x_n+1. This limits us to second order methods, as higher order splitting methods (with real coefficients) must have some strictly negative coefficients <cit.>. Let the splitting method (<ref>) be applied to the splitting (<ref>) of a system (<ref>) with radially unbounded, continuous and positive definite storage function H. If no solution of S_2 with u=0 can stay in the set y=0 other than the trivial solution x=0, then the origin x=0 for the full system (<ref>) can be globally stabilised with the choice of an appropriate control input u(y)=-ϕ(y). Here ϕ is a locally Lipschitz function such that ϕ(0)=0 and y^Tϕ(y)>0 for all y≠ 0. Consider an arbitrary step from x_n to x_n+1 for n ∈ℕ. We apply 4m+1 numerical flows alternating between Φ_t^[S_1] and Φ_t^[S_2]. Let x̃_k be the point we have reached after applying k of these flows, e.g. x̃_0 = x_n and x̃_4m+1 = x_n+1. From (<ref>) it is clear that on [x̃_k,x̃_k+1] we are flowing along Φ_t^[S_1] if k is odd and Φ_t^[S_2] if k is even. It is also clear that for the system S_1, Ḣ = 0, and for S_2, Ḣ = -y^Tϕ(y) ≤ 0. Consequently H(x̃_k+1)-H(x̃_k) = 0 if k is odd and H(x̃_k+1)-H(x̃_k) ≤ 0 if k is even. Thus ΔH(x_n) := H(x_n+1)-H(x_n) = H(x̃_4m+1)-H(x̃_0) = ∑_k=0^4m [H(x̃_k+1)-H(x̃_k)] ≤ 0. By an argument identical to that in Theorem <ref>, x_n → M ≠∅, where M is the largest positively invariant set contained in the set E = {x ∈ℝ^n: Δ H(x) = 0}. As before, the origin will therefore be globally asymptotically stable if M = {0}. Now, from the above calculations Δ H(x_n) = 0 implies y = 0 while flowing along S_2, i.e. on [x̃_k,x̃_k+1] with k even. From the properties of ϕ this implies that u = 0 here. The zero-state observability requirement then yields x = 0 here, which means x̃_k = 0 and thus x_n = 0 for n ∈ℕ. Consequently M = {0}. §.§ Discrete Energy Balance and Runge-Kutta Methods It can be easily shown that if the Hamiltonian is a polynomial function of the components of x, and the structure matrix B does not depend on x, then applying the method (<ref>) to (<ref>) results in a Runge-Kutta method, see <cit.>. This shows that if we restrict to polynomial Hamiltonian functions there exist Runge-Kutta methods which satisfy a discrete energy balance equation. A concrete example is given by the midpoint method applied to problems with constant B and quadratic Hamiltonians, resulting in a linear port-Hamiltonian system. See Section IIIB in <cit.>. However, without such restrictions, this is not possible. No Runge-Kutta method satisfies (<ref>) for general Hamiltonian functions H. The proof is very similar to the proof of Proposition 4 in <cit.>. Consider a system of type (<ref>) with x(t)=[q(t),p(t)]^T, with the degenerate Hamiltonian function H=p-F(q) and input from a derivative controller u=[0,u̅q̇]^T (u̅ the constant controller gain). We define f(q):=∂F/∂q. Let B be the constant 2×2 Darboux matrix, and G=I the 2×2 identity matrix. The equations (<ref>) for this system are q̇ = 1, ṗ = f(q)+u̅, y = ∇H, where y is the output. All B-series methods (including all Runge-Kutta methods) over one step with initial condition q_0 give q_1=q_0+h.
Energy consistency according to (<ref>) requires H(q_1,p_1)-H(q_0,p_0) = ⟨Y_n,U_n⟩_L^2_h ≈∫_0^h y^Tu ds = hu̅, and we observe that any consistent approximation ⟨Y_n,U_n⟩_L^2_h ≈∫_0^h y^Tu ds = hu̅ would reproduce this integral exactly. From this we get p_1 = p_0+(F(q_1)-F(q_0))+⟨Y_n,U_n⟩_L^2_h = p_0+∫_0^h f(q) ds+hu̅. On the other hand a Runge-Kutta method would give an approximation p_1 in the form p_1=p_0+h∑_i=1^s b_i f(q_0+c_ih)+hu̅. This leads to the condition h∑_i=1^s b_i f(q_0+c_ih)=F(q_1)-F(q_0)=∫_0^h f(q) ds for the Runge-Kutta method, which can be satisfied for an arbitrary F only if all quadrature conditions ∑_i=1^s b_i c_i^k-1=1/k, k=1,2,…, are satisfied by the Runge-Kutta method. § NUMERICAL EXPERIMENTS The numerical experiments focus on the introduced discrete gradient methods. See <cit.> for an application of passivity-preserving splitting methods. §.§ Controlled Rigid Body In the first numerical experiment we illustrate the preservation of the discrete energy balance equation for the method (<ref>), and see how the method achieves a correct energy exchange between external power and internal energy. The test problem is a controlled rigid body spinning around its center of mass. The kinetic energy is H(ω,q)=1/2(I_1ω_1^2+I_2ω_2^2+I_3ω_3^2)+1/2 q^Tq, with ω∈ℝ^3 the angular velocity and q∈ℝ^4 the unit quaternion representing the attitude rotation of the body, and the equations are 𝕀ω̇ = -ω̂∇_ω H(ω,q)+u, u := -K_d∇_ω H(ω,q)-K_p∇_q H(ω,q), q̇ = [[0, 0^T; 0, ω̂]]∇_q H(ω,q), with output y=∇H(ω,q). Here K_d is 3×3 diagonal and K_p is 3×4. The energy balance equation reads dH/dt = ∇_ω H^Tω̇+∇_q H^Tq̇ = ∇_ω H^T𝕀^-1u = -∇_ω H^T𝕀^-1K_d∇_ω H-∇_ω H^T𝕀^-1K_p∇_q H. We apply the method (<ref>)-(<ref>), and we obtain the discrete energy balance equation H(ω_N,q_N)-H(ω_0,q_0) = -h∑_n=0^N-1 ∇̅_ω H_n^T𝕀^-1K_d∇̅_ω H_n - h∑_n=0^N-1 ∇̅_ω H_n^T𝕀^-1K_p∇̅_q H_n, with ∇̅_ω H_n := ∇̅_ω H((ω_n,q_n),(ω_n+1,q_n+1)), ∇̅_q H_n := ∇̅_q H((ω_n,q_n),(ω_n+1,q_n+1)). In Fig. <ref> we plot (in red) separately the discrete external power A_ext (i.e. minus the right hand side of the discrete energy balance equation), and the difference in the Hamiltonian H(t_n) - H(t_0) in blue (i.e. the left hand side of the discrete energy balance equation). We obtain the expected energy exchange. In Fig. <ref> we show that indeed the sum of these two energies is zero to machine precision. The inertia matrix is 𝕀=diag(1,2,3), K_d=diag(3,4,5), K_p=[diag(3,5,6),1], where 1∈ℝ^3 is the vector with all components equal to 1. §.§ Controlled Pendulum For a second experiment consider the simple pendulum, with a small non-linear controller term for the momentum. The system has the format (<ref>) with x = [q,p]^T ∈ℝ^2 and H(q,p)=1/2 p^2+1-cos q, B(x)=J, G(x)=[0,1]^T, u=-ϕ(y) = -0.01 arctan y. Using the theory from Theorem <ref>, one can show that this system will converge from almost every initial condition to the stable equilibrium p=0, q = 2nπ for some integer n. Note that all these values of q correspond to the same physical position. The system also has an unstable equilibrium at p=0, q = (2n+1)π. In Fig. <ref>, <ref>, and <ref> we compare the evolution of the position, the absolute error in the Hamiltonian, and the input u, respectively, for method (<ref>), the implicit midpoint method, the averaged vector field method, and the improved Euler method, all second order. The initial state is x(0) = [2.8,1.4]^T.
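A minimal Python sketch of this pendulum experiment with the AVF-based method (<ref>) is given below. For H(q,p)=p^2/2+1-cos q the AVF integral evaluates in closed form, ∇̅H = [(cos q_n - cos q_n+1)/(q_n+1-q_n), (p_n+p_n+1)/2]^T; the fixed-point tolerance and the number of steps are our own illustrative choices.

```python
import numpy as np

h = 0.5
J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # constant structure matrix B(x) = J
G = np.array([0.0, 1.0])                      # G(x) = [0, 1]^T

def u(y):                                     # input u = -phi(y), phi(y) = 0.01*arctan(y)
    return -0.01 * np.arctan(y)

def avf_gradient(x0, x1):
    """Closed-form AVF discrete gradient for H = p^2/2 + 1 - cos(q)."""
    (q0, p0), (q1, p1) = x0, x1
    dq = (np.cos(q0) - np.cos(q1)) / (q1 - q0) if abs(q1 - q0) > 1e-14 else np.sin(q0)
    return np.array([dq, 0.5 * (p0 + p1)])

def step(x0):
    """One step of the implicit AVF-based method, solved by fixed-point iteration."""
    x1 = x0.copy()
    for _ in range(100):
        g = avf_gradient(x0, x1)
        y = G @ g                             # discrete output y~ = G^T dH_bar
        x1_next = x0 + h * (J @ g + G * u(y))
        if np.linalg.norm(x1_next - x1) < 1e-13:
            break
        x1 = x1_next
    return x1_next

H = lambda q, p: 0.5 * p**2 + 1.0 - np.cos(q)
x = np.array([2.8, 1.4])                      # initial state from the text
for _ in range(400):
    x = step(x)
print(x, H(*x))                               # relaxes towards p = 0, q = 2*pi*n
```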
We observe that the AVF method and the method (<ref>), which is based on it, outperform the implicit midpoint method, not to mention the improved Euler method, for the chosen step size h=0.5. In particular, in Fig. <ref>, the two latter methods give the wrong number of full rotations, n, of the pendulum before it starts to converge towards the stable equilibrium. Consequently the input signal as shown in Fig. <ref> is also qualitatively wrong for these methods. In contrast, both the AVF method and the method (<ref>) produce results which are difficult to distinguish from the exact solution on the scale shown. The AVF method and the method (<ref>) are seen to have comparable energy preservation in Fig. <ref>, which is superior to the implicit midpoint and the improved Euler method. §.§ Capacitor Microphone For a system with dissipation, we consider the capacitor microphone from <cit.>, which can also be written in the format (<ref>) with x = [q,p,Q]^T ∈ℝ^3 and H(x)=p^2/(2m)+(q-q̅)^2/2+qQ^2/2, B(x)=[[0, 1, 0; -1, -c, 0; 0, 0, -1/R]], G(x)=[0,1,1/R]^T, u=-ϕ(y) = -(1/2)√(y). Here R = 100 is the resistance, and c = 0.1 the damping constant of the spring to which the right capacitor plate, with mass m=4, is attached; q̅ = 3 is the equilibrium point of the spring. In Fig. <ref> the evolution of the absolute error in the Hamiltonian for the dissipative system (<ref>) is compared for method (<ref>) and the improved Euler method, with step size h=0.5. Method (<ref>) is again seen to better capture the correct evolution of the energy when compared to improved Euler. Results for the implicit midpoint method and the averaged vector field method were here comparable to method (<ref>). § CONCLUSION We have presented a systematic way to design discrete port-Hamiltonian systems, starting from a continuous system, by applying discrete gradient methods and splitting methods. The obtained discrete port-Hamiltonian systems are passive, and can be globally stabilized with respect to an equilibrium point by an appropriate choice of the input of the discrete system. The obtained discrete systems are port-Hamiltonian in the sense that they preserve a discrete notion of passivity and a generalized Dirac structure. The methods derived using this approach showed promising results in numerical experiments. [Higher Order Discrete Gradient Methods] Generalisations of the second order method (<ref>) to higher order can be obtained using a collocation idea as in <cit.>. Let us denote the Lagrange basis function on the node c_j by ℓ_j for j = 1,…,s, write σ(t) for the collocation polynomial, and define X_τ:=σ(t_n+τh) and X_j:=σ(t_n+c_jh). Now, consider the following collocation method to integrate (<ref>): σ(t_n) = x_n, σ̇(t_n+c_jh) = B(X_j)∇̅H_j+G(X_j)u_j, x_n+1 = σ(t_n+h), with b_j:=∫_0^1 ℓ_j(α) dα, ∇̅H_j := ∫_0^1 ℓ_j(α)/b_j ∇H(X_α) dα, y_j := G(X_j)^T∇̅H_j, and where the discrete controls u_j depend on σ. Using Lagrange interpolation we can express the derivative of the collocation polynomial as σ̇(t_n+τh)=∑_j=1^s ℓ_j(τ)[B(X_j)∇̅H_j+G(X_j)u_j], and obtain X_τ=σ(t_n+τh) by integration. We may define stage values for the output as y_j = G(X_j)^T∇̅H_j. The collocation polynomial X_τ gives a natural continuous form of the numerical solution on the whole interval of integration.
Along the approximated solution X_τ, a polynomial of degree s, we will show that the numerical method preserves a discrete passivity property, for quadratures c_1,…,c_s with positive weights b_1,…,b_s. In fact we have H(x_n+1)-H(x_n)=h∫_0^1 ∇H(X_τ)^Tσ̇(t_n+τh) dτ. Then after simple calculations we obtain H(x_n+1)-H(x_n) = h∑_j=1^s b_j∇̅H_j^TB(X_j)∇̅H_j + h∑_j=1^s b_j∇̅H_j^TG(X_j)u_j, and upon using that B(·) is skew-symmetric, H(x_n+1)-H(x_n) = h∑_j=1^s b_j∇̅H_j^TG(X_j)u_j = h∑_j=1^s b_j y_j^Tu_j, which clearly satisfies the discrete energy balance equation (<ref>). The consequent passivity inequality becomes H(x_n+1)-H(x_n) ≤ h∑_j=1^s b_j y_j^Tu_j. We now state and prove the following theorem. Assume that the discrete port-Hamiltonian system (<ref>) defines a unique one-step map x_n+1 = Ψ(x_n), and that the system is passive with radially unbounded positive definite storage function H. Furthermore assume that no numerical solution {x_n}_n∈ℕ satisfying the system of equations σ(t_n) = x_n, σ̇(t_n+c_jh) = B(X_j)∇̅H_j, x_n+1 = σ(t_n+h) can simultaneously satisfy the requirement y_j = 0 for j = 1,…,s at every solution step, other than the trivial solution, i.e. x_n = 0 for n ∈ℕ. Then the origin x=0 can be globally stabilised with the choice of an appropriate control input u_j=-ϕ(y_j), where ϕ is a function such that ϕ(0)=0 and y^Tϕ(y)>0 for all y≠ 0. From (<ref>) we have ΔH(x_n)=H(Ψ(x_n))-H(x_n)=-h∑_j=1^s b_j∇̅H_j^TG(X_j)ϕ(G(X_j)^T∇̅H_j) ≤ 0, where the last inequality follows from the weights b_j being positive and the properties of ϕ. Note that this inequality holds termwise. Since H is continuous on ℝ^n it is a (discrete) Lyapunov function for (<ref>) on ℝ^n. In addition, because H is radially unbounded, it follows that all solutions of this discrete system are bounded. Consequently, from Proposition <ref>, x_n → M ≠∅, where M is the largest positively invariant set contained in the set E = {x ∈ℝ^n: Δ H(x) = 0}. Thus if M = {0} the origin will be globally asymptotically stable. Now, from the above calculations Δ H(x_n) = 0 implies y_j = 0 for all j, which from the properties of ϕ implies that u_j = 0 for all j. Therefore (<ref>) reduces to (<ref>). Now from the zero-state observability requirement x_n = 0 for n ∈ℕ, and consequently M = {0}. Elena Celledoni: Biography will be included in the final published version, if the paper is accepted. Eirik Hoel Høiseth: Biography will be included in the final published version, if the paper is accepted.
http://arxiv.org/abs/1706.08621v1
{ "authors": [ "Elena Celledoni", "Eirik Hoel Høiseth" ], "categories": [ "math.NA", "65P40" ], "primary_category": "math.NA", "published": "20170626225745", "title": "Energy-Preserving and Passivity-Consistent Numerical Discretization of Port-Hamiltonian Systems" }
§ INTRODUCTION Accurate diagnosis of Alzheimer's Disease (AD) entails clinical evaluation of multiple cognition metrics and biomarkers. Metrics such as the Alzheimer's Disease Assessment Scale-Cognitive test (ADAS-cog) <cit.> comprise multiple subscores that quantify different aspects of a patient's cognitive state, such as learning, memory, and language production/comprehension. Although computer-aided diagnostic techniques for classification of a patient's current disease state exist <cit.>, they provide little insight into the relationship between changes in brain structure and different aspects of a patient's cognitive state that occur over time in AD. § METHODS We created a Convolutional Neural Network (CNN) architecture that maps an input tuple comprising a patient's current structural MRI (sMRI) scan and a future time (in number of months) to the values of the patient's 13 ADAS-cog subscores predicted for that time. The architecture (Fig. <ref>; see also the sketch below) consists of * 3 convolutional layers, each comprising convolution, maximum pooling, and rectified linear activation; * 3 fully connected layers, each comprising N linear units and rectified linear activation, where N=6000, 1000, 500, respectively. The first convolutional layer's input consists of the specified sMRI scan; all outputs of the third convolutional layer and the input prediction time are fed to all units in the first fully connected layer. Weights in all convolutional layers were initialized as described in <cit.>. To prevent overfitting, dropout <cit.> with a probability of 0.5 was performed after max pooling in every convolutional layer during training. Optimization was performed using the RMSProp <cit.> algorithm. A smooth L_1 loss function was employed to reduce sensitivity to outliers <cit.>. The architecture was implemented in PyTorch <cit.> and trained on Ubuntu Linux 16.04 with NVIDIA Tesla M40 Graphics Processing Units (GPUs). To evaluate the architecture, we performed 5-fold cross-validation with training and testing data drawn from a set of tuples of sMRI scans, cognitive scores, and time intervals between MRI and cognitive score acquisition. Intervals were multiples of 6 months between 0 and 36. Each fold was stratified to contain equal numbers of subscores for each unique interval. To ensure that each fold contained a sufficient number of tuples for each interval, the folds were drawn from a dataset comprising 1000 entries. This dataset contained multiple images for some patients, although the number of images obtained from an individual patient across the dataset was not fixed owing to the size of the parent ADNI dataset from which the images were drawn. All sMRI and cognitive score data used for evaluation of the architecture were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (<http://adni.loni.usc.edu>). The primary goal of ADNI has been to test whether serial MRI, PET, other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD). Only 1.5T MRI scans from a quality-controlled subset of the ADNI1 phase of the project were utilized <cit.>.
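A schematic PyTorch sketch of this architecture is shown below. The text does not specify kernel sizes, channel counts, the dimensionality of the convolutions, or how the 500-unit layer maps to the 13 outputs, so those elements (and the flattened feature size n_flat) are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class SubscoreCNN(nn.Module):
    """Sketch: 3 conv blocks (convolution, max pooling, ReLU, with dropout
    p=0.5 after pooling), then fully connected layers of 6000/1000/500 ReLU
    units; the prediction time (months) is appended to the flattened conv
    features. 3-D convolutions and all sizes are assumptions, not specified
    in the text."""
    def __init__(self, n_flat=4096):              # assumed flattened feature size
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv3d(c_in, c_out, kernel_size=3),
                                 nn.MaxPool3d(2), nn.Dropout(0.5), nn.ReLU())
        self.conv = nn.Sequential(block(1, 8), block(8, 16), block(16, 32))
        self.fc = nn.Sequential(nn.Linear(n_flat + 1, 6000), nn.ReLU(),
                                nn.Linear(6000, 1000), nn.ReLU(),
                                nn.Linear(1000, 500), nn.ReLU(),
                                nn.Linear(500, 13))  # final map to 13 subscores (assumed)

    def forward(self, scan, months):
        f = self.conv(scan).flatten(1)                  # (batch, n_flat)
        f = torch.cat([f, months.view(-1, 1)], dim=1)   # append prediction time
        return self.fc(f)

model = SubscoreCNN()
loss_fn = nn.SmoothL1Loss()                     # smooth L1 loss, as in the text
opt = torch.optim.RMSprop(model.parameters())   # RMSProp optimizer
```

In practice n_flat is dictated by the input volume shape and would be computed from it rather than fixed by hand.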
sMRI scans were minimally preprocessed to remove non-brain tissue, normalize intensities, and register all scans to a single coordinate space. All subscores were normalized to the range [0,1] prior to training and testing. § RESULTS We computed the root mean squared error (RMSE) for each of the 13 subscores for each interval (Fig. <ref>). The means and standard errors across all intervals and subscores varied from 0.009 to 0.251 and from 0.001 to 0.046, respectively. To compare the performance of our network with other approaches that only predict the aggregate score, we also computed the RMSE and Pearson's correlation coefficient of the predicted/actual aggregate score normalized to the range [0,1] (Fig. <ref>). We obtained the latter by rescaling the normalized subscores to their original respective ranges and normalizing the sum using the highest possible aggregate score. The means and standard errors of the RMSEs of our method's normalized aggregate score prediction for the intervals considered varied from 0.058 to 0.087 and from 0.014 to 0.028, respectively. The means and standard errors of the correlation between predicted and actual aggregate scores across all intervals varied from 0.643 to 0.862 and from 0.040 to 0.149, respectively. § DISCUSSION To our knowledge, our architecture is the first to concurrently predict multiple cognitive examination subscores from minimally preprocessed structural brain data. Variations in prediction accuracy across the subscores illustrate differences in the relationships between brain structure and specific aspects of cognition that are obscured by prediction of the aggregate score. The mean performance of our architecture when applied to aggregate score prediction is similar to that of existing techniques that variously utilize stepwise regression <cit.>, relevance vector regression <cit.>, multi-task learning <cit.>, and support vector machines <cit.>. Since the quality of features extracted by CNNs generally improves with the amount of training data and the number of network layers <cit.>, we anticipate that deeper variations of our architecture trained on more extensive subsets of the ADNI database will be able to achieve state-of-the-art performance. § CONCLUSION We have developed a CNN architecture that can predict the trajectories of the 13 subscores comprising a subject's ADAS-cog examination from a current sMRI scan up to 36 months from image acquisition time without resorting to manual feature extraction. Mean performance metrics are within range of those of existing techniques that require manual feature selection and are limited to predicting aggregate scores. § KEYWORDS Alzheimer's Disease, deep learning, convolutional neural network, cognitive assessment, machine learning, MRI § ACKNOWLEDGMENTS Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI, principal investigator: Michael Weiner) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging and the National Institute of Biomedical Imaging and Bioengineering. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
http://arxiv.org/abs/1706.08491v2
{ "authors": [ "Lev E. Givon", "Laura J. Mariano", "David O'Dowd", "John M. Irvine", "Abraham R. Schneider" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20170626172942", "title": "Cognitive Subscore Trajectory Prediction in Alzheimer's Disease" }
[email protected] of Optics, Palacký University, 17. listopadu 1192/12, 77146 Olomouc, Czech RepublicDepartment of Optics, Palacký University, 17. listopadu 1192/12, 77146 Olomouc, Czech Republic Quantum nonlinear operations for harmonic oscillator systems play a key role in the development of analog quantum simulators and computers. Since a variety of strong highly nonlinear operations are unavailable in the existing physical systems, it is a common practice to approximate them by using conditional measurement-induced methods. The conditional approach has several drawbacks, the most severe of which is the exponentially decreasing success rate of the strong and complex nonlinear operations. We show that by using a suitable two level system sequentially interacting with the oscillator, it is possible to resolve these issues and implement a nonlinear operation both nearly deterministically and nearly perfectly. We explicitly demonstrate the approach by constructing self-Kerr and cross-Kerr couplings in a realistic situation, which require a feasible dispersive coupling between the two-level system and the oscillator.Deterministic nonlinear gates with oscillators mediated by a qubit Radim Filip December 30, 2023 ==================================================================empty§ INTRODUCTIONQuantum computers or quantum Turing machines <cit.> take advantage of their quantum mechanical architecture and are capable of solving tasks which are exponentially hard for their classical counterparts <cit.>. Their predecessors are quantum simulators <cit.>, which seek to emulate specific quantum dynamics of particular quantum systems in place of general processing. The fundamental principle of the simulations relies on mapping the complex quantum systems onto other more accessible and better controllable ones, such as trapped ions <cit.>, photons <cit.>, atomic lattices <cit.> and superconducting circuit <cit.>. The analog simulators are dedicated to continuous variables (CV) systems with infinite dimensional Hilbert space <cit.>. These systems allow for simulations of unexplored highly nonlinear open quantum dynamics <cit.>. Some CV nonlinear operations naturally appear in other physical systems, such as Bose-Einstein condensates <cit.>, cold ions <cit.>, or circuit quantum electrodynamics <cit.>. The spectrum of nonlinear operations is however limited and typically determined by the unique physics of specific experimental platforms.A broader set of nonlinear operations for quantum harmonic oscillator can be elegantly realized by coupling them to suitable two-level systems (qubits) <cit.>. This realization is possible because the two-level systems are naturally nonlinear due to their saturability and offer a wide variety of qubit-oscillator couplings. The nonlinear nature in turn leads to dynamics of the oscillator which can be used for deterministic generation of nonclassical states <cit.> or for conditional realization of nonlinear quantum potentials <cit.>. The two level systems are also beneficial from a technical standpoint, allowing for a significantly larger number of individual interactions <cit.> than what is allowed for purely optical ancillary single photon states <cit.>. 
The conditional nature of these hybrid operations, however, limits their suitability for practical applications as well as quantum simulations, and ultimately leads to a success rate which decreases exponentially with the number of operations involved. In this report we propose a method for deterministic implementation of nonlinear unitary operations for quantum harmonic oscillators sequentially coupled to single qubits. This method relies on employing a sequence of available non-commuting qubit-oscillator interactions, similarly to <cit.>. Unlike in the conceptually similar quantum Zeno gates <cit.>, the qubits act only as mediators rather than controls, starting and finishing the operation in a factorized state. The repeated gates incrementally create Zeno-like nonlinear unitary dynamics deterministically and with nearly unit fidelity. We illustrate the quality of the proposed method by explicitly analyzing the realization of the self-Kerr and cross-Kerr nonlinearities with the help of a qubit sequentially coupled to the oscillator by dispersive interactions <cit.> under photon losses. § SHORT-TIME OSCILLATOR INTERACTION TRANSDUCED BY A QUBIT Let us start by considering a short-time evolution of a quantum oscillator mediated by a single qubit. The unitary oscillator-qubit interaction that enables the desired dynamics is governed by Hamiltonians of the type H_Â=ħσ̂_jÂ, where σ̂_j with j = x,y,z relates to the qubit system and stands for one of the Pauli matrices, and  is an operator acting on the oscillator. We assume resonant cases in which the free-evolution Hamiltonians can be eliminated. To achieve the desired gate on the oscillator, we can consider a pair of non-commuting unitary operators Û_x=exp[iτσ̂_x Â] and Û_y=exp[iτσ̂_y B̂], where the oscillator operators  and B̂ commute, [Â,B̂] = 0. As depicted in Fig. <ref>a, we can join them into a sequence Û_xyxy=Û_xÛ_yÛ_x^†Û_y^† following the idea of the geometric phase effect <cit.>. In a manner similar to <cit.>, this operator can be simplified to Û_xyxy=exp[iτσ̂_x Â]exp[iτσ̂_y B̂]exp[-iτσ̂_x Â]exp[-iτσ̂_y B̂]=1-2sin^2[τÂ]sin^2[τB̂]+isin[2τÂ]sin^2[τB̂]σ̂_x-isin^2[τÂ]sin[2τB̂]σ̂_y-i/2sin[2τÂ]sin[2τB̂]σ̂_z≈exp[-2iτ^2σ̂_z ÂB̂]≡Û_σ̂_z ÂB̂, where the last line corresponds to the weak-strength limit τ≪ 1 <cit.>. The resulting oscillator dynamics is driven by the product of operators ÂB̂ and coupled to the qubit by σ̂_z. The qubit degree of freedom can be straightforwardly eliminated by preparing and measuring the qubit system in one of the relevant eigenstates, such as |g⟩. The measurement then substitutes the discarding of the qubit depicted in Fig. <ref>a. The whole sequence ⟨g|Û_xyxy|g⟩ then realizes a conditional operator Ô_1=⟨g|Û_xyxy|g⟩=1-2sin^2[τÂ]sin^2[τB̂]+isin[2τÂ]sin[2τB̂]/2, which approximates the unitary operation U_ÂB̂=exp[-2iτ^2ÂB̂] in the limit of small τ. The commutativity of  and B̂ restricts the generality of the scheme, but still allows for many interesting cases. The base operators  and B̂ can be compatible operators on a single oscillator (as in Fig. <ref>a), or different operations on two separate oscillators (illustrated in Fig. <ref>b). The most apparent scenarios in which the product of the two operators yields a highly nontrivial and practically useful operation are the self-Kerr and cross-Kerr evolutions, which we will address in detail later.
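The expansion above is easy to verify numerically. The sketch below, assuming the self-Kerr choice Â=B̂=n̂ on a truncated Fock space (the cutoff and τ are illustrative), compares Û_xyxy with the ideal exp[-2iτ^2σ̂_z ÂB̂] and checks the closed form of Ô_1; here |g⟩ is taken as the σ̂_z = -1 eigenstate, which matches the sign convention of Ô_1 as written above.

```python
import numpy as np
from scipy.linalg import expm

tau, N = 0.02, 10                          # illustrative strength and Fock cutoff
n = np.diag(np.arange(N, dtype=complex))   # truncated number operator, A = B = n
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Ux = expm(1j * tau * np.kron(sx, n))
Uy = expm(1j * tau * np.kron(sy, n))
U_xyxy = Ux @ Uy @ Ux.conj().T @ Uy.conj().T

U_ideal = expm(-2j * tau**2 * np.kron(sz, n @ n))
print(np.linalg.norm(U_xyxy - U_ideal))    # residual is third order in tau

O1 = U_xyxy[N:, N:]                        # <g|U_xyxy|g>, |g> = sigma_z = -1 eigenstate
s2, sd = np.sin(tau * n) ** 2, np.sin(2 * tau * n)
print(np.linalg.norm(O1 - (np.eye(N) - 2 * s2 @ s2 + 0.5j * sd @ sd)))  # ~ machine precision
```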
In each step, the ancillary qubit is initialized in the ground state, led to interact with the oscillator systems, and finally projected onto the ground state again. It does not matter whether a single physical qubit is used repetitively or if a number of different systems is employed. In any case, R repetitions realize the quantum operation Ô_R = (Ô_1)^R, which approximates the ideal operation Ô_T ≡ e^-2iRτ^2 ÂB̂. Interestingly enough, in the limit of sufficiently small τ the re-initialization of the qubit is not needed, as the approximate operator can also be obtained as Ô_R = ⟨g|(Û_xyxy)^R|g⟩. For a specific test state |ψ⟩, the performance of the operation can be quantified by looking at its successful implementation probability P_s=⟨ψ|Ô_R^†Ô_R|ψ⟩ and fidelity F=|⟨ψ|Ô_T^†Ô_R|ψ⟩|^2/P_s. These metrics inherently depend on the chosen state |ψ⟩, but we can also directly analyze the sandwiched operators Q̂_f=Ô_T^†Ô_R and Q̂_s=Ô_R^†Ô_R. In the ideal case of Ô_R=Ô_T, both of these operators Q̂_s and Q̂_f reduce to the identity operator 1̂. We can therefore discern the quality of the operation by looking at how far we are from this ideal scenario. This analysis is best accomplished by considering the joint eigenbasis of the commuting operators  and B̂, consisting of states |m⟩ with the respective eigenvalues m_A and m_B. Note that the basis does not need to be discrete. We can write the diagonal elements of Q̂_f and Q̂_s as ⟨m|Q̂_s|m⟩=|⟨m|Q̂_f|m⟩|^2, where the unitarity of the operator Ô_T is utilized. We can notice an interesting behavior: the fidelity and the success probability are not complementary and can approach unity simultaneously. In the limit of small τ, the probability of success is quantified as ⟨m|Q̂_s|m⟩≈ 1-4m_A^2m_B^2(m_A^2+m_B^2)Rτ^6, which shows the exact boundaries of the Hilbert space which supports the operation with a sufficient quality. Specifically, an approximate operation with conditional fidelity F_c ⪆ 1-ϵ and success probability P_s ⪆ 1-ϵ, where ϵ≪ 1, can be realized for states fully contained in the Hilbert space for which m_max^6 < ϵ/(8Rτ^6), where m_max = max(|m_A|,|m_B|). We can also rewrite the conditions in terms of the fixed total interaction strength T = 2Rτ^2 as m_max^6 < ϵR^2/T^3, which tells us that a large number of repetitions R can enlarge the available support of the operation. It should also be noted that the operators  and B̂ typically represent position, momentum, or number of quanta of the oscillators, whose statistical distributions vanish asymptotically outside a certain range, and are therefore reasonably bounded in realistic physical systems. The prominent aspect of our scheme is that its success probability can approach one even for many repetitions, implying that the measurement can be removed from the setup. We therefore follow the deterministic scheme depicted in Fig. <ref>. Formally, a single step of the operation is no longer represented by an operator Ô_1, but by a trace-preserving map which deterministically transforms any input state ρ̂_in into ρ̂_out=Tr_q[Û_xyxy{|g⟩_q⟨g|⊗ρ̂_in}Û_xyxy^†]=Ô_1ρ̂_inÔ_1^†+Ô_2ρ̂_inÔ_2^†, where Ô_1=⟨g|Û_xyxy|g⟩=1-2sin^2[τÂ]sin^2[τB̂]+isin[2τÂ]sin[2τB̂]/2 is the successful operation and Ô_2=⟨e|Û_xyxy|g⟩=-sin^2[τÂ]sin[2τB̂]+isin[2τÂ]sin^2[τB̂] is the erroneous operation.
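As an illustration, this deterministic map can be iterated numerically. The sketch below does so for the self-Kerr choice Â=B̂=n̂ (anticipating the next section), applying the two Kraus operators R times to a coherent state; the parameter values are illustrative and the phase of the target state follows the sign convention of Ô_1 above. One can check directly that Ô_1^†Ô_1+Ô_2^†Ô_2=1̂, so the map is indeed trace-preserving.

```python
import numpy as np
from math import factorial

tau, R, N = 0.02, 1000, 25            # T = 2*R*tau^2 = 0.8; illustrative values
k = np.arange(N)
s2, sd = np.sin(tau * k) ** 2, np.sin(2 * tau * k)
O1 = np.diag(1 - 2 * s2 * s2 + 0.5j * sd * sd)   # successful Kraus operator
O2 = np.diag(-s2 * sd + 1j * sd * s2)            # erroneous Kraus operator

beta = 1.0                                        # coherent test state |beta>
psi = np.array([beta**m / np.sqrt(factorial(m)) for m in k], dtype=complex)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

for _ in range(R):                                # R steps of the deterministic map
    rho = O1 @ rho @ O1.conj().T + O2 @ rho @ O2.conj().T

target = np.exp(2j * R * tau**2 * k**2) * psi     # ideal self-Kerr output state
print("fidelity:", np.real(target.conj() @ rho @ target))
```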
When the individual operation is repeated R times, the final output state can be expressed as ρ̂_out = P_sÔ_Rρ̂_inÔ_R^† + (1-P_s)ρ̂_error^R, where P_s denotes the success probability of the probabilistic scheme with otherwise identical parameters, and the density matrix ρ̂_error^R groups together all the realizations which would, in the probabilistic scenario, be disqualified by the measurements. For states from the Hilbert space limited by (<ref>), the fidelity is lower bounded by F ≥ P_sF_c ≈ 1-2ϵ. This result shows that the performance of the deterministic scheme is comparable to the probabilistic regime. Considering (<ref>) and the respective fidelities, the deterministic scheme achieves the performance of the probabilistic one when the number of repetitions R is increased by a factor of √(2). § EXAMPLE OF SELF-KERR QUANTUM INTERACTION Let us explicitly demonstrate the performance of the proposed gate by realizing some of the nonlinear gates prevalent in quantum information theory and quantum technology. The self-Kerr operation <cit.> is realized by a unitary operator exp(iTn̂^2), and in our approach it can be straightforwardly achieved by setting Â=B̂=n̂, where n̂=â^†â is the number operator of the harmonic oscillator. The implementation requires coupling with Hamiltonian H ∝n̂σ̂_j, where σ̂_j are Pauli matrices. It can be obtained as part of the dispersive interaction available between two-level systems and oscillators in cavity field and membrane systems <cit.>, atoms <cit.>, circuit QED <cit.> and superconducting systems <cit.>. In contrast to the approach of circuit QED <cit.>, which employs suitable time-dependent driving of the qubit-oscillator system, our method employs a set of identical elementary gates, which can be repeated in order to obtain a strong interaction. As a consequence, the whole operation is less demanding from the point of view of the ability to control the employed quantum systems. The performance of the gate can be generally estimated from the parameters and from the available dimension given by (<ref>). However, such a bound may be too loose, and the actual performance depends on the specific choice of the states. Let us apply the self-Kerr operation to a sample coherent state |β⟩=exp[βâ^†-β^*â]|0⟩ with β = 1. The self-Kerr operation is a non-classical and non-Gaussian operation, and produces a non-classical and non-Gaussian state when applied to a coherent state <cit.>. Such states are necessary for advanced applications of quantum information processing such as quantum computation <cit.>, and can be recognized by negative regions of their Wigner functions <cit.>. In relation to the self-Kerr effect, a larger Kerr interaction strength T produces more complex structures of the negative Wigner function <cit.>. In Fig. <ref>, we display the negative regions of the Wigner function of self-Kerr transformed coherent states with various coupling parameters T=0.2, 0.4, 0.6, 0.8. Apparently, a birth of highly nonclassical quantum interference in phase space can be observed. It is manifested by three separated regions of negativity. The figures show practically no difference between the ideal operation (above) and the deterministic approximate realization with τ=0.02 (middle). This observation is reinforced by a near-unit fidelity F = 1-0.8×10^-4 for T=0.8. Interestingly, based on (<ref>) and the parameters of the operation, the maximal Fock number allowing such a high value of fidelity would be n_max = 0.
As only around one third of the considered coherent state lives in that subspace, this tells us that for practical states the conditions for successful approximation might be even more relaxed. For example, for large coherent states with |β|^2 ≫ 1, the fidelity of the deterministic approximative scheme scales as F ≈ 1-9T^3|β|^10/R^2, derived from the lowest-order expansion of the fidelity in T. In realistic scenarios, the operation will have to endure the effects of imperfections, mainly loss, which is the dominant decoherence model for quantum oscillators. The loss can be modeled by passively coupling the evolving system to a set of zero-temperature oscillators. In our model, we consider a sequence of discrete couplings, one after each cycle of the elementary sequence (<ref>). Each of these couplings transforms the annihilation operator of the system as â→√(η)â + √(1-η)â_bath, where â_bath is the annihilation operator of the auxiliary zero-temperature oscillator, which is immediately discarded. The single-step transmittance parameter η strongly impacts the performance of the method. To see how, we have simulated the realistic operation for η=1-5.6×10^-4. The loss counteracts the effects of the nonlinear operation. As the interaction time increases, the state continuously becomes more and more non-classical, which is witnessed by the appearance of negative areas in its Wigner function. However, the loss is accumulated with time, and at some point so much of the energy is lost that the non-classical features vanish. This can be seen in the bottom row of Fig. <ref>. We can see that while the loss of 13% of the energy for T=0.2 did not severely affect the non-classicality, a 40% loss for T=0.8 already removed one area of negativity. We therefore conclude that the proposed method is not critically sensitive to basic decoherence caused by a loss in the oscillator. § EXAMPLE OF CROSS-KERR QUANTUM INTERACTION Another example of quantum nonlinear interactions is the cross-Kerr coupling between two harmonic oscillators. This gate is a key component in building important two-qubit single-photon gates in linear optical quantum computation, such as controlled NOT gates and Fredkin gates <cit.>, and nondestructive photon detection <cit.>. It also enables the direct photon-photon interaction used in many quantum information protocols, such as one-way computation <cit.>. The cross-Kerr interaction, represented by a unitary operator exp[iTn̂_1n̂_2], can be engineered from the same fundamental component as the self-Kerr operation: the dispersive coupling between an oscillator and a qubit, only this time the qubit is coupled to two separate oscillators (as in Fig. 1b), so Â=n̂_1 and B̂=n̂_2. An elementary application is altering the phase of a single photon based on the presence or absence of another, which is the basis for many discrete computation gates <cit.>. In the example of the control-Z gate <cit.>, a separable state of two oscillators |00⟩ + |01⟩ + |10⟩ + |11⟩ is changed to the entangled state |00⟩ + |01⟩ + |10⟩ - |11⟩ by the cross-Kerr gate with a strength T=π. Within our approach, the deterministic cross-Kerr gate with fidelity F = 1-10^-5 can be achieved from R=1000 instances of the basic block. This scenario suits the approximation well due to the limited number of photons in the systems. However, there are other applications in which larger photon numbers are significant <cit.>. To test for this scenario, we consider the cross-Kerr coupling between two coherent states with amplitudes α = β = 1.
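A minimal numerical sketch of this two-mode scenario is given below. Since Â=n̂_1 and B̂=n̂_2 are diagonal in the two-mode Fock basis, both Kraus operators are diagonal and R steps of the deterministic map reduce to an elementwise power; the Fock cutoff is illustrative and the target phase again follows the sign convention of Ô_1.

```python
import numpy as np
from math import factorial, pi

tau, N = 0.05, 12
R = int(round(pi / (2 * tau**2)))        # total strength T = 2*R*tau^2 ~ pi
k = np.arange(N)

def coh(alpha):                          # truncated, renormalized coherent state
    v = np.array([alpha**m / np.sqrt(factorial(m)) for m in k], dtype=complex)
    return v / np.linalg.norm(v)

a, b = k[:, None], k[None, :]            # eigenvalue grids of A = n1, B = n2
d1 = (1 - 2*np.sin(tau*a)**2*np.sin(tau*b)**2
      + 0.5j*np.sin(2*tau*a)*np.sin(2*tau*b)).ravel()   # diagonal of O_1
d2 = (-np.sin(tau*a)**2*np.sin(2*tau*b)
      + 1j*np.sin(2*tau*a)*np.sin(tau*b)**2).ravel()    # diagonal of O_2

psi = np.kron(coh(1.0), coh(1.0))
M = np.outer(d1, d1.conj()) + np.outer(d2, d2.conj())   # elementwise one-step map
rho = M**R * np.outer(psi, psi.conj())                  # R deterministic steps

target = np.exp(2j * R * tau**2 * (a * b).ravel()) * psi  # ideal cross-Kerr output
print("fidelity:", np.real(target.conj() @ rho @ target))

pt = rho.reshape(N, N, N, N).transpose(0, 3, 2, 1).reshape(N*N, N*N)  # partial transpose
print("negativity:", (np.abs(np.linalg.eigvalsh(pt)).sum() - 1) / 2)
```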
Considering again interaction strength T = π, the operation can be implemented with fidelity F=0.989 for R=1000 and F = 1-5× 10^-4 with R = 2500 repetitions. A higher number of individual operations is demanded by the larger Hilbert space of the states for a fidelity comparable with the previous example. We can also analyze the operation from the point of view of entanglement it generates. There are several measures of entanglement <cit.>, and here we adopt the negativity due to the ease of its evaluation <cit.>. The negativity of a bipartite state given by a density operator ρ can be obtained as N[ρ]=Tr[|ρ^PT|]-1/2 as the measure of entanglement, where ρ^PT is the partial transposed density matrix and Tr[|·|] is the trace norm. The analysis should also clearly show that the cross-Kerr gate is non-Gaussian and the created entanglement should therefore be of the non-Gaussian nature. To that end we also look at the Gaussian negativity N_G[ρ]=Tr[|ρ_G^PT|]-1/2, where ρ_G is the density matrix of a Gaussian state which has all first and second moments of quadrature operators identical with ρ <cit.>. Both the Gaussian and the non-Gaussian entanglement of the state generated by the cross-Kerr gate are plotted in Fig. <ref> for various values of the interaction strength T. The interaction strength of dispersive interactions was chosen as τ=0.05. We can see that the entanglement created for larger values of T is practically completely non-Gaussian, as expected, and that the simulated process closely follows the ideal scenario.To assess an impact of the decoherence on the cross Kerr interaction, we introduce an equal loss in the both oscillators.Simulations with a realistic loss with η=1-3.5× 10^-3, corresponding to the same level of noise as in a previous section, show results conceptually similar to the self-Kerr case. Again, the loss limits the achievable number of elementary gates and the corresponding total interaction strength. State with dominantly non-Gaussian entanglement can be still achieved, but the maximal difference between non-Gaussian and Gaussian entanglement is limited. For our simulation, this differencemax_ρ{N[ρ]-N_G[ρ]} was 0.31 at the energy loss of about 40% for a single arm.There is, however, another interesting effect. In addition to reducing the overall correlations, the loss also drives the quantum state towards Gaussianity. As a consequence, there is less of entanglement, but higher portion of it is Gaussian. In fact, for certain values of parameters the lossy scenario produces more Gaussian entanglement than the ideal one, while non-Gaussian nature is still accessible. It supports previous statements about a sufficient robustness of the method to the loss in oscillator. § APPLICATIONS AND OUTLOOK In summary, using a single qubit as a recyclable mediator allows for synthesis of high order nonlinear operations on quantum oscillators. These operations can be realized at an arbitrary strength with both fidelity and probability of success approaching one. The only cost is represented by the required number of repetitions of the basic building block, which may be mitigated by using an optimized architecture. Operations which can be implemented depend on the available qubit-oscillator couplings. With the feasible dispersive coupling <cit.> it is possible to realize self-Kerr and cross-Kerr operations, which play a significant role in quantum information processing, with high quality under a moderate level of environmental effects. 
The extension of the scheme ranges from engineering high order quadrature nonlinear operators, such as cubic-phase gate operator by Rabi interactions <cit.>, to hybrid interaction operator such as principally nonlinear optomechanical interactions <cit.> by combination of the dispersive and Rabi interactions. The higher-order versions of both dispersive and Rabi interactions open a broad class of CV nonlinear interactions.The involved harmonic oscillators can be physically varied (optical, mechanical, electrical, collective spins), and therefore this method can potentially provide wide class of nonlinear gates between these platforms. All of these potential applications open up a possibility of deterministic quantum simulators.99 quantumcomputer P. Benioff , J. Stat. Phys. 22, 563 (1980); D. Deutsch, P. Roy. Soc. Lond. A. 400, 97 (1985). algorithm P. W. Shor, SIAM J. Comput., 26, 1484 (1997); D. R. Simon, Foundations of Computer Science, 1994 Proceedings., 35th Annual Symposium on: 116 (1994); D. Deutsch and R. Jozsa, P. Roy. Soc. Lond. A. 439, 553 (1992). simulators V. M. Kendon, K. Nemoto, and W. J. Munro, Philos. Trans. R. Soc. Lond. A 368, 3609 (2010); J. Cirac and P. Zoller, Nat. Phys. 8, 246 (2010); I.M. Georgescu, S. Ashhab, and F. Nori, Rev. Mod. Phys. 86, 153 (2014). Feynman1982 R. Feynman, Int. J.Theor. Phys. 21, 467 (1982).simulation_ions K. Kim, M. S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G. D. Lin, L. M. Duan, and C. Monroe, Nature, 465,590 (2010); R. Gerritsma , G. Kirchmair, F. Zhringer, E. Solano, R. Blatt, and C. F. Roos, Nature 463, 68 (2010); B. P. Lanyon, C. Hempel, D. Nigg, M. Mller, R. Gerritsma, F. Zhringer, P. Schindler, J. T. Barreiro, M. Rambach, G. Kirchmair, M. Hennrich, P. Zoller, R. Blatt, and C. F. Roos, Science 334, 57 (2011); R. Blatt and C. F. Roos, Nat. Phys. 8, 277 (2012).simulation_photons A. Peruzzo , M. Lobino, J. C. Matthews, N. Matsuda, A. Politi, K. Poulios , X.-Q. Zhou, Y. Lahini, N. Ismail, K. Wörhoff, Y. Bromberg, Y. Silberberg, M. G. Thompson, J. L. O'Brien, Science 329, 1500 (2010); A. Aspuru-Guzik and P. Walther, Nat. Phys. 8, 285 (2012). atomiclattice J. Simon, W. S. Bakr, R. Ma, M. E. Tai, P. M. Preiss and M. Greiner, Nature 472, 307 (2011); I. Bloch, J. Dalibard, S. Nascimbéne, Nat. Phys. 8, 267 (2012). superconducting A. A. Houck, H. E. Türeci and J. Koch, Nat. Phys. 8, 292 (2012); M. H. Devoret, R. J. Schoelkopf,339, 1169-1174 (2013). CVsystems S. L. Braunstein and P. van Loock, Rev. Mod. Phys. 77, 513 (2005). FilipPRA2005MeasCVinter R. Filip, P. Marek, and U. L. Andersen, Phys. Rev. A 71, 042308 (2005). MiwaPRL2014 Y. Miwa, J.-i. Yoshikawa, N. Iwata, M. Endo, P. Marek, R. Filip, P. van Loock, and A. Furusawa, Phys. Rev. Lett. 113, 013601 (2014).LloydPRL1999 S. Lloyd and S. L. Braunstein, Phys. Rev. Lett. 82, 1784 (1999). SpillerNJP2006 T. P. Spiller,K. Nemoto, S. L. Braunstein, W. J. Munro, P. van Loock, and G. J. Milburn, New J. Phys. 8, 30 (2006). GKP D. Gottesman, A. Kitaev, and J. Preskill, Phys. Rev. A 64, 012310 (2001). MarekPRA2011Cubic P. Marek, R. Filip, and A. Furusawa, Phys. Rev. A 84, 053802 (2011). MiyataPRA2016 K. Miyata, H. Ogawa, P. Marek, R. Filip, H. Yonezawa, J.-i. Yoshikawa, and A. Furusawa, Phys. Rev. A 93, 022301 (2016). SefiPRL2011 S. Sefi and P. van Loock, Phy. Rev. Lett. 107, 170501 (2011). SefiPRA2013 S. Sefi, V. Vaibhav, and P. van Loock, Phys. Rev. A 88, 012303 (2013). YukawaPRA2013 M. Yukawa, K. Miyata, H. Yonezawa, P. Marek, R. Filip, and A. Furusawa, Phys. Rev. A 88, 053816 (2013). 
BEC M. Greiner, O. Mandel, T. W. Hänsch, and I. Bloch, Nature 419, 51 (2002).
cold_ions C. F. Roos, T. Monz, K. Kim, M. Riebe, H. Häffner, D. F. V. James, and R. Blatt, Phys. Rev. A 77, 040302(R) (2008).
KirchmairNat2013Kerr G. Kirchmair, B. Vlastakis, Z. Leghtas, S. E. Nigg, H. Paik, E. Ginossar, M. Mirrahimi, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, Nature 495, 205 (2013).
LeibfriedRMP2003Trappedions D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Rev. Mod. Phys. 75, 281 (2003).
XiangRMP2013hybridsuperconducting Z.-L. Xiang, S. Ashhab, J. Q. You, and F. Nori, Rev. Mod. Phys. 85, 623 (2013).
AspelmeyerRMP2014CavityOptomechanics M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Rev. Mod. Phys. 86, 1391 (2014).
ReisererRMP2015CavityNetwork A. Reiserer and G. Rempe, Rev. Mod. Phys. 87, 1379 (2015).
LodahlRMP2015PhotonicNanostructures P. Lodahl, S. Mahmoodian, and S. Stobbe, Rev. Mod. Phys. 87, 347 (2015).
NCstates P. Marek, L. Lachman, L. Slodička, and R. Filip, Phys. Rev. A 94, 013850 (2016).
ParkPRA2016JC K. Park, P. Marek, and R. Filip, Phys. Rev. A 94, 012332 (2016).
Park2016Rabi K. Park, P. Marek, and R. Filip, Phys. Rev. A 94, 062308 (2016).
SayrinNat2011atom C. Sayrin, I. Dotsenko, X. Zhou, B. Peaudecerf, T. Rybarczyk, S. Gleyzes, P. Rouchon, M. Mirrahimi, H. Amini, M. Brune, J.-M. Raimond, and S. Haroche, Nature 477, 73 (2011).
FiurasekPRA2009 J. Fiurášek, Phys. Rev. A 80, 053822 (2009).
ParkPRA2014Xgate K. Park, P. Marek, and R. Filip, Phys. Rev. A 90, 013804 (2014).
LloydARXIV2000Hybrid S. Lloyd, e-print arXiv:quant-ph/0008057 (2000).
HuangPRA2008 Y. P. Huang and M. G. Moore, Phys. Rev. A 77, 062332 (2008).
GleyzesNat2007dispersiveatom S. Gleyzes, S. Kuhr, C. Guerlin, J. Bernu, S. Deléglise, U. B. Hoff, M. Brune, J.-M. Raimond, and S. Haroche, Nature 446, 297 (2007).
GuerlinNat2007Dispersiveatom C. Guerlin, J. Bernu, S. Deléglise, C. Sayrin, S. Gleyzes, S. Kuhr, M. Brune, J.-M. Raimond, and S. Haroche, Nature 448, 889 (2007).
SchusterNat2007Superconducting D. I. Schuster, A. A. Houck, J. A. Schreier, A. Wallraff, J. M. Gambetta, A. Blais, L. Frunzio, J. Majer, B. Johnson, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Nature 445, 515 (2007).
ThompsonNat2008mechanical J. D. Thompson, B. M. Zwickl, A. M. Jayich, F. Marquardt, S. M. Girvin, and J. G. E. Harris, Nature 452, 72 (2008).
BlaisPRA2004cQEDsupcond A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A 69, 062320 (2004).
JohnsonNatPhys2010QND B. R. Johnson, M. D. Reed, A. A. Houck, D. I. Schuster, L. S. Bishop, E. Ginossar, J. M. Gambetta, L. DiCarlo, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, Nat. Phys. 6, 663 (2010).
berry M. V. Berry, Proc. Roy. Soc. A 392, 45 (1984); Y. Aharonov and J. Anandan, Phys. Rev. Lett. 58, 1593 (1987).
Loock2010 P. van Loock, Laser Photonics Rev. 5, 167 (2010).
supp The details of the derivation are presented in the Supplementary Material.
TurchettePRL1995ConditionalPhase Q. A. Turchette, C. J. Hood, W. Lange, H. Mabuchi, and H. J. Kimble, Phys. Rev. Lett. 75, 4710 (1995).
parameterCQED S. Krastanov, V. V. Albert, C. Shen, C.-L. Zou, R. W. Heeres, B. Vlastakis, R. J. Schoelkopf, and L. Jiang, Phys. Rev. A 92, 040303(R) (2015).
BocaPRL2004Dispersiveatom A. Boca, R. Miller, K. M. Birnbaum, A. D. Boozer, J. McKeever, and H. J. Kimble, Phys. Rev. Lett. 93, 233603 (2004).
cohstate D. Gottesman, A. Kitaev, and J. Preskill, Phys. Rev. A 64, 012310 (2001); F. Grosshans and P. Grangier, Phys. Rev. Lett. 88, 057902 (2002).
nonclassicality C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Rev. Mod. Phys. 84, 621 (2012); X.-B. Wang, T. Hiroshima, A. Tomita, and M. Hayashi, Phys. Rep. 448, 1 (2007).
Qcompcondition H. Jeong and M. S. Kim, Phys. Rev. A 65, 042305 (2002); T. C. Ralph, A. Gilchrist, G. J. Milburn, W. J. Munro, and S. Glancy, Phys. Rev. A 68, 042319 (2003).
KokPRA2008 P. Kok, Phys. Rev. A 77, 013808 (2008).
StobinskaPRA2008WignerSelfKerr M. Stobińska, G. J. Milburn, and K. Wódkiewicz, Phys. Rev. A 78, 013810 (2008).
NemotoPRL2004CNOT K. Nemoto and W. J. Munro, Phys. Rev. Lett. 93, 250502 (2004).
MilburnPRL1989Fredkin G. J. Milburn, Phys. Rev. Lett. 62, 2124 (1989).
ChuangPRA1995Controlledgate I. L. Chuang and Y. Yamamoto, Phys. Rev. A 52, 3489 (1995).
ImotoPRA1985 N. Imoto, H. A. Haus, and Y. Yamamoto, Phys. Rev. A 32, 2287 (1985).
MunroPRA2005QNDPNR W. J. Munro, K. Nemoto, R. G. Beausoleil, and T. P. Spiller, Phys. Rev. A 71, 033819 (2005).
HutchinsonJMO2004KerrOneway G. D. Hutchinson and G. J. Milburn, J. Mod. Opt. 51, 1211 (2004).
KokRMP2007Linear P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn, Rev. Mod. Phys. 79, 135 (2007).
entanglement W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998); P. M. Hayden, M. Horodecki, and B. M. Terhal, J. Phys. A: Math. Gen. 34, 6891 (2001).
Vidal G. Vidal and R. F. Werner, Phys. Rev. A 65, 032314 (2002).
Laurat2005Gaussian J. Laurat, G. Keller, J. A. Oliveira-Huguenin, C. Fabre, T. Coudreau, A. Serafini, G. Adesso, and F. Illuminati, J. Opt. B: Quantum Semiclassical Opt. 7, S577 (2005).
Rabi2 J. Casanova, G. Romero, I. Lizuain, J. J. García-Ripoll, and E. Solano, Phys. Rev. Lett. 105, 263603 (2010); S. De Liberato, ibid. 112, 016401 (2014).
Rabi3 A. Mezzacapo, U. Las Heras, J. S. Pedernales, L. DiCarlo, E. Solano, and L. Lamata, Sci. Rep. 4, 7482 (2014); D. Kienzler, C. Flühmann, V. Negnevitsky, H.-Y. Lo, M. Marinelli, D. Nadlinger, and J. P. Home, Phys. Rev. Lett. 116, 140402 (2016).
Rabi4 M. A. Sillanpää, J. I. Park, and R. W. Simmonds, Nature 449, 438 (2007); G. Günter, A. A. Anappara, J. Hees, A. Sell, G. Biasiol, L. Sorba, S. De Liberato, C. Ciuti, A. Tredicucci, A. Leitenstorfer, and R. Huber, ibid. 458, 178 (2009); T. Niemczyk, F. Deppe, H. Huebl, E. P. Menzel, F. Hocke, M. J. Schwarz, J. J. García-Ripoll, D. Zueco, T. Hümmer, E. Solano, A. Marx, and R. Gross, Nat. Phys. 6, 772 (2010); P. Forn-Díaz, J. Lisenfeld, D. Marcos, J. J. García-Ripoll, E. Solano, C. J. P. M. Harmans, and J. E. Mooij, Phys. Rev. Lett. 105, 237001 (2010); A. Baust, E. Hoffmann, M. Haeberlein, M. J. Schwarz, P. Eder, J. Goetz, F. Wulschner, E. Xie, L. Zhong, F. Quijandría, D. Zueco, J.-J. García Ripoll, L. García-Álvarez, G. Romero, E. Solano, K. G. Fedorov, E. P. Menzel, F. Deppe, A. Marx, and R. Gross, Phys. Rev. B 93, 214501 (2016).
Optomechanics1 M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Cavity Optomechanics (Springer-Verlag, Berlin, 2014).
Optomechanics2 A. M. Jayich, J. C. Sankey, B. M. Zwickl, C. Yang, J. D. Thompson, S. M. Girvin, A. A. Clerk, F. Marquardt, and J. G. E. Harris, New J. Phys. 10, 095008 (2008); A. Nunnenkamp, K. Børkje, J. G. E. Harris, and S. M. Girvin, Phys. Rev. A 82, 021806(R) (2010); J. C. Sankey, C. Yang, B. M. Zwickl, A. M. Jayich, and J. G. E. Harris, Nat. Phys. 6, 707 (2010); J.-Q. Liao and F. Nori, Sci. Rep. 4, 6302 (2014); D. Lee, M. Underwood, D. Mason, A. B. Shkarin, S. W. Hoch, and J. G. E. Harris, Nat. Commun. 6, 6232 (2014).
Optomechanics3 K. Park, P. Marek, and R. Filip, Phys. Rev. A 92, 033813 (2015).

§ ACKNOWLEDGMENT

We acknowledge Project GB14-36681G of the Czech Science Foundation. K.P. acknowledges support by the Development Project of the Faculty of Science, Palacký University.

§ AUTHOR CONTRIBUTIONS STATEMENT

K.P. conceived the theory. P.M. and R.F. conceived the quantification, interpreted the implications, and extended the scope. P.M. and R.F. led the project. All authors analyzed the results, wrote the article, and reviewed the manuscript.

§ ADDITIONAL INFORMATION

Supplementary information accompanies this paper at doi:; Competing financial interests: The authors declare that they have no competing interests.
http://arxiv.org/abs/1706.09020v1
{ "authors": [ "Kimin Park", "Petr Marek", "Radim Filip" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170627191950", "title": "Deterministic nonlinear gates with oscillators mediated by a qubit" }
Emergent transport in a many-body open system driven by interacting quantum baths

Juris Reisons, Eduardo Mascarenhas, and Vincenzo Savona

Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland

We analyze an open many-body system that is strongly coupled at its boundaries to interacting quantum baths. We show that the two-body interactions inside the baths induce emergent phenomena in the spin transport. The system and baths are modeled as independent spin chains resulting in a global non-homogeneous XXZ model. The evolution of the system-bath state is simulated using matrix-product-state methods. We present two phase transitions induced by bath interactions. For weak bath interactions we observe ballistic and insulating phases. However, for strong bath interactions a diffusive phase emerges with a distinct power-law decay of the time-dependent spin current Q ∝ t^-α. Furthermore, we investigate long-lasting current oscillations arising from the non-Markovian dynamics in the homogeneous case, and find a sharp change in their frequency scaling coinciding with the triple point of the phase diagram.

§ INTRODUCTION

Non-equilibrium dynamics of quantum many-body systems have recently become the subject of considerable theoretical investigation. Of particular interest has been the question, foundational to quantum statistical mechanics, of equilibration and thermalization of many-body systems arising from unitary dynamics <cit.>. Largely responsible for this surge in interest are breakthroughs in experimental methods in the field of ultracold atoms, which make it possible to reproduce model Hamiltonians with great accuracy and investigate their unitary dynamics with unprecedented insulation from the environment <cit.>. From this context, the study of non-equilibrium phase transitions has emerged as a field of its own. These transitions differ significantly from equilibrium transitions in that they are not well understood as arising from thermal or quantum fluctuations <cit.>, thus creating a need for new theoretical approaches <cit.>. The study of transport in boundary-driven 1D systems provides a suitable paradigm to study these critical phenomena. The XXZ spin chain is an attractive choice for this purpose, both for its relative simplicity and for its ability to accurately describe real materials <cit.>. Transport in the XXZ model has been investigated under the assumption of Markovian coupling. At low bias near infinite temperature, where linear response theory is valid, diffusive and ballistic transport phases have been observed, with a transition at the Heisenberg point <cit.>. Investigations at high bias have instead revealed a ballistic and an insulating phase, separated by a subdiffusive Heisenberg point <cit.>. The limitations of the Markovian approach are twofold. Firstly, the Markovian assumption is by definition valid only for weak coupling between system and bath. Secondly, in the case of weak coupling between system and bath but strong interactions within the system, a Markovian description is only available if one can obtain a full eigendecomposition of the system Hamiltonian, which may easily be beyond computational reach.
Indeed, to derive the master equation rigorously, all system operators in the system-bath coupling should be expressed in the interaction picture, which results in expressing them in the basis of eigenoperators of the system Hamiltonian <cit.>. If the couplings within the system are weak, the eigenoperators of the non-interacting system may be used as an ersatz, yielding a local-phenomenological master equation. Such an approach is, however, insufficient to model strong couplings, as has recently been shown in <cit.>. Modeling both bath and system within a Hamiltonian formalism provides instead a way to investigate the regime of strong coupling and strong system interactions. Such a methodological shift has already proved fruitful, with several studies having investigated the dynamics of quenches arising from the junction of two spin chains. The junction of two XXZ chains has recently been shown to give rise to ballistic and diffusive transport phases <cit.>. Motivated in part by the integrability of its dynamics, investigations of this setup have covered a large range of topics such as light cone velocities <cit.>, entanglement spreading <cit.>, energy transport arising from joining chains at different temperatures <cit.>, and emerging hydrodynamics <cit.>. Two-junction setups have also been studied: an XXZ chain coupled to two XX chains acting as magnetization reservoirs was found to behave similarly to the Markovian full-bias regime, with ballistic and insulating phases separated by a subdiffusive critical point <cit.>. An additional opportunity opened by purely Hamiltonian evolution that has yet to be addressed is the possibility of investigating systems coupled to interacting baths. Indeed, Markovian coupling requires the baths to be composed of non-interacting particles, and due to the prevalence of the Markovian paradigm in the field of open quantum systems, the effects of interactions in the baths have been left mostly unexplored. In this work, we present evidence of critical behavior arising from bath interactions in a strongly coupled boundary-driven spin chain.

§ MODEL

We study the dynamics of a tripartite XXZ chain, which is sketched in Fig. <ref>. The first and third parts of the chain play the role of positive and negative leads of a magnetization battery, and will be referred to as the battery leads. The middle part will be referred to as the system. We call N_B the length of the batteries and N_S the length of the system. Sites N_B and N_S + N_B are situated at the interfaces of battery leads and system, and will be referred to as the junctions. Unless specified otherwise, N_B = 1.5 N_S. The Hamiltonian for the entire chain can be expressed in terms of Pauli matrices as

H = ∑_{i=1}^{2N_B + N_S - 1} [ J(X_i X_{i+1} + Y_i Y_{i+1}) + U_i Z_i Z_{i+1} ],

where U_i = U_B if i ≤ N_B or i ≥ N_B + N_S, and U_i = U_S otherwise, with J the spin hopping rate and U_B, U_S the spin repulsions inside the battery leads and the system, respectively. At the start of the simulation, we prepare the battery leads in the |↑↑...↑↑↑⟩ and |↓↓...↓↓↓⟩ states. The system is prepared in the ground state of its XXZ Hamiltonian, |G⟩. The initial state of the whole chain is thus |Ψ⟩ = |↑↑...↑↑↑⟩|G⟩|↓↓...↓↓↓⟩. The dynamics resulting from this initial state can be understood as the result of two local quenches occurring at the junctions. These quenches spawn excitations that propagate throughout the chain. Our global Hamiltonian being non-homogeneous, it is not solvable by Bethe Ansatz techniques.
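As an illustration of the model just defined, the following minimal sketch (our own construction, not the authors' code) builds the non-homogeneous XXZ Hamiltonian of the equation above and the initial state |↑...↑⟩|G⟩|↓...↓⟩ by brute-force exact diagonalization with numpy. This route is only feasible for very small chains; all function names and the parameter values used here are illustrative assumptions.

# Illustrative sketch: tripartite XXZ Hamiltonian and product initial state
# by exact diagonalization (tiny chains only; not the authors' tMPS code).
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def two_site(op_a, op_b, i, n):
    """Embed op_a on site i and op_b on site i+1 of an n-site chain."""
    ops = [I2] * n
    ops[i], ops[i + 1] = op_a, op_b
    return reduce(np.kron, ops)

def xxz_hamiltonian(n_b, n_s, J=1.0, U_B=1.5, U_S=0.5):
    n = 2 * n_b + n_s
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        # bond i+1 (1-based) lies in a lead if i+1 <= N_B or i+1 >= N_B + N_S
        U = U_B if (i + 1 <= n_b or i + 1 >= n_b + n_s) else U_S
        H += J * (two_site(X, X, i, n) + two_site(Y, Y, i, n))
        H += U * two_site(Z, Z, i, n)
    return H

def initial_state(n_b, n_s, J=1.0, U_S=0.5):
    # ground state |G> of the isolated system block (all bonds carry U_S)
    H_s = xxz_hamiltonian(0, n_s, J=J, U_B=0.0, U_S=U_S)
    _, vecs = np.linalg.eigh(H_s)
    g = vecs[:, 0]
    up = np.array([1, 0], dtype=complex)
    down = np.array([0, 1], dtype=complex)
    lead_up = reduce(np.kron, [up] * n_b)
    lead_down = reduce(np.kron, [down] * n_b)
    return reduce(np.kron, [lead_up, g, lead_down])

psi0 = initial_state(n_b=2, n_s=4)
H = xxz_hamiltonian(n_b=2, n_s=4)
print(H.shape, round(np.linalg.norm(psi0), 6))  # (256, 256) 1.0

For the chain sizes studied in the paper (N_S up to 50), such dense matrices are of course far out of reach, which is why the evolution is instead carried out with matrix-product-state methods, as described next.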
We rely instead on DMRG methods, which have proven efficient at simulating local quenches. Simulation of the system is performed using time-dependent matrix product state techniques (tMPS). Time evolution is performed using a second-order Trotter-Suzuki decomposition with time step dt = 0.05/J and maximal bond dimension D = 500. The transport properties are studied by computing the spin currents Q_i = 2J⟨X_i Y_{i+1} - Y_i X_{i+1}⟩, which appear in the continuity equation ⟨Ż_i⟩ = Q_{i-1} - Q_i. Of particular interest are the current at the positive lead junction, which we denote as Q, and the current in the middle of the system, Q_m. The time dependence of Q reveals two distinct transient regimes. We denote by τ_1 and τ_2 the ends of the two transient regimes, and by Q(τ ≥ τ_2) the quasi-steady-state current.

In previous literature there have been two main strategies for characterising the type of transport: the spreading of local excitations <cit.> and the system-size scaling of persistent currents <cit.>. We note that both approaches may be directly linked, as discussed in <cit.>, and in principle only one of the above criteria should be enough to characterize the transport; however, we have found in practice, by performing both analyses, that the time behaviour gives more consistent conclusions. Let us assume the current to scale with the system size as Q ∝ N^{-γ}. If γ = 0 the system is a perfect ballistic conductor, γ < 1 indicates super-diffusion, γ = 1 diffusion, and γ > 1 sub-diffusion. This also translates into the time behaviour of the current. In the spirit of the spreading of inhomogeneities, we consider the total magnetization transferred from one of the baths, ΔZ(t) = ∫_0^t Q(τ)dτ ∝ t^δ, such that δ = 1 indicates ballistic transport, δ > 1/2 super-diffusion, δ = 1/2 diffusion, and δ < 1/2 sub-diffusion. Furthermore, if Q(t) ∝ t^{-α} we may identify α = 1 - δ. A relation between γ and α may be expected; however, we find no obvious functional form.

We point out that the phenomenological master-equation driving in <cit.> ensures persistent currents even outside the ballistic phase. In contrast, our simulations, which explicitly model the bath, do not guarantee that currents will persist in the infinite-time limit. Therefore, even though we make an effort to relate the current work to the finite-size scaling in <cit.>, we find the time behaviour of the current to be a more appropriate object for study.

§ CONJECTURED PHASE DIAGRAM

In Fig. <ref> we present junction currents in the second transient regime as a function of U_S and U_B obtained for a system of size N_S = 20. At a glance, one sees a square area of high current defined by max(U_B, U_S) < 1. We show that this area exhibits ballistic transport at and below the line U_S = U_B and super-diffusive transport above the line. Outside the square, another separation can be seen along the U_S = U_B line, with much greater current above it than below. This motivates us to distinguish two additional phases: a sub-diffusive phase above the line and an insulating phase below. We show that the current Q(τ) has power-law time decay in the generalized-diffusive phases but exponential decay in the insulating phase. It should be noted that the anomalous-diffusive phases are a novel feature, contingent on the presence of interactions in the bath. The above description of the phase diagram is specific regarding the type of diffusion found in each region. However, our focus here is not the precise determination of the anomalous diffusion exponents, since these are also plagued by numerical and finite-size effects; a minimal sketch of the exponent extraction is given below.
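To make the exponent-extraction step concrete, the following hedged sketch (our own, not the authors' analysis code) compares a power-law fit Q ∝ t^{-α} with an exponential fit Q ∝ e^{-t/τ} by least squares in log space on a synthetic current trace. The variable names, the synthetic data, and the noise model are illustrative assumptions.

# Illustrative sketch: power-law vs exponential fits of a current trace Q(t)
# for t > tau_2, via linear regression in log space.
import numpy as np

def fit_power_law(t, Q):
    """Return (alpha, A) from a linear fit of log Q = log A - alpha log t."""
    slope, intercept = np.polyfit(np.log(t), np.log(Q), 1)
    return -slope, np.exp(intercept)

def fit_exponential(t, Q):
    """Return (tau, B) from a linear fit of log Q = log B - t/tau."""
    slope, intercept = np.polyfit(t, np.log(Q), 1)
    return -1.0 / slope, np.exp(intercept)

def rms_log_error(Q, Q_fit):
    return np.sqrt(np.mean((np.log(Q) - np.log(Q_fit)) ** 2))

# synthetic example: algebraic decay with multiplicative noise
rng = np.random.default_rng(0)
t = np.linspace(10.0, 100.0, 200)
Q = 2.0 * t ** -0.5 * rng.lognormal(0.0, 0.05, t.size)

alpha, A = fit_power_law(t, Q)
tau, B = fit_exponential(t, Q)
err_pow = rms_log_error(Q, A * t ** -alpha)
err_exp = rms_log_error(Q, B * np.exp(-t / tau))
print(f"alpha = {alpha:.2f}, power-law error = {err_pow:.3f}, "
      f"exponential error = {err_exp:.3f}")  # the power law wins here

Comparing the two fit errors, as done here on synthetic data, is the same criterion used below to discriminate the insulating (exponential) from the diffusive (algebraic) decay.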
Therefore, we note that in some cases we refer to all the diffusive-type phases simply as diffusive when it comes to differentiating them from the ballistic and insulating phases. Fig. <ref> presents magnetization and current profiles characteristic of the three phases. A few general features of the dynamics can be noted. In all phases, one can see two light cones arising from the quenches at the junctions. This structure gives rise to two transient regimes of the junction current. The first regime lasts until the light cone from one junction crosses the system and hits the opposite junction. We refer to this time as τ_1. Behavior in all phases is similar in this regime: current starts to flow from both leads into the system. The dynamics of this regime are those of a single battery-system junction. It is instead the second transient regime and the quasi-steady-state that reveal the differences between the phases. In contrast to the first transient, their behavior is dictated by the interference of the two light cones. In the ballistic phase, the merging of the light cones gives rise to a finite value of the current and a smooth magnetization profile. In the insulating phase, we instead observe destructive interference causing a sharp drop of the current to 0. The magnetization profile displays staggered order in the system and a sharp step in the middle. In the diffusive phase, we observe instead a remarkably different evolution of the profile. The magnetization gradient in the system can actually be reversed, with ⟨Z_i⟩ < 0 close to the positive lead, and vice-versa at the negative lead. In addition, the net drop of the current to 0 is much slower, with fast oscillations. These differences in the current time dependence and the magnetization profiles provide evidence that the diffusive phase is a novel phase induced by bath interactions. To complement these qualitative observations, we provide a finite-size scaling analysis of the ballistic-insulating and ballistic-diffusive transitions, as well as quantitative evidence for the distinct dynamical signatures at the insulating-diffusive transition.

In Fig. <ref> we address the ballistic-insulating transition and show the time evolution of the current Q, which takes persistent non-vanishing values only for U_S ≤ 1. The dynamical behaviour gives a very sharp indicator of the transition. Finite-size scaling of the quasi-steady-state current was investigated along the ballistic-insulating transition. The non-Markovian dynamics induce oscillations of the current around its average even at long times. For this reason we fit the time-averaged current Q̅ = 1/(T - τ_2) ∫_{τ_2}^{T} Q(t)dt with respect to system size. Fig. <ref> presents the results of a power-law fit for the ballistic phase and an exponential fit for the insulating phase. For U_S < 1, the vanishing exponent is a clear indication of system-size independence and ballistic behavior. However, we recognize that finite-size effects give a small but nonzero exponent, especially closer to the transition at U_S = 1. At the transition, we observe approximately normal diffusion, γ ≈ 0.9. Above the transition point the values of the current are small and MPS truncation errors become relevant, especially for large system sizes.
Our scaling data would suggest very weak diffusion; however, due to the fast dynamical drop of the current in this regime, our best interpretation is that an exponential scaling emerges: in this circumstance, we consider dynamical features to be better indicators than the scaling. This motivates our choice of reporting the exponential fits in Fig. <ref>. All these findings are similar to what was found in the U_B = 0 case in <cit.>, suggesting the bath interaction plays no meaningful role in this region of the phase diagram. The same procedure was performed for the ballistic-diffusive transition. In Fig. <ref> we show the time dynamics across the ballistic-diffusive transition as we increase the bath interaction. Decaying power laws emerge as we approach U_B = 1. Our data suggest the diffusion point to be at U_B ≈ 0.85; however, due to finite-size simulations and truncation errors, our results do not allow us to pinpoint the precise location of the diffusive point, nor to decide whether or not the weak power laws preceding it are just due to finite-size effects. What we have certainly established is that for very small U_B the system is a ballistic conductor, and as we increase U_B the system turns into a diffusive and even sub-diffusive conductor. Results for the finite-size scaling are presented in Fig. <ref>. We find weak system-size dependence for U_B < 1, which becomes stronger approaching U_B ≥ 1. Here, however, the finite-size scaling seems to be a less meaningful analysis: it would suggest super-diffusive behaviour, whereas the fast power laws in Fig. <ref> indicate sub-diffusion. Regardless of the precise exponents and transition point, the fact that the bath interactions induce generic diffusive behaviour is evident.

We now turn to quantifying the difference in time dependence between the diffusive and insulating phases. We investigate data points on a line perpendicular to U_S = U_B, which we parametrize by s as (U_B, U_S) = (1.5, 1.5) + (s/√2)(1, -1). The results are presented in Fig. <ref>. On the left, the current as a function of time is drawn for a point in each phase and a point on the diagonal for N_S = 50. Inside the insulating phase (s = -0.3) we can see the dynamical signature of this regime, which is the fast drop of the current towards zero. Exactly at the diagonal (s = 0) we have the transition point, at which we can see two distinct features. Persistent fast oscillations are the trait of the transition point. These oscillations carry, however, an envelope given by a time-algebraic decay Q ∝ t^{-α}, which is the signature of the novel diffusive phase that extends above the diagonal (s = 0.3). We have fitted the time evolution of the current after the interference of the light cones both with an exponential and with a power law. The power-law and exponential fit coefficients and errors are presented on the right. As one can see from the fit errors, s < 0 is better described by an exponential decay, while at the transition and beyond, s ≥ 0, the power law is a better description. The two features are consistent with insulating and diffusive transport, respectively, and therefore concur with the results of finite-size scaling. Finally, we compare our protocol to the one in <cit.> in the case in which bath and system interactions are the same, resulting in a homogeneous Hamiltonian with inhomogeneous initial conditions. Our results in Fig. <ref> indicate ballistic transport below U < 1 with a sharp transition to sub-diffusion, while the results in <cit.> indicate normal diffusion.
We note that our findings do not contradict <cit.>, since the initial conditions are markedly different.

§ OSCILLATIONS AND TRAPPED QUASI-PARTICLES

As one can notice in Fig. <ref>, oscillations of the current appear in the system after the light cones collide. The domain where these oscillations occur is identical to the domain where the magnetization is close to 0. Thus, these oscillations spatially expand in the ballistic phase, but remain localized inside the system in both the diffusive and insulating phases. To further characterize these oscillations, we investigate the midsection current Q_m = Q_{N_B + N_S/2} and the Fourier transform of its oscillations around the mean, Q̂_m(ν). For a system of size N_S = 50, we place the beginning of the Fourier analysis at t = 10/J. We focus on the homogeneous system U_S = U_B = U, which includes points from the ballistic phase as well as the diffusive-insulating phase boundary. Figs. <ref> and <ref> present closeups of the currents in the system for the ballistic phase and the diffusive-insulating phase boundary on the left; Q̂_m(ν) is presented on the right. We distinguish three main oscillations. The higher-frequency peak, with a frequency between 1.2 and 1.6, is responsible for the checkerboard pattern visible in both current pictures. The middle peak, with a frequency between 0.3 and 0.6, is best visible in the ballistic phase, where it is responsible for the larger pattern visible in Fig. <ref>. The lower peak, with a frequency between 0 and 0.3, only appears on the phase boundary, and is responsible for the pattern in Fig. <ref>. It is of much higher amplitude than the medium oscillation, and thus overshadows it in this regime, although all three peaks are discernible in the spectrum. A remarkable feature of these oscillations is their persistence in the long-time limit, which is only possible in the diffusive phase due to the very slow relaxation to a stationary state. This feature is of course absent in the phenomenological master-equation description, and is an indicator of the non-Markovian character of the strong-coupling regime studied here. As previously mentioned, the low-frequency oscillation is unique in that it remains trapped in the middle of the system. The combination of its localization and long-time persistence suggests an analogy to classical solitons. These non-trivial phenomena highlight the relevance of studying the strong-coupling non-Markovian regime from both a theoretical and an experimental point of view. The frequencies of the oscillations are well described by the following expressions:

ν_low = α max(U - J, 0),
ν_medium = α min(U, J),
ν_high = α (2J + max(U - J, 0)).

Fig. <ref> presents the result of fitting the data using a single α for all three frequencies. The qualitative change of the frequency scaling is remarkable and coincides with the change from the ballistic phase to the insulating-diffusive phase boundary.

§ CONCLUSION

We have presented a study of the effects of bath interactions on the transport phases of a non-Markovian boundary-driven spin chain. Behavior for U_B < 1 was analogous to previous results for non-interacting baths <cit.>. For U_B ≥ 1 we have presented evidence for bath-interaction-induced transitions to a novel diffusive phase, which we characterized by power-law finite-size scaling. Most importantly, we have shown that this diffusive phase has a distinctive long-time algebraic decay of the current Q ∝ t^{-α}.
Along U_B = U_S, ballistic and diffusive behavior was found, analogous to results from the single-junction case <cit.>. In the homogeneous system, we have characterized the oscillations arising from the non-Markovian aspect of the strong coupling. Their amplitude was found not to decay at long times, and spatial localization of one of the oscillations was observed. The scaling of the oscillation frequencies was found to change sharply at the Heisenberg point, coinciding with the transition from the ballistic phase to the insulating-diffusive boundary. These findings attest to the relevance of considering non-Markovian coupling that goes beyond the local-phenomenological master-equation treatment. As a future perspective, it would be interesting to further explore the quasi-particle picture to better understand the interference of the light cones and the nature of the trapped oscillations we have observed. One possible route is the Bethe-based hydrodynamical approach recently introduced and applied to integrable regimes <cit.>.
http://arxiv.org/abs/1706.08272v3
{ "authors": [ "Juris Reisons", "Eduardo Mascarenhas", "Vincenzo Savona" ], "categories": [ "quant-ph", "cond-mat.quant-gas" ], "primary_category": "quant-ph", "published": "20170626081622", "title": "Emergent transport in a many-body open system driven by interacting quantum baths" }
Cauchy Tetrahedron Argument and the Proofs of the Existence of Stress Tensor, a Comprehensive Review, Challenges, and Improvements

Ehsan Azadi

Department of Mechanical Engineering, Sharif University of Technology, Iran. E-mail address: [email protected]

In 1822, Cauchy presented the idea of the traction vector, which contains both the normal and tangential components of the internal surface forces per unit area, and gave the tetrahedron argument to prove the existence of the stress tensor. These great achievements form the main part of the foundation of continuum mechanics. For about two centuries, some versions of the tetrahedron argument and a few other proofs of the existence of the stress tensor have been presented in every text on continuum mechanics, fluid mechanics, and the relevant subjects. In this article, we show the birth, importance, and location of these achievements of Cauchy; then, by presenting the formal tetrahedron argument in detail, for the first time we extract some fundamental challenges. These conceptual challenges are related to the result of applying the conservation of linear momentum to any mass element, the order of magnitude of the surface and volume terms, the definition of the traction vectors on the surfaces that pass through the same point, the approximate processes in the derivation of the stress tensor, and some others. In a comprehensive review, we present the different tetrahedron arguments and proofs of the existence of the stress tensor, discuss the challenges in each one, and classify them in two general approaches. In the first approach, which is followed in most texts, the traction vectors are not exactly defined on the surfaces that pass through the same point, so most of the challenges hold. But in the second approach, the traction vectors are defined on the surfaces that pass exactly through the same point, and therefore some of the relevant challenges are removed. We also study the improved works of Hamel and Backus, and indicate that the original work of Backus removes most of the challenges. This article shows that the foundation of continuum mechanics is not a finished subject and there are still some fundamental challenges.

§ INTRODUCTION

In 1822, for the first time, Cauchy in his lecture announced that the forces on the surface of an internal mass element in continuum media, in addition to the normal component on the surface, can have tangential components. An abstract of his lecture was published in 1823 <cit.>. In the translation of Cauchy's lecture from the French by Maugin (2014, <cit.>), on page 50, we have:

However, the new “pressure” will not always be perpendicular to the faces on which it act, and is not the same in all directions at a given point. …Furthermore, the pressure or tension exerted on any plane can easily be deduced, in both amplitude and direction, from the pressures or tensions exerted on three given orthogonal planes. I had reached this point when M. Fresnel, who came to me to talk about his works devoted to the study of light …

Here the new pressure is the traction vector that acts on the internal surface and contains both the normal and tangential components. Cauchy's works in continuum mechanics from 1822 to 1828 led to the derivation of the Cauchy lemma for traction vectors, the existence of the stress tensor, the Cauchy equation of motion, the symmetry of the stress tensor, and some other achievements in the foundation of continuum mechanics <cit.>.
Cauchy's proof of the existence of the stress tensor is called the Cauchy tetrahedron argument. From Truesdell (1971, <cit.>), on page 8:

CAUCHY's theorem of the existence of the stress tensor, published in 1823. CAUCHY, who knew full well the difference between a balance principle and a constitutive relation, stated the result clearly and proudly; he gave a splendid proof of it, which has been reproduced in every book on continuum mechanics from that day to this; and he recognized the theorem as being the foundation stone it still is.

On the importance of Cauchy's idea of the traction vector and the tetrahedron argument for the existence of the stress tensor, Truesdell (1968, <cit.>), on page 188, says:

Clearly this work of Cauchy's marks one of the great turning points of mechanics and mathematical physics, even though few writers on the history of that subject seem to know it, a turning point that could well stand comparison with Huygens's theory of the pendulum, Newton's theory of the solar system, Euler's theory of the perfect fluid, and Maxwell's theories of the monatomic gas and the electromagnetic field.

This article gives a comprehensive review of the tetrahedron arguments and the proofs of the existence of the stress tensor that have been presented during about two centuries, from 1822 until now, in many books and articles on continuum mechanics, fluid mechanics, solid mechanics, elasticity, strength of materials, etc. There are several different methods and processes in the literature to prove the existence of the stress tensor and to present the Cauchy tetrahedron argument. We extract some fundamental challenges on these proofs and discuss these challenges in each one. To enter the subject, we first show the location of the Cauchy tetrahedron argument for the existence of the stress tensor in the general steps of the foundation of continuum mechanics. Then, a formal proof of the Cauchy tetrahedron argument according to the accepted reference books will be given. We extract some fundamental challenges on this proof and discuss their importance in the foundation of continuum mechanics. Then we review the different proofs in the literature and discuss their challenges. During this review, we also show the general approaches, the important works, and their improvements.

§ LOCATION OF CAUCHY TETRAHEDRON ARGUMENT IN THE FOUNDATION OF CONTINUUM MECHANICS

Although the birth of modern continuum mechanics is considered to be Cauchy's idea in 1822 <cit.>, some remarkable achievements were obtained earlier by famous mathematical physicists like Daniel Bernoulli, Euler, D'Alembert, Navier, Poisson, and others. In general, these achievements can be addressed as the splitting of forces into body forces and surface forces, the definition of pressure as the normal surface force per unit area, the consideration of the internal mass element in continuum media, the Euler equation of motion, etc. But it was the genius of Cauchy to use the idea of his friend Fresnel, who worked on optics, in continuum mechanics and to develop the idea of the traction vector, the existence and properties of the stress tensor, and the general equation of motion <cit.>. Cauchy's achievements were quickly taken as the foundation of continuum mechanics and the relevant subjects such as fluid mechanics, solid mechanics, elasticity, mechanics of deformable bodies, strength of materials, etc. <cit.>.
Recently, a good representation and description of Cauchy's papers and of the situation of continuum mechanics at that time was given by Maugin (2014, <cit.>). The general steps that lead to the general concept of stress in continuum mechanics can be described as follows. Some of these steps were developed before Cauchy, and others were developed or revised by Cauchy based on the new idea of the traction vector that contains both the normal and tangential components on the surface.

* The forces that apply to a fluid or solid element in continuum media can be split into the surface forces (F_s) and the body forces (F_b), (before Cauchy). F = F_s + F_b

* The surface force can be formulated as a surface force per unit area, which is called pressure and is normal to the surface on which it acts, (before Cauchy).

* The surface force per unit area, in addition to the normal component (t_n), can have tangential components (t_t). This general surface force per unit area is called the traction vector, (by Cauchy in 1822). t = t_n e_n + t_t e_t

* The traction vector depends only on the position vector (r), time (t), and the outward unit normal vector (n) of the surface on which it acts in continuum media, (by Cauchy). t = t(r, t, n)

* The traction vectors acting on opposite sides of the same surface at a given point and time are equal in magnitude but opposite in direction. This is called the Cauchy lemma, (by Cauchy). t(r, t, n) = -t(r, t, -n)

* The Cauchy tetrahedron argument states that the relation between the traction vector on a surface and the unit normal vector of that surface is linear, and this leads to the existence of a second order tensor that is called the stress tensor. The stress tensor T depends only on the position vector and time, (by Cauchy). t = T^T.n, where T = T(r, t) = [ T_xx T_xy T_xz; T_yx T_yy T_yz; T_zx T_zy T_zz ]

* Applying the conservation of linear momentum to a mass element in continuum media leads to the general differential equation of motion, which is called the Cauchy equation of motion, (by Cauchy). ρa = ∇.T + ρb, or ρ(∂v/∂t + (v.∇)v) = ∇.T + ρb, where ρ, b, a, and v are the density, body force per unit mass, acceleration, and velocity, respectively.

* The conservation of angular momentum shows that the stress tensor is symmetric, (by Cauchy). T_xy = T_yx, T_xz = T_zx, T_yz = T_zy, or T = T^T

These steps show the location of the Cauchy tetrahedron argument for the existence of the stress tensor in the foundation of continuum mechanics.

§ CAUCHY TETRAHEDRON ARGUMENT AND THE CHALLENGES

The following representation of the Cauchy tetrahedron argument is based on two remarkable reference books on continuum mechanics, i.e., “Truesdell and Toupin, The Classical Field Theories, pp. 542-543” (1960, <cit.>) and “Malvern, Introduction to the Mechanics of a Continuous Medium, pp. 73-77” (1969, <cit.>). Here we give more details to show the process clearly.

§.§ Cauchy tetrahedron argument

Consider a tetrahedron element in continuum media whose vertex is at the point o and whose three orthogonal faces are parallel to the three orthogonal planes of the Cartesian coordinate system. The fourth surface of the tetrahedron, i.e., its base, has the outward unit normal vector n_4. The geometrical parameters and the average values of the traction vectors on the faces of the tetrahedron are shown in Figure <ref>.

[Figure: The geometry of the tetrahedron element and the average traction vectors on the faces.]
The integral equation of conservation of linear momentum on a mass element M in continuum media is:

∫_∂M t dS + ∫_M ρb dV = ∫_M ρa dV

Now this law is applied to the tetrahedron mass element. By averaging the variables on the volume and faces of the tetrahedron element, the equation (<ref>) becomes:

t_4 Δs_4 + t_1 Δs_1 + t_2 Δs_2 + t_3 Δs_3 + ρb ΔV = ρa ΔV

where the superscripts indicate the average values of these terms. The following geometrical relations for the faces and volume of the tetrahedron hold:

Δs_1 = n_x Δs_4, Δs_2 = n_y Δs_4, Δs_3 = n_z Δs_4, ΔV = (1/3) h Δs_4

where n_x, n_y, and n_z are the components of the outward unit normal vector on Δs_4, i.e., n_4 = n_x e_x + n_y e_y + n_z e_z. Here h is the altitude of the tetrahedron. By substituting these geometrical relations into the equation (<ref>):

t_4 Δs_4 + t_1 (n_x Δs_4) + t_2 (n_y Δs_4) + t_3 (n_z Δs_4) + ρb (1/3 h Δs_4) = ρa (1/3 h Δs_4)

and dividing through by Δs_4:

t_4 + n_x t_1 + n_y t_2 + n_z t_3 + ρb (1/3 h) = ρa (1/3 h)

Now decrease the volume of the tetrahedron element, ΔV → 0, in such a way that n_4 and the position of the vertex point of the tetrahedron (point o) do not change. As a result, h → 0 and the tetrahedron shrinks to a point. So, in this limit, the body force and inertia terms in the equation (<ref>) go to zero and the average traction vectors go to the exact values. The result is:

t_4 + n_x t_1 + n_y t_2 + n_z t_3 = 0

The traction vector t_1 is applied to the surface Δs_1 with the unit normal vector n_1 = -e_x. Using the Cauchy lemma, i.e., t(r, t, n) = -t(r, t, -n):

t(n_1) = -t(-n_1)

but -n_1 = +e_x is the unit normal vector on the positive side of the coordinate plane yz. If t_x is the traction vector on the positive side of the coordinate plane yz, then by using the equation (<ref>):

t_1 = -t_x

The same strategy for t_2 and t_3 leads to:

t_2 = -t_y, t_3 = -t_z

By substituting these relations into the equation (<ref>):

t_4 + n_x (-t_x) + n_y (-t_y) + n_z (-t_z) = 0

so

t_4 = n_x t_x + n_y t_y + n_z t_z

The traction vectors t_x, t_y, and t_z can be shown by their components as:

t_x = T_xx e_x + T_xy e_y + T_xz e_z
t_y = T_yx e_x + T_yy e_y + T_yz e_z
t_z = T_zx e_x + T_zy e_y + T_zz e_z

By substituting these definitions into the equation (<ref>):

t_4 = n_x (T_xx e_x + T_xy e_y + T_xz e_z) + n_y (T_yx e_x + T_yy e_y + T_yz e_z) + n_z (T_zx e_x + T_zy e_y + T_zz e_z)

or

t_4 = (n_x T_xx + n_y T_yx + n_z T_zx) e_x + (n_x T_xy + n_y T_yy + n_z T_zy) e_y + (n_x T_xz + n_y T_yz + n_z T_zz) e_z

This can be shown as a relation between a second order tensor and a vector:

t_4 = [ t_x; t_y; t_z ]_4 = [ n_x T_xx + n_y T_yx + n_z T_zx; n_x T_xy + n_y T_yy + n_z T_zy; n_x T_xz + n_y T_yz + n_z T_zz ] = [ T_xx T_xy T_xz; T_yx T_yy T_yz; T_zx T_zy T_zz ]^T [ n_x; n_y; n_z ]_4

therefore

t_4 = T^T . n_4

In forming the tetrahedron element, none of the components of n_4 is zero. For unit normal vectors with one or two components equal to zero, the tetrahedron element does not form, but due to the continuity of the traction vectors in n and the arbitrary choice of the orthogonal basis of the coordinate system, the equation (<ref>) is valid for these cases as well. So, the subscript 4 can be removed from this equation:

t = T^T . n

This equation shows that there is a second order tensor, called the stress tensor, for describing the state of stress. This tensor, T = T(r, t), depends only on the position vector and time. Also, the relation between the traction vector on a surface and the unit normal vector of that surface is linear. Here the tetrahedron argument is finished.
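As a concrete check of the linearity just derived, the following small numerical sketch (ours, not part of the original argument) verifies that, for a given stress tensor, the traction computed from t = T^T.n on an inclined face coincides with the combination n_x t_x + n_y t_y + n_z t_z of the tractions on the coordinate planes. The numerical values of T and n_4 are arbitrary illustrative choices.

# Illustrative numerical check of t(n) = T^T . n for an inclined face.
import numpy as np

T = np.array([[2.0, 0.5, 0.1],   # rows hold the components of t_x, t_y, t_z
              [0.5, 1.0, 0.3],   # (a symmetric example, consistent with the
              [0.1, 0.3, 3.0]])  # conservation of angular momentum)

t_x, t_y, t_z = T[0], T[1], T[2]          # tractions on the coordinate planes

n4 = np.array([1.0, 2.0, 2.0]) / 3.0      # unit normal of the inclined face
t4_tensor = T.T @ n4                      # t_4 = T^T . n_4
t4_sum = n4[0] * t_x + n4[1] * t_y + n4[2] * t_z

assert np.allclose(t4_tensor, t4_sum)     # linearity in n holds
print(t4_tensor)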
This argument and its result have a great importance and role in the foundation of continuum mechanics. The following statements are not elements of the tetrahedron argument; we state them to show the two other important achievements of Cauchy in the foundation of continuum mechanics. Cauchy applied the conservation of linear momentum to a “cubic element” and, using his previous achievements, derived the general equation of motion that is called the Cauchy equation of motion <cit.>.

ρa = ∇.T + ρb, or ρ(∂v/∂t + (v.∇)v) = ∇.T + ρb

Also, by applying the conservation of angular momentum to a “cubic element”, he showed that the stress tensor is symmetric <cit.>.

T = T^T

§.§ The challenges

During the study of the presented tetrahedron argument, we found some conceptual challenges. In the following, we present and discuss them.

* Challenge 1: Note that applying the conservation of linear momentum to any mass element of any shape must lead to the general equation of motion that contains all of the effective terms, including inertia, body forces, and surface forces (the Cauchy equation of motion). But in this argument, applying the conservation of linear momentum to the tetrahedron element leads to the equation (<ref>), i.e., t_4 + n_x t_1 + n_y t_2 + n_z t_3 = 0, which differs from the equation of motion (<ref>), because the inertia and body force terms do not exist in it. We saw that after presenting the tetrahedron argument, Cauchy and most of the authors derived the equation of motion by applying the conservation of linear momentum to a cubic element. What is the problem? Applying the conservation of linear momentum to a tetrahedron element leads to the equation t_4 + n_x t_1 + n_y t_2 + n_z t_3 = 0, while the same process on a cubic element leads to the Cauchy equation of motion.

* Challenge 2: The tetrahedron argument is based on the limit ΔV → 0, which is stated by all of the authors who presented this argument by expressions like “ΔV → 0”, “h → 0”, “when the tetrahedron shrinks to a point”, or “when the tetrahedron shrinks to zero volume”, while it must be proved that the existence of the stress tensor at a point does not depend on the size of the considered mass element. In other words, the stress tensor exists for any size of mass element in continuum media, whether the volume of the element increases, decreases, or does not change. By these proofs, the result is valid only for infinitesimal volumes, and they did not show that this result can be applied to a mass element with any volume in continuum media.

* Challenge 3: This tetrahedron argument is based on the average values of the effective terms in the integral equation of conservation of linear momentum, and even in the limit ΔV → 0 this trend remains, while the stress tensor and the traction vector relations are point-based and must be derived from the exact point values.

* Challenge 4: During the tetrahedron argument we have the equation (<ref>):

t_4 + n_x t_1 + n_y t_2 + n_z t_3 + ρb (1/3 h) = ρa (1/3 h)

If we rewrite this equation as follows and take the limit:

lim_{h → 0} ( (t_4 + n_x t_1 + n_y t_2 + n_z t_3) / (1/3 h) ) = lim_{h → 0} (ρa - ρb)

clearly the right hand side limit exists, because ρa - ρb is bounded and is not generally equal to zero in continuum media. So, the left hand side limit must exist and is not generally equal to zero. This implies that the order of magnitude of the numerator is h, i.e.:

O(t_4 + n_x t_1 + n_y t_2 + n_z t_3) = h

so t_4 + n_x t_1 + n_y t_2 + n_z t_3 and ρa (1/3 h) - ρb (1/3 h) have the same order of magnitude, that is, h.
This means that as h → 0 these two parts decrease to zero at the same rate, and we cannot say that the inertia and body terms go to zero faster than the surface terms. Since O(Δs_4) = h^2 and O(ΔV) = h^3, we have:

O(Δs_4 (t_4 + n_x t_1 + n_y t_2 + n_z t_3)) = O(t_4 Δs_4 + t_1 Δs_1 + t_2 Δs_2 + t_3 Δs_3) = h^3

and

O(ρa ΔV - ρb ΔV) = h^3

so we cannot say that if ΔV → 0 or h → 0 then the surface terms go to zero as h^2 and the inertia and body terms go to zero as h^3, because these two parts have the same order of magnitude, i.e., h^3, as shown above in (<ref>) and (<ref>).

* Challenge 5: The purpose of the Cauchy tetrahedron argument is to show that the traction vector at a point on a surface is a linear combination of the traction vectors on the three orthogonal surfaces that pass through that point. So, the four surfaces must pass through the same point to prove this relation between their traction vectors. But in the tetrahedron argument, t_4 is defined on the surface Δs_4, which does not pass through the vertex point of the tetrahedron, through which the three surfaces Δs_1, Δs_2, and Δs_3 pass; see Figure <ref>.

[Figure: Inclined surface that is parallel to Δs_4 and passes through point o.]

* Challenge 6: The stress tensor is a point-based function. This means that at any point in continuum media the stress tensor exists. So, in the equation t_4 + n_x t_1 + n_y t_2 + n_z t_3 = 0, the four traction vectors must belong to a single point in order to conclude from the tetrahedron argument that t_4 is related to a tensor formed by the components of t_1, t_2, and t_3. But in this proof, the surface on which t_4 is defined, i.e., Δs_4, does not pass through point o, even for an infinitesimal tetrahedron element; see Figure <ref>.

* Challenge 7: The result of this argument is the equation (<ref>), i.e., t_4 + n_x t_1 + n_y t_2 + n_z t_3 = 0, for an infinitesimal tetrahedron. Here the traction vectors are the average values on the faces of this infinitesimal tetrahedron. If we multiply this equation by Δs_4, the base area of the tetrahedron, the result is:

Δs_4 (t_4 + n_x t_1 + n_y t_2 + n_z t_3) = t_4 Δs_4 + t_1 Δs_1 + t_2 Δs_2 + t_3 Δs_3 = 0

but this is equal to the integral of t over the surface of the element M, so:

t_4 Δs_4 + t_1 Δs_1 + t_2 Δs_2 + t_3 Δs_3 = ∫_∂M t dS = 0

where M is the infinitesimal tetrahedron element. This equation states that for the infinitesimal tetrahedron element the sum of the traction vectors on the surfaces of this element is zero. This means that the surface forces have no effect on the motion and acceleration of the element, because their sum on the faces of the element is zero. But this is not correct, since for any volume of the mass element, even an infinitesimal volume, the equation of conservation of linear momentum (<ref>), the following equation, holds and tells us that this sum is not zero:

∫_∂M t dS + ∫_M ρb dV = ∫_M ρa dV

* Challenge 8: In the previous challenge, the equation (<ref>), t_4 Δs_4 + t_1 Δs_1 + t_2 Δs_2 + t_3 Δs_3 = 0, states that the sum of the surface forces on the faces of the infinitesimal tetrahedron element is zero.
So, it tells nothing about the relation between the traction vectors at a point on four different surfaces that pass through that point, because clearly t_4 is defined on Δs_4 and this surface does not pass through point o, even for an infinitesimal tetrahedron element; see Figure <ref>.

More discussion will be given in the next sections.

§ A COMPREHENSIVE REVIEW

The tetrahedron argument for the existence of the stress tensor has been followed by many significant scientists and authors during about two centuries, from 1822 to the present, in several different versions. These proofs lead to the linear relation between the traction vector and the unit outward normal vector of the surface. This argument shows that the stress tensor exists and is independent of the surface characteristics. In the following, we show the different processes to prove this argument that exist in many textbooks on continuum mechanics and the relevant subjects such as fluid dynamics, solid mechanics, elasticity, plasticity, strength of materials, mathematical physics, etc.

§.§ The first approach

Stokes, in the famous article (1845, <cit.>), uses the Cauchy tetrahedron argument. On page 295:

…Suppose now the dimensions of the tetrahedron infinitely diminished, then the resolved parts of the external and of the effective moving forces will vary ultimately as the cubes, and those of the pressures and tangential forces as the squares of homologous lines. …The method of determining the pressure on any plane from the pressures on three planes at right angles to each other, which has just been given, has already been employed by MM. Cauchy and Poisson.

So, from the part “now the dimensions of the tetrahedron infinitely diminished”, we can tell that Stokes's proof is based on an infinitesimal volume. In this expression, the inertia and body terms “vary ultimately as the cubes” and the surface terms vary “as the squares of homologous lines”, while we showed in challenge 4 that the surface terms and the inertia and body terms vary by the same order of magnitude, that is, h^3.

Let us see what is presented in the important book by Love (1908). On pages 76-78 of the fourth edition of this book (1944, <cit.>), during the tetrahedron argument:

∭ ρ f_x dx dy dz = ∭ ρ X dx dy dz + ∬ X_v dS (1)

[where f_x, X, and X_v are acceleration, body force, and surface traction, respectively, all in the x direction. The comments in the brackets [ ] are given by the author of the present article.]

46. Law of equilibrium of surface tractions on small volumes. From the forms alone of equations (1) …we can deduce a result of great importance. Let the volume of integration be very small in all its dimensions, and let l^3 denote this volume. If we divide both members of equation (1) by l^2, and then pass to a limit by diminishing l indefinitely, we find the equation lim_{l → 0} l^{-2} ∬ X_v dS = 0 …The equations of which these are types can be interpreted in the statement: “The tractions on the elements of area of the surface of any portion of a body, which is very small in all its dimensions, are ultimately, to a first approximation, a system of forces in equilibrium.” …For a first approximation, when all the edges of the tetrahedron are small, we may take the resultant traction of the face [Δs_4]…

So, here on these pages of Love's book, we see clearly the important challenges that are stated in the previous section.
For example, “For a first approximation”, “when all the edges of the tetrahedron are small”, “Law of equilibrium of surface tractions on small volumes”, “Let the volume of integration be very small”, and, most clearly, the important statement inside the quotation marks, which means that, to a first approximation, the summation of the traction vectors on the surfaces of any portion of a body is zero when the portion is very small. We find that Love's book is very important because it clearly and correctly represents the classical continuum mechanics in detail. For example, on these pages he has correctly stated that the results of the Cauchy tetrahedron argument and the relation between the traction vectors are approximate, hold for a very small portion of the body, and that the relation between the traction vectors is for surfaces of the mass element that do not pass through the same point. If instead of “divide both members of equation (1) by l^2” we divide them by l^3, then the limit l → 0 gives:

lim_{l → 0} l^{-3} ∬ X_v dS = lim_{l → 0} l^{-3} ∭ ρ (f_x - X) dx dy dz = ρ (f_x - X)

Similar to challenge 4, here ρ (f_x - X) is a bounded value and is not generally equal to zero in continuum media. Therefore, for the existence of the limit on the left hand side, the order of magnitude of the surface integral must be equal to l^3, i.e.:

O(∬ X_v dS) = l^3

which is equal to the order of magnitude of the volume integrals. So, the surface tractions are not in equilibrium even on small volumes, but are equal to the volume terms including inertia and body forces. By dividing “both members of equation (1) by l^2”, the order of magnitude of these two parts is l; thus in the “limit by diminishing l indefinitely”, these two parts go to zero at the same rate. This is the trivial solution of the equation and cannot be a rigorous basis for the existence of the stress tensor. The proofs in some books are similar to Love's proof, for example Planck (1932, <cit.>), Serrin (1959, <cit.>), Aris (1989, <cit.>), Marsden and Hughes (1994, <cit.>), Ogden (1997, <cit.>), Leal (2007, <cit.>), Gonzalez and Stuart (2008, <cit.>). As a sample, in the book “Vectors, Tensors, and the Basic Equations of Fluid Mechanics” (1989, <cit.>) by Aris, the proof on pages 100-101 is:

The principle of the conservation of linear momentum … d/dt ∭ ρv dV = ∭ ρf dV + ∬ t_(n) dS (5.11.3) …Suppose V is a volume of given shape with characteristic dimension d. Then the volume of V will be proportional to d^3 and the area of S to d^2, with the proportionality constants depending only on the shape. Now let V shrink on a point but preserve its shape, then the first two integrals in Eq. (5.11.3) will decrease as d^3 but the last will be as d^2. It follows that lim_{d → 0} (1/d^2) ∬ t_(n) dS = 0 (5.11.5) or, the stresses are locally in equilibrium. To elucidate the nature of the stress system at a point P we consider a small tetrahedron with three of its faces parallel to the coordinate planes through P and the fourth with normal n. …Then applying the principle of local equilibrium [Eq. (5.11.5)] to the stress forces when the tetrahedron is very small we have t_(n) dA - t_(1) dA_1 - t_(2) dA_2 - t_(3) dA_3 = (t_(n) - t_(1) n_1 - t_(2) n_2 - t_(3) n_3) dA = 0. Now let T_ji denote the i^th component of t_j and t_(n)i the i^th component of t_(n) so that this equation can be written t_(n)i = T_ji n_j.

Let us see what is presented for the existence of the stress tensor in Timoshenko's books.
In the book “Timoshenko and Goodier, Theory of Elasticity, 1934”, on page 213 according to the 1951 publication <cit.>:

…If these components of stress at any point are known, the stress acting on any inclined plane through this point can be calculated from the equations of statics [They considered only the case where the acceleration is zero and the body forces can be neglected, so there is no volume integral]. Let O be a point of the stressed body and suppose the stresses are known for the coordinate planes …(Fig. 132). To get the stress for any inclined plane through O, we take a plane BCD parallel to it at a small distance from O, so that this latter plane together with the coordinate planes cuts out from the body a very small tetrahedron BCDO. Since the stresses vary continuously over the volume of the body, the stress acting on the plane BCD will approach the stress on the parallel plane through O as the element is made infinitesimal. In considering the conditions of equilibrium of the elemental tetrahedron [the acceleration is zero] the body forces can be neglected. Also as the element is very small we can neglect the variation of the stresses over the sides and assume that the stresses are uniformly distributed …

Timoshenko repeated almost the same process and comments in another book, “History of Strength of Materials” (1953, <cit.>). So, in these books we have the average values of the traction vectors on the tetrahedron's faces, and the traction vector on the base surface of the tetrahedron (surface BCD) is regarded as the traction vector on the inclined surface that is parallel to the surface BCD and passes through point O. Therefore, most of the challenges hold. Also, this proof is limited to the cases in which the mass element is in equilibrium (the acceleration is zero) and the body forces are neglected. A similar process and assumptions are used for the tetrahedron argument by Prandtl and his coauthors (English translation 2004, <cit.>). The proofs of the existence of the stress tensor in some books are based on a nearly similar process and assumptions, for example Sommerfeld (1950, <cit.>), Biot (1965, <cit.>), Feynman, Leighton, and Sands (1965, <cit.>) (using a wedge instead of a tetrahedron), Borg (1966, <cit.>), Calcote (1968, <cit.>), Flügge (1972, <cit.>), Arfken (1985, <cit.>), Brekhovskikh and Goncharov (1994, <cit.>), Salencon (2001, <cit.>), Kundu, Cohen, and Dowling (2012, <cit.>), and Chaves (2013, <cit.>).

Let us see what is presented by Truesdell and Toupin in the very important book “The Classical Field Theories, pp. 542-543” (1960, <cit.>):

…Let the altitude of the tetrahedron be h; the area of the inclined face [Δs_4], …We may then estimate the volume integrals in (200.1) [the integral equation of conservation of linear momentum] and apply the theorem of mean value to the surface integral: Δs_4 (n_1 t_1^* + n_2 t_2^* + n_3 t_3^* + t_(n)^*) + h Δs_4 K = 0, (203.1) where K is a bound and where t_(n)^* [the traction vector on Δs_4] and t_a^* [t_1^*, t_2^*, and t_3^*] are the stress vectors at certain points upon the outsides of the respective faces. We cancel Δs_4 and let h tend to zero, so obtaining t_(n) = -(t_1 n_1 + t_2 n_2 + t_3 n_3), (203.2) where all stress vectors are evaluated at the vertex of the tetrahedron.

So, the expressions “then estimate the volume integrals”, “apply the theorem of mean value to the surface integral”, and “let h tend to zero” show the challenges presented in the previous section.
Here the statement in the last line, “where all stress vectors are evaluated at the vertex of the tetrahedron”, is not exactly established; it is only an approximate result of this process, because t_(n) is defined on the base surface of the tetrahedron (Δ s_4), and this surface does not pass exactly through the vertex of the tetrahedron even when h tends to zero. In the book “Introduction to the Mechanics of a Continuous Medium” (1969, <cit.>) by Malvern, on pages 73-76:…Imagine …a tetrahedron or triangular pyramid bound by parts of the three coordinate planes through O and a fourth plane ABC not passing through O, ……The asterisks indicate average values; thus b^* is the average value of the body force per unit mass in the tetrahedron. t^(n)* is the average value of the surface traction per unit area on the oblique face; ……then the altitude h will be allowed to approach zero so that the volume and the four surface areas simultaneously approach zero, while the orientation of ON and the position of O do not change. We postulate the continuity of all the components of the stress vectors and the body force and the density as functions of position; it follows that the average values will approach the local values at the point O, and the result will be an expression for the traction vector t^(n) at the point O in terms of the three special surface stress vectors t^(k) at O … t^(n)*Δ S + ρ^* b^* Δ V - t^(1)*Δ S_1 - t^(2)*Δ S_2 - t^(3)*Δ S_3 = ρ^*Δ V dv^*/dt. …dividing through by Δ S, and rearranging terms we obtain t^(n)* + (1/3)hρ^* b^* = t^(1)* n_1 +t^(2)* n_2 + t^(3)* n_3 + (1/3)h ρ^* dv^*/dt. We now let h approach zero. The last term in each member then approaches zero, while the vectors in the other terms approach the vectors at the point O as is indicated by dropping the asterisks. The result is in the limit t^(n) = t^(1) n_1 +t^(2) n_2 + t^(3) n_3 =t^(k)n_k.(3.2.7)This important equation permits us to determine the traction t^(n) at a point, acting on an arbitrary plane through the point, when we know the tractions on only three mutually perpendicular planes through the point. Note that this result was obtained without any assumption of equilibrium. It applies just as well in fluid dynamics as in solid mechanics.This proof is similar to the tetrahedron argument presented in the previous section to introduce the Cauchy tetrahedron argument, so all of the stated challenges apply to it as well; see, for example, the phrases “plane ABC not passing through O”, “asterisks indicate average values”, “the average values will approach the local values at the point O”, and “let h approach zero”.
Note that the postulate in the last paragraph is not exact; as Love stated <cit.>, it holds only to a first approximation.The tetrahedron arguments in many books closely resemble the proofs presented by Truesdell and Toupin (1960, <cit.>) and Malvern (1969, <cit.>), for example Jaunzemis (1967, <cit.>), Ilyushin and Lensky (1967, <cit.>), Rivlin (1969, <cit.>), Wang (1979, <cit.>), Eringen (1980, <cit.>), Narasimhan (1993, <cit.>), Chandrasekharaiah and Debnath (1994, <cit.>), Shames and Cozzarelli (1997, <cit.>), Mase (1999, <cit.>), Kiselev, Vorozhtsov, and Fomin (1999, <cit.>), Batchelor (2000, <cit.>), Basar and Weichert (2000, <cit.>), Guyon, Hulin, Petit, and Mitescu (2001, <cit.>), Haupt (2002, <cit.>), Talpaert (2002, <cit.>), Jog (2002, <cit.>), Spencer (2004, <cit.>), Hutter and Jöhnk (2004, <cit.>), Han-Chin (2005, <cit.>), Antman (2005, <cit.>), Batra (2006, <cit.>), Dill (2007, <cit.>), Graebel (2007, <cit.>), Irgens (2008, <cit.>), Bonet and Wood (2008, <cit.>), Nair (2009, <cit.>), Wegner and Haddow (2009, <cit.>), Lai, Rubin, and Krempl (2010, <cit.>), Epstein (2010, <cit.>), Slawinski (2010, <cit.>), Reddy (2010, <cit.>), Lautrup (2011, <cit.>), Dimitrienko (2011, <cit.>), Capaldi (2012, <cit.>), Byskov (2013, <cit.>), Rudnicki (2015, <cit.>), and others.In the book “Introduction to the Mechanics of a Continuous Medium” (1965, <cit.>) by Sedov, on pages 130-131:…Consider the volume V as an infinitesimal tetrahedron …with faces MCB, MAB, and MAC perpendicular to the coordinate axes and with face ABC arbitrarily determined by an externally directed unit normal vector …The stresses on the areas with the normals ∋_1, ∋_2, ∋_3, and n are denoted by p^1, p^2, p^3, and p_n, respectively.…In fact, applying (4.7) [the integral equation of conservation of linear momentum] to the masses of the volume that are inside the infinitesimal tetrahedron MABC at the instant in question, we obtain (ρa-ρF)·(1/3)Sh =(-p^1 S cos (n∋_1) -p^2 S cos (n∋_2)-p^3 S cos (n∋_3)+p_n · S )+S·O(h), where S is the area of the bounding surface ABC [Δ s_4], and h is the infinitesimal height of the tetrahedron; O(h) is a quantity which tends to zero for h → 0. Approaching the limit, as h → 0, we obtainp_n=p^1 cos (n∋_1) +p^2 cos (n∋_2)+p^3 cos (n∋_3)(4.10) In this book we see “O(h)”, which represents the first-order approximation. In addition, the “infinitesimal height of the tetrahedron” and “tends to zero for h → 0” show that this proof, like those in the earlier books, holds only for an infinitesimal tetrahedron. Nearly the same process is given in Sedov's other book (1971, <cit.>).In the book “Theoretical Elasticity” (1968, <cit.>) by Green and Zerna, on page 70: ∫_τρ (F_i-ω̇_i) dτ + ∫_A t_i dA =0,(2.7.7)…We consider a tetrahedron element bounded by the coordinate planes at the point y_i and a plane whose unit normal is n_k measured from inside to outside of the tetrahedron. If we apply (2.7.7) to this tetrahedron and take the limit as the tetrahedron tends to zero with n_k being unaltered we have t_i =n_k σ_ki,(2.7.9) Provided the contributions from the volume integrals may be neglected compared with the surface integrals, in the limit.So the challenges related to “tetrahedron tends to zero” and “volume integrals may be neglected compared with the surface integrals, in the limit”, and to the definition of traction vectors on surfaces that do not pass through the same point, remain.A more general proof is provided by Gurtin and his coauthors <cit.>.
Here we reproduce it from the book “The Mechanics and Thermodynamics of Continua” (2010, <cit.>) by Gurtin, Fried, and Anand, pages 137-138:A deep result central to all of continuum mechanics is …Cauchy's theorem … t(a,x)=-∑_i=1^3 (a.e_i)t(-e_i,x)(19.24)PROOF. Let x belong to the interior of ℬ_t. Choose δ>0 and consider the (spatial) tetrahedron Γ_δ with the following properties: The faces of Γ_δ are S_δ, S_1δ, S_2δ, and S_3δ, where a and -e_i are the outward unit normals on S_δ and S_iδ, respectively; the vertex opposite to S_δ is x; the distance from x to S_δ is δ. Then, Γ_δ is contained in the interior of ℬ_t for all sufficiently small δ, say δ≤δ_0.Next, if we assume that b [generalized body term including the inertia and body force] is continuous, then b is bounded on Γ_δ. If we apply the force balance (19.16) to the material region P occupying the region Γ_δ in the deformed region at time t, we are then led to the estimate |∫_∂Γ_δt(n) da |= |∫_Γ_δbdv | ≤ k vol(Γ_δ)(19.25)for all δ≤δ_0, where k is independent of δ.Let A(δ) denote the area of S_δ. Since A(δ) is proportional to δ^2, while vol(Γ_δ) is proportional to δ^3, we may conclude from (19.25) that 1/A(δ)∫_∂Γ_δt(n) da →0 as δ→ 0. But ∫_∂Γ_δt(n) da = ∫_S_δt(a) da + ∑_i=1^3 ∫_S_iδt(-e_i) daand, assuming that t(n,x) is continuous in x for each n, since the area of S_iδ is A(δ)(a.e_i), 1/A(δ)∫_S_δt(a) da →t(a,x) and 1/A(δ)∫_S_iδt(-e_i) da → (a.e_i)t(-e_i,x).Combining the relations above we conclude that (19.24) is satisfied. This proof is based on an infinitesimal volume, and in the limit δ→ 0 the traction vector on the base surface of the tetrahedron is regarded as the traction vector on the inclined surface that passes through the vertex point of the tetrahedron. The process leading to 1/A(δ)∫_∂Γ_δt(n) da →0is similar to Love's proof, discussed in detail above. Therefore, some of the challenges remain. The proofs in several books follow nearly the same process, for example Ciarlet (1988, <cit.>), Smith (1993, <cit.>), Huilgol and Phan-Thien (1997, <cit.>), Atkin and Fox (2005, <cit.>), Oden (2011, <cit.>), Bechtel and Lowe (2015, <cit.>). A different proof appears in the literature, introduced by the statement “This proof was furnished by W. Noll (private communication) in 1967” in the chapter “The Linear Theory of Elasticity” by Gurtin in the book <cit.>. The same proof was later presented in two other books, by Truesdell (1997, <cit.>) and Liu (2002, <cit.>). This proof is based on the properties of a linear transformation on a vector space <cit.>. In the book by Leigh (1968, <cit.>), it is stated that a transformation T on a vector space with the following properties is usually called a “linear transformation or tensor”. On page 28 of this book <cit.>:…linear transformation T …is defined by (a) T(u+v)= T(u)+T(v)(b) T(αv)= αT(v)(2.8.1) Thus, Noll's proof aims to establish these properties for the traction vectors; they must be derived from the integral equation of conservation of linear momentum. The proof is nearly identical in the three books that present it <cit.>. Here we reproduce it from the first book <cit.>, pages 48-49:…for any x∈ B we can extend the function s(x,.) to all of V as follows: s(x,v)=|v|s(x,v/|v|) v≠0,s(x,0)=0. (a) Let α be a scalar. If α > 0, thens(αv)=|αv|s(αv/|αv|)= α|v|s(v/|v|)= αs(v),(b) where we have omitted the argument x. If α < 0, then (b) and Cauchy's reciprocal theorem (2) [s(n)= -s(-n)] yield s(αv)= s(|α|(-v))= |α|s(-v)=αs(v).
Thus s(x,.) is homogeneous.To show that s(x,.) is additive we first note that s(x,w_1+w_2)= s(x,w_1) + s(x,w_2)whenever w_1 and w_2 are linearly dependent. Suppose then that w_1 and w_2 are linearly independent. Fix ϵ > 0 and consider π_1, the plane through x_0 with normal w_1; π_2, the plane through x_0 with normal w_2; and π_3, the plane through x_0+ϵw_3 with normal w_3, where w_3=-(w_1+w_2).(c)[figure omitted] Consider the solid 𝒜=𝒜(ϵ) bounded by these three planes and two planes parallel to both w_1 and w_2 and a distance δ from x_0 (see Fig. 7). Let ϵ and δ be sufficiently small that 𝒜 is a part of B. Then ∂𝒜=⋃^5_i=1𝒲_i,where 𝒲_i is contained in π_i (i =1,2,3), and 𝒲_4 and 𝒲_5 are parallel faces. Moreover, a_i = |w_i|/|w_3|a_3 (i=1, 2), a_3 = O(ϵ)asϵ→ 0,υ (𝒜)=ϵ/2|w_3| a_3 = 2 δ a_4=2 δ a_5, where a_i is the area of 𝒲_i. Thus, by the continuity of s_n, c≡|w_3|/a_3∫_∂𝒜s_nda = ∑^3_i=1|w_i|/a_i∫_𝒲_is(x,w_i/|w_i|) da_x + O(ϵ)as ϵ→ 0,and (a) implies c= ∑^3_i=1s(x_0,w_i)+ o(1)as ϵ→ 0. On the other hand, we conclude from estimate (a) [ |∫_∂ Ps_nda |≤ k υ (P) whereυ (P) is the volume of P] in the proof of (2) [s(n)= -s(-n)] that c=O(ϵ)as ϵ→ 0.The last two results yield∑^3_i=1s(x_0,w_i) =0;since s(x,.) is homogeneous, this relation and (c) imply that s(x_0,.) is additive. Thus s(x_0,.) is linear, and, since x_0 ∈ B was arbitrarily chosen, Noll's proof is complete.This creative proof by Noll offers new insight into the mathematical properties of traction vectors. Nevertheless, it is still based on a limited volume and holds only for an infinitesimal mass element, as the expression “ϵ→ 0” shows. Moreover, average values of the traction vectors are used, and in the limit the traction vector on the surface π_3 is regarded as the traction vector on the parallel surface through the point x_0. So some of the challenges remain.In the book “Nonlinear Continuum Mechanics” (1968, <cit.>), Leigh also uses the properties of a linear transformation to prove the existence of the stress tensor, but with a construction different from that used in Noll's proof. On pages 129-130:t=f(x,n)(7.5.5)∫_∂χtda + ∫_χbρdv=∫_χẍρdv(7.5.6) Next we prove Cauchy's fundamental theorem for the stresst=f(x,n)=T(x)n (7.5.7) [figure omitted]that is, the stress vector t at x acting on the surface with direction n is a linear transformation of n. The linear transformation or tensor T is called the stress tensor. Consider the elemental tetrahedron of Fig. 7.5.2 …The values of the stress vectors on the faces are given by (7.5.5), where we use the same x, since we are going to allow the tetrahedron to shrink to the point x in the limit. Thus we havet=f(x,n) t_i=f(x,n_i)(7.5.8)Thus applying (7.5.6) to the elemental tetrahedron in the limit as A,A_i → 0, we note that volume integrals are negligible compared with the surface integrals. The surface integral yieldst= -1/A (A_1t_1+A_2t_2+A_3t_3)(7.5.9) Now a closed surface S satisfies the condition ∫_Snda = 0 (7.5.10) Applying (7.5.10) to our elemental tetrahedron, we get n= -1/A (A_1n_1+A_2n_2+A_3n_3)(7.5.11)Combining (7.5.8), (7.5.9), and (7.5.11), we have, suppressing x, f(-1/A A_in_i ) = -1/A A_if(n_i)(7.5.12)and we see that f(n) satisfies the definition (2.8.1) of a linear transformation [the two properties of a linear transformation presented before Noll's proof above], which proves (7.5.7).In this proof, Leigh uses three linearly independent traction vectors rather than the two linearly independent vectors used in Noll's proof.
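The linearity construction shared by Noll's and Leigh's proofs is easy to check numerically. The following minimal sketch (our own illustration, not taken from the cited books; the stress tensor T and the tetrahedron intercepts a, b, c are arbitrary choices) verifies that a traction field of the tensorial form t(n) = Tn satisfies the closed-surface condition (7.5.10), the tetrahedron relation, and the two properties (2.8.1); these are the consistency conditions that the proofs above establish in the converse direction:

```python
import numpy as np

# A hypothetical stress tensor at the point of interest (any 3x3 matrix works here).
T = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.4],
              [0.1, 0.4, 0.9]])

def t(v):
    """Traction map v -> T v (Noll's extension s(v) = |v| s(v/|v|) of the unit-normal map)."""
    return T @ np.asarray(v, dtype=float)

e = np.eye(3)

# Tetrahedron with vertex at the origin and axis intercepts a, b, c.
a, b, c = 1.0, 2.0, 1.5
P1, P2, P3 = a*e[0], b*e[1], c*e[2]

# Inclined face: outward normal and area from the cross product of its edge vectors.
w = np.cross(P2 - P1, P3 - P1)
A = 0.5*np.linalg.norm(w)
n = w/np.linalg.norm(w)

# Coordinate faces: areas with outward unit normals -e1, -e2, -e3.
A1, A2, A3 = 0.5*b*c, 0.5*a*c, 0.5*a*b

# Closed-surface condition (7.5.10)-(7.5.11): area-weighted outward normals sum to zero.
print(np.allclose(A*n - A1*e[0] - A2*e[1] - A3*e[2], 0.0))             # True

# Tetrahedron relation: t(n) = n1 t(e1) + n2 t(e2) + n3 t(e3).
print(np.allclose(t(n), n[0]*t(e[0]) + n[1]*t(e[1]) + n[2]*t(e[2])))   # True

# Properties (2.8.1): additivity and homogeneity of the extended map.
rng = np.random.default_rng(1)
u, v, alpha = rng.normal(size=3), rng.normal(size=3), -2.7
print(np.allclose(t(u + v), t(u) + t(v)),
      np.allclose(t(alpha*u), alpha*t(u)))                             # True True
```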
Compared with the previous proofs, which use a tetrahedron element with three orthogonal faces, Leigh's proof does not require the faces to be orthogonal. But like the previous proofs, it is based on an infinitesimal volume and is a consequence of the limit A → 0. Here the statement “volume integrals are negligible compared with the surface integrals” reflects challenge 4, so some of the challenges remain. The proof in the book by Lurie (2005, <cit.>) is similar. §.§ The second approachDuring this comprehensive review of a large number of books on continuum mechanics and related subjects, we found that there are two general approaches to the tetrahedron arguments and the proofs of the existence of the stress tensor. In the first approach, the traction vectors and body terms are not defined at the same point: the traction vector on the base surface of the infinitesimal tetrahedron (Δ s_4) is regarded as the traction vector on the inclined surface that is parallel to Δ s_4 and passes through the vertex point of the tetrahedron. So the challenges concerning the equation t_4+n_xt_1+n_yt_2+n_zt_3=0, and most of the other stated challenges, hold. Almost all the proofs in the previous subsection belong to the first approach, and most of the tetrahedron arguments and proofs of the existence of the stress tensor are based on it.In the second approach, the traction vectors and body terms are explicitly defined at the same point, e.g., in the tetrahedron arguments, at the vertex point (o). Then, by an approximate process for an infinitesimal tetrahedron, the equation t_4_o+n_xt_1_o+n_yt_2_o+n_zt_3_o=0 is obtained. In this approach all the traction vectors in the equation are defined exactly at the same point (o), on the different surfaces that pass through this point. Here some of the challenges are removed, for example challenges 6, 7, and 8, which concern the definition of traction vectors at different points in the equation t_4+n_xt_1+n_yt_2+n_zt_3=0. But these proofs still rely on an approximate process and are limited to an infinitesimal tetrahedron, so the other relevant challenges remain. Only a few authors in continuum mechanics have followed this approach: Muskhelishvili 1933 (English translation 1977, <cit.>), Sokolnikoff (1946, <cit.>), Fung (1965, <cit.> and 1969, <cit.>), Godunov and Romenskii (1998, <cit.>), and Temam and Miranville (2000, <cit.>). The proofs presented in these books are all very similar; here we present Muskhelishvili's proof and Fung's proof as two examples. In the book “Some Basic Problems of the Mathematical Theory of Elasticity” (1933) by Muskhelishvili, on pages 8-10 of the English translation (1977, <cit.>):Through the point M draw three planes, parallel to the coordinate planes, and in addition, another plane having the normal n and lying a distance h from M. These four planes form a tetrahedron, three faces of which are parallel to the coordinate planes, while the fourth ABC [Δ s_4] is the face to be considered. …the transition to the limit h→ 0 the size of the tetrahedron will be assumed infinitely small.[here (X_x, Y_x, Z_x), (X_y, Y_y, Z_y), (X_z, Y_z, Z_z), and (X_n, Y_n, Z_n) are the components of traction vectors at the point M on the four surfaces with unit normal vectors e_x, e_y, e_z, and n, respectively. X, Y, Z are the components of body terms at the point M.]…The projection of the body force equals (X+ϵ)dV, where dV is the volume of the tetrahedron.
The value X refers to the point M and ϵ is an infinitely small quantity …Further, the projection of the tractions, acting on the face ABC is (X_n+ϵ')σ where σ denotes the area of the triangle ABC [Δ s_4] and ϵ' is again infinitely small; X_n, Y_n, Z_n, as will be remembered, are the components of the stress vector acting on the plane through M with normal n.Finally the projection of the external forces acting on MBC, normal to Ox, is (- X_x + ϵ_1) σ_1 where σ_1 is the area of MBC. …For the sides MCA and MAB one obtains similarly (-X_y+ϵ_2) σ_2 and (-X_z+ϵ_3) σ_3 respectively. Here ϵ_1, ϵ_2 and ϵ_3 denote again infinitesimal quantities. [So, the conservation of linear momentum in x direction is:](X+ϵ)(1/3)hσ + (X_n+ϵ')σ + (- X_x + ϵ_1) σcos (n,x) +(- X_y + ϵ_2) σcos (n,y) + (- X_z + ϵ_3) σcos (n,z)= 0. Dividing by σ and taking the limit h→ 0 one obtains the following formulae …[similarly in y and z directions]: X_n = X_x cos (n,x) + X_y cos (n,y) + X_z cos (n,z) Y_n = Y_x cos (n,x) + Y_y cos (n,y) + Y_z cos (n,z) Z_n = Z_x cos (n,x) + Z_y cos (n,y) + Z_z cos (n,z)So, the traction vector on the inclined surface that passes exactly through point M is obtained by an approximate process and by “taking the limit h→ 0”. In the book “A First Course in Continuum Mechanics, 1969” by Fung, on pages 69-71 of the third edition, (1994, <cit.>):Let us consider an infinitesimal tetrahedron formed by three surfaces parallel to the coordinate planes and one normal to the unit vector v. Let the area of the surface normal to v be dS. …The forces in the positive direction of x_1, acting on the three coordinate surfaces, can be written as (-τ_11 + ϵ_1)dS_1,(-τ_21 + ϵ_2)dS_2,(-τ_31 + ϵ_3)dS_3,where τ_11, τ_21, τ_31 are the stresses at the vertex P opposite to dS. The negative sign is obtained because the outer normals to the three surfaces are opposite in sense with respect to the coordinate axes, and the ϵ's are inserted because the tractions act at points slightly different from P. If we assume that the stress field is continuous, then ϵ_1, ϵ_2, ϵ_3 are infinitesimal quantities. On the other hand, the force acting on the triangle normal to v has a component (T_1 + ϵ)dS in the positive x_1-axis direction, the body force has an x_1-component equal to (X_1 + ϵ')dv, and the rate of change of linear momentum has a component ρV̇_1 dv, where V̇_1, is the component of acceleration in the direction of x_1. Here, T_1 and X_1 refer to the point P, and ϵ and ϵ' are again infinitesimal. The first equation of motion is thus (-τ_11 + ϵ_1)v_1dS + (-τ_21 + ϵ_2)v_2dS +(-τ_31 + ϵ_3)v_3dS + (T_1 + ϵ)dS + (X_1 + ϵ')(1/3)hdS = ρV̇_1 (1/3)hdS.(3.3-3)Dividing through by dS, taking the limit as h → 0, and noting that ϵ_1, ϵ_2, ϵ_3, ϵ, ϵ' vanish with h and dS, one obtains T_1=τ_11v_1 + τ_21v_2 + τ_31v_3,(3.3-4) Fung has also discussed the error of this approximate process. On page 71, <cit.>:Checking Acceptable Errors …We claimed that the sum of the terms ϵ_1v_1 + ϵ_2v_2 + ϵ_3v_3+ ϵ + 1/3(ϵ'-ρV̇_1)(3.3-5)is small, compared with the terms that are retained; i.e.,T_1,τ_11v_1,τ_21v_2,τ_31v_3, (3.3-6)when we take Eq. (3.3-3) to the limit as h → 0 and Δ S → 0. Now, if we are not allowed to take the limit as h → 0 and Δ S → 0, but instead we are restricted to accept h no smaller than a constant h^* and Δ S no smaller than a constant multiplied by (h^*)^2, then the quantity listed in line (3.3-5) must be evaluated for h = h^* and Δ S = const.·(h^*)^2 and must be compared with the quantities listed in line (3.3-6).
A standard of how small is negligible must be defined, and the comparison be made under that definition. If we find the quantity in line (3.3-5) negligible compared with those listed in line (3.3-6), then we can say that Eq. (3.3-3) or Eq. (3.3-2) [T_i=v_jτ_ji] is valid. This tedious step should be done, in principle, to apply the continuum theory to objects of the real world.§.§ Advanced mathematical worksIn recent decades, several proofs of the existence of the Cauchy stress tensor, or of general Cauchy fluxes, have been presented in the language of geometric and advanced mathematical analysis. Examples include the use of variational methods <cit.>, general Cauchy fluxes under weaker conditions <cit.>, representations by measures <cit.>, contact interactions treated as maps on pairs of subbodies, capable of handling singularities due to shocks and fracture <cit.>, and contact actions in N-th gradient generalized continua <cit.>. Each of these articles illuminates some aspect of contact interactions in continuum physics. A detailed consideration of these works, however, is outside the scope of the present article, which reviews the proofs of the existence of the stress tensor and their challenges in continuum mechanics and related subjects. § THE WORK OF HAMEL, ITS IMPROVEMENTS AND CHALLENGESLet us see what Hamel presents in the famous book “Theoretische Mechanik, pp. 513-514” (1949, <cit.>). We quote this proof completely (translated here from the German):Then, after division by dV, (I) is to take the more precise form ϱω=∑χ + lim_Δ V → 01/Δ V∮σ_ndF (I_A)and this limit is to exist. The integral extends over the surface of the small volume around the point under consideration, toward which Δ V converges.From the existence of the limit follow the theorems: 1)σ_n=σ_x cos (n,x)+σ_y cos (n,y)+σ_z cos (n,z)Here σ_x etc. denote the stresses on surface elements whose outer normals are parallel to the x, y, z axes. If one setsσ_x = X_x i +Y_x j+Z_x k, σ_y = X_y i +Y_y j+Z_y k, σ_z = X_z i +Y_z j+Z_z kwith i, j, k the unit vectors along the three coordinate directions, the stress tensorX_x Y_x Z_x X_y Y_y Z_y X_z Y_z Z_z appears here, and one may also write 1) asσ_n = σ n wheren = icos (n,x) + jcos (n,y) + kcos (n,z)denotes the unit vector of the outer normal. (The existence of such a normal at the surface considered is thus in general presupposed.)1a) In particular, 1) containsσ_-n = -σ_n,i.e., the principle of action and reaction for the internal stresses, which is therefore provable here.2) Carrying out the limit process in (I_A) yields ϱω=∑χ + ∂σ_x/∂ x + ∂σ_y/∂ y + ∂σ_z/∂ zHamel's proof rests on the existence of the limit in the linear-momentum balance (I_A); this is the strongest part of the proof and the main improvement of his work. What distinguishes this step from earlier, similar works is that those works divided the equation by Δ S, which leads only to the trivial solution: as indicated previously, the two parts of the equation (the surface and volume integrals) have the same order of magnitude, l^3, and after dividing by Δ S they still share the same order of magnitude, l, so both go to zero at the same rate as the element shrinks, and this is a trivial result. Hamel instead divided the equation by Δ V, which leads to the meaningful requirement that the limit in equation (I_A) exist.
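This order-of-magnitude argument can be illustrated with a small numerical experiment (our own sketch, with an arbitrary smooth stress field standing in for the tractions; it is not taken from Hamel's book). The closed-surface integral of σ·n over a cube of side l vanishes when divided by the area l^2, but divided by the volume l^3 it tends to a finite, generally non-zero limit, the divergence of the stress field, which balances the volume terms:

```python
import numpy as np

# A hypothetical smooth stress field; its divergence, playing the role of the
# bounded volume term rho*(a - f), equals (1, 2, 1) everywhere for this choice.
def sigma(x, y, z):
    return np.array([[x + y, 0.5*z, 0.0],
                     [0.5*z, 2.0*y, 0.1*x],
                     [0.0,   0.1*x, z - x]])

def traction_integral(l, m=64):
    """Midpoint-rule quadrature of the closed-surface integral of sigma . n
    over the cube [-l/2, l/2]^3."""
    h = l/m
    c = (np.arange(m) + 0.5)*h - l/2
    total = np.zeros(3)
    for ax in range(3):
        for sgn in (1.0, -1.0):
            n = np.zeros(3); n[ax] = sgn
            for u in c:
                for v in c:
                    p = np.zeros(3)
                    p[ax] = sgn*l/2
                    p[(ax + 1) % 3] = u
                    p[(ax + 2) % 3] = v
                    total += (sigma(*p) @ n)*h*h
    return total

for l in (1.0, 0.5, 0.25):
    F = traction_integral(l)
    print(l, np.linalg.norm(F)/l**2, F/l**3)
# norm(F)/l^2 -> 0 (the "trivial" division by the area), while
# F/l^3 -> div(sigma) = (1, 2, 1): dividing by the volume, as Hamel does,
# leaves a finite, generally non-zero limit.
```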
Therefore, Hamel's proof removes some of the important challenges.However, it is still restricted to Δ V → 0 and makes no statement about a mass element of arbitrary volume in a continuum, whereas one must prove that the existence of the stress tensor does not depend on the volume of the mass element considered; challenge 2 therefore remains. In addition, no process is given to show how the equation σ_n=σ_x cos (n,x)+σ_y cos (n,y)+σ_z cos (n,z) follows from the existence of the limit in equation (I_A), and this is an important step in establishing the existence of the stress tensor.§ THE WORK OF BACKUS, ITS IMPROVEMENTS AND CHALLENGESNow let us examine Backus's proof from the book “Continuum Mechanics” (1997, <cit.>). Unfortunately, Backus's work seems so far to have attracted little attention in the continuum-mechanics literature, although his proof removes most of the challenges. First, we recall some notation from the book. On page 163:…(P, A_P) is oriented real physical space. …The open set in P occupied by the particles at time t will be written K(t), and the open subset of K(t) consisting of the particles …will be written K'(t).On pages 171-172:…S⃗(r⃗,t,n̂_P)=S⃗_force(r⃗,t,n̂_P)+S⃗_mfp(r⃗,t,n̂_P) is called the stress on the surface (S,n̂_P). The total force exerted by the material just in front of dA_P(r⃗) on the material just behind dA_P(r⃗) is dℱ⃗_S (r⃗)= dA_P(r⃗)S⃗(r⃗,t,n̂_P).(13.2.7) This is called the surface force on dA_P(r⃗).……Combining the physical law (13.2.1) with the mathematical expressions (13.2.3) and (13.2.9) gives ∫_K' dV_P(r⃗)(ρa⃗-f⃗)^E (r⃗,t)= ∫_∂ K' dA_P(r⃗)S⃗(r⃗,t,n̂_P(r⃗)).(13.2.10) where K'=K'(t) and n̂_P(r⃗) is the unit outward normal to ∂ K' at r⃗∈∂ K'. In the following paragraphs Backus discusses some of the challenges; these passages already display some of the improvements of his work. On pages 172-173: To convert (13.2.10) to a local equation, valid for all r⃗∈ K(t) at all times t, (i.e., to “remove the integral signs”) we would like to invoke the vanishing integral theorem, …The surface integral in (13.2.10) prevents this. Even worse, (13.2.10) makes our model look mathematically self-contradictory, or internally inconsistent. Suppose that K' shrinks to a point while preserving its shape. Let λ be a typical linear dimension of K'. Then the left side of (13.2.10) seems to go to zero like λ^3, while the right side goes to zero like λ^2. How can they be equal for all λ >0? Cauchy resolved the apparent contradiction in 1827. He argued that the right side of (13.2.10) can be expanded in a power series in λ, and the validity of (13.2.10) for all λ shows that the first term in this power series, the λ^2 term, must vanish. In modern language, Cauchy showed that this can happen iff at every instant t, at every r⃗∈ K(t), there is a unique tensor S^E(r⃗,t) …such that for each unit vector n̂ …, S⃗(r⃗,t,n̂)=n̂.S^E(r⃗,t).(13.2.11)…The physical quantity S is also called the Cauchy stress tensor.Cauchy's theorem on the existence of the stress tensor is then stated on page 173: The argument which led Cauchy from (13.2.10) to (13.2.11) is fundamental to continuum mechanics, so we examine it in detail.…Theorem 13.2.28 (Cauchy's Theorem) …Suppose that for any open subset K' of K whoseboundary ∂ K' is piecewise smooth, we have ∫_K' dV_U(r⃗)f⃗(r⃗)= ∫_∂ K' dA_U(r⃗)S⃗(r⃗,n̂_U(r⃗)),(13.2.13)…Then for each r⃗∈ K there is a unique S(r⃗) …such that for all n̂ …, S⃗(r⃗,n̂)=n̂.S(r⃗).(13.2.14)Backus uses two lemmas to prove the “Cauchy's Theorem”.
The first lemma on pages 174-176:Two lemmas are required. [The first lemma:]Lemma 13.2.29 Suppose f⃗ and S⃗ satisfy the hypotheses of theorem 13.2.28. Let r⃗_0 be any point in K and let K' be any open bounded (i.e., there is a real M such that r⃗∈ K' ⇒ |r⃗| ≤ M) subset of U, with piecewise smooth boundary ∂ K'. We don't need K' ⊆ K. Then ∫_∂ K' dA_U(r⃗)S⃗(r⃗_0,n̂_U(r⃗))=0⃗_V,(13.2.15)if n̂_U(r⃗) is the unit outward normal to ∂ K' at r⃗∈∂ K' and dA_U(r⃗) is the element of area on ∂ K'.Proof of Lemma 13.2.29: For any real λ in 0 < λ < 1, define r⃗_λ : U → U by r⃗_λ (r⃗)= r⃗_0 + λ (r⃗-r⃗_0) for all r⃗∈ U. Since |r⃗_λ(r⃗) - r⃗_0| = λ|r⃗-r⃗_0|, r⃗_λ shrinks U uniformly toward r⃗_0 by the factor λ. The diagram above is for λ≈ 1/2. [The diagram is not reproduced here.] Define K'_λ = r⃗_λ(K') so ∂ K'_λ = r⃗_λ(∂ K'). Choose r⃗∈∂ K' and let r⃗_λ= r⃗_λ(r⃗). Let dA(r⃗) denote a small nearly plane patch of surface in ∂ K', with r⃗∈ dA(r⃗), and use dA(r⃗) both as the name of this set and as the numerical value of its area. Let the set dA_λ(r⃗_λ) be defined as r⃗_λ (dA(r⃗)), and denote its area also by dA_λ(r⃗_λ). Then by geometric similarity dA_λ(r⃗_λ) = λ^2 dA(r⃗).(13.2.16)Let n̂(r⃗) be the unit outward normal to ∂ K' at r⃗, and let n̂_λ(r⃗_λ) be the unit outward normal to ∂ K'_λ at r⃗_λ. By similarity, n̂(r⃗) and n̂_λ(r⃗_λ) point in the same direction. Being unit vectors, they are equal: n̂_λ(r⃗_λ) = n̂(r⃗).(13.2.17) Since r⃗_0 is fixed, it follows that∫_∂ K'_λ dA_λ(r⃗_λ)S⃗(r⃗_0,n̂_λ(r⃗_λ)) = λ^2 ∫_∂ K' dA(r⃗)S⃗(r⃗_0,n̂(r⃗)) ,(13.2.18) If λ is small enough, K' ⊆ K. Then, by hypothesis, we have (13.2.13) with K' and∂ K' replaced by K'_λ and ∂ K'_λ. Therefore[In the first integral on the right hand side, ∂ is missed in the original book.] ∫_K'_λ dV(r⃗)f⃗(r⃗)= ∫_∂ K'_λ dA_λ(r⃗_λ) {S⃗(r⃗_λ,n̂_λ(r⃗_λ)) - S⃗(r⃗_0,n̂_λ(r⃗_λ)) }+ ∫_∂ K'_λ dA_λ(r⃗_λ) S⃗(r⃗_0,n̂_λ(r⃗_λ)). From (13.2.18) it follows that ∫_∂ K' dA(r⃗)S⃗(r⃗_0,n̂(r⃗))= 1/λ^2∫_K'_λ dV(r⃗)f⃗(r⃗) + 1/λ^2∫_∂ K'_λ dA_λ(r⃗_λ) {S⃗(r⃗_λ,n̂_λ(r⃗_λ)) - S⃗(r⃗_0,n̂_λ(r⃗_λ)) }.(13.2.19)Let m_S⃗(λ)= maximum value of |S⃗(r⃗_0, n̂)-S⃗(r⃗, n̂)| for all r⃗∈∂ K'_λ and all n̂∈ N_U.Let m_f⃗(λ)= maximum value of |f⃗(r⃗)| for all r⃗∈ K'_λ. Let |∂ K'_λ|= area of ∂ K'_λ, |∂ K'|= area of ∂ K'. Let |K'_λ|= volume of K'_λ, |K'|= volume of K'.Then |∂ K'_λ|= λ^2 |∂ K'| and |K'_λ|= λ^3 |K'|, so (10.2.3) and (13.2.19) imply |∫_∂ K' dA(r⃗)S⃗(r⃗_0,n̂(r⃗))| ≤λ |K'|m_f⃗(λ) + |∂ K'|m_S⃗(λ).(13.2.20)As λ→ 0, m_f⃗(λ) remains bounded (in fact →f(r⃗_0)) and m_S⃗(λ)→ 0 because S⃗: K× N_U → V is continuous. Therefore, as λ→ 0, the right side of (13.2.20) → 0. Inequality (13.2.20) is true for all sufficiently small λ > 0, and the left side is non-negative and independent of λ. Therefore the left side must be 0. This proves (13.2.15) and hence proves lemma 13.2.29.So the result of this lemma is the fundamental equation (13.2.15) for the traction vectors at the given point r⃗_0: ∫_∂ K' dA_U(r⃗)S⃗(r⃗_0,n̂_U(r⃗))=0⃗_VCompared with the similar equations presented in the previous sections, lemma 13.2.29 and its proof represent clear improvements by Backus, because: * this equation is obtained by an exact process, not an approximate one;* it is exactly valid not only for an infinitesimal volume, where the volume of K' tends to zero, but for any volume of K' in a continuum;* in this integral equation the position vector is fixed at the point r⃗_0, so that at a given time the stress vector changes only through the unit normal vector on the surface of the mass element.
This is the key feature that makes the equation exactly valid for a mass element of any volume in a continuum. In the earlier proofs the stress vector changes, at a given time, through both the position vector r⃗ and the unit normal vector on the surface of the mass element, and this leads to proofs that are only approximate and only valid for mass elements of infinitesimal volume.Backus then uses a second lemma to prove the existence of the stress tensor on the basis of equation (13.2.15). On pages 176-180: We also need [The second lemma:]Lemma 13.2.30 Suppose S⃗: N_U → V. Suppose that for any open set K' with piecewise smooth boundary ∂ K', S⃗ satisfies ∫_∂ K' dA(r⃗)S⃗(n̂(r⃗))=0⃗_V(13.2.21) where n̂(r⃗) is the unit outward normal to ∂ K' as r⃗∈∂ K'. Suppose F : U → V is defined as follows: F(0⃗_U)=0⃗_V and if u⃗≠0⃗_U, F(u⃗)=|u⃗|S⃗(u⃗/|u⃗|).(13.2.22) Then F is linear. Proof of Lemma 13.2.30: a) F(-u⃗) = -F(u⃗) for all u⃗∈ U. To prove this, it suffices to proveS⃗(-n̂)=-S⃗(n̂) for all n̂∈ N_U.(13.2.23) Let K' be the flat rectangular box shown at upper right. [figure omitted] For this box, (13.2.21) gives L^2S⃗(n̂)+L^2S⃗(-n̂)+ ϵ L (S⃗(n̂_1)+S⃗(-n̂_1)+S⃗(n̂_2)+S⃗(-n̂_2))=0⃗_V. Hold L fixed and let ϵ→ 0. Then divide by L^2 and (13.2.23) is the result.b) If c ∈ℛ and u⃗∈ U, F(cu⃗) = cF(u⃗).i) If c = 0 or u⃗ = 0⃗_U, this is obvious from F(0⃗_U) = 0⃗_V.ii) If c > 0 and u⃗≠0⃗_U, F(cu⃗)=|cu⃗|S⃗(cu⃗/|cu⃗|) = c|u⃗|S⃗(u⃗/|u⃗|) = cF(u⃗). iii) If c < 0, F(cu⃗)=-F(-cu⃗) by a) above. But -c > 0 so F(-cu⃗)=-cF(u⃗) by ii). Then F(cu⃗)=-(-c)F(u⃗)=cF(u⃗). c) F(u⃗_1+u⃗_2)=F(u⃗_1)+F(u⃗_2) for all u⃗_1, u⃗_2 ∈ U.i) If u⃗_1 = 0⃗_U, F(u⃗_1+u⃗_2)= F(u⃗_2)= 0⃗_V+F(u⃗_2)= F(u⃗_1)+F(u⃗_2). ii) If u⃗_1 ≠0⃗_U and u⃗_2=cu⃗_1 then F(u⃗_1+u⃗_2)=F((1+c)u⃗_1)=(1+c)F(u⃗_1)=F(u⃗_1)+cF(u⃗_1) =F(u⃗_1)+F(cu⃗_1)=F(u⃗_1)+F(u⃗_2). iii) If {u⃗_1,u⃗_2} is linearly independent, let u⃗_3=-u⃗_1-u⃗_2. We want to prove F(-u⃗_3)=F(u⃗_1)+F(u⃗_2), or -F(u⃗_3)=F(u⃗_1)+F(u⃗_2), orF(u⃗_1)+F(u⃗_2)+F(u⃗_3)=0⃗_V.(13.2.24)[figure omitted] To prove (13.2.24) note that since u⃗_1, u⃗_2 are linearly independent, we can define the unit vector ν̂=(u⃗_1 ×u⃗_2)/|u⃗_1 ×u⃗_2|.We place the plane of this paper so that it contains u⃗_1 and u⃗_2, and ν̂ points out of the paper. The vectors u⃗_1, u⃗_2, u⃗_3 form the three sides of a nondegenerate triangle in the plane of the paper. ν̂×u⃗_i is obtained by rotating u⃗_i 90^∘ counterclockwise. If we rotate the triangle with sidesu⃗_1, u⃗_2, u⃗_3 90^∘ counterclockwise, we obtain a triangle with sides ν̂×u⃗_1, ν̂×u⃗_2, ν̂×u⃗_3. The length of side ν̂×u⃗_i is |ν̂×u⃗_i|=|u⃗_i|, and u⃗_i is perpendicular to that side and points out of the triangle. Let K' be the right cylinder whose base is the triangle with sides ν̂×u⃗_i, and whose generators perpendicular to the base have length L. The base and top of the cylinder have area A = |u⃗_1 ×u⃗_2|/2 and their unit outward normals are ν⃗ and -ν⃗. The three rectangular faces of K' have areas L|u⃗_i| and unit outward normals u⃗_i/|u⃗_i|. Applying (13.2.21) to this K' givesAS⃗(ν̂)+ AS⃗(-ν̂)+ ∑_i=1^3 L|u⃗_i|S⃗(u⃗_i/|u⃗_i|)=0⃗_V. But S⃗(ν̂)=-S⃗(-ν̂) so dividing by L and using (13.2.22) gives (13.2.23) [this should read (13.2.24)]. Corollary 13.2.44 (to Lemma 13.2.30.) Under the hypotheses of lemma 13.2.30, there is a unique S …such that for all n̂∈ N_U S⃗(n̂)=n̂.S.(13.2.25)So Backus proves Cauchy's theorem by means of these two lemmas.
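The content of lemma 13.2.29, namely that the traction integral vanishes exactly when the position argument is frozen at r⃗_0, for a region of any size, can be made concrete with a short sketch (ours, not from Backus's book; σ_0 is an arbitrary matrix standing for the frozen-point stress). For S⃗(r⃗_0,n̂)=σ_0n̂ the integral reduces to σ_0 applied to the sum of area-weighted outward normals, which vanishes for every closed surface, large or small:

```python
import numpy as np

# Stress evaluated at the fixed point r0 only: S(r0, n) = sigma0 @ n.
sigma0 = np.array([[1.3,  0.2, -0.4],
                   [0.2,  0.8,  0.5],
                   [-0.4, 0.5,  2.1]])

def frozen_traction_integral(faces):
    """Exact surface integral of sigma0 @ n over a closed polyhedral surface,
    given as (area, outward_normal) pairs: equals sigma0 @ sum(A_i * n_i)."""
    total_An = sum(A*np.asarray(n, dtype=float) for A, n in faces)
    return sigma0 @ total_An

# A cube of side L (any L, no small-volume limit needed) ...
L = 7.3
cube = [(L*L, [1, 0, 0]), (L*L, [-1, 0, 0]),
        (L*L, [0, 1, 0]), (L*L, [0, -1, 0]),
        (L*L, [0, 0, 1]), (L*L, [0, 0, -1])]

# ... and the tetrahedron of the classical argument (intercepts a, b, c).
a, b, c = 1.0, 2.0, 1.5
w = np.cross([-a, b, 0.0], [-a, 0.0, c])          # inclined-face area vector (x2)
tet = [(0.5*np.linalg.norm(w), w/np.linalg.norm(w)),
       (0.5*b*c, [-1, 0, 0]), (0.5*a*c, [0, -1, 0]), (0.5*a*b, [0, 0, -1])]

print(frozen_traction_integral(cube))   # ~ [0. 0. 0.]
print(frozen_traction_integral(tet))    # ~ [0. 0. 0.]
```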
The first lemma, 13.2.29, leads to the fundamental integral equation (13.2.15) for the traction vectors at the exact point r⃗_0, which constitutes an important improvement over other works. In the second lemma, 13.2.30, he proves the existence of the stress tensor on the basis of equation (13.2.15).The second lemma contains a process similar to that of Noll's proof in <cit.> for establishing the linear-transformation properties of the traction vectors, which we discussed in the previous sections. Here, however, the proof rests on a different foundation: whereas Noll's proof was based on an infinitesimal volume, with the element's lengths approaching zero <cit.>, Backus applies Noll's relation (13.2.22) to the integral equation (13.2.15), which is exactly valid for any volume of the mass element. All the challenges relevant to this step are therefore removed in Backus's proof. One remaining challenge concerns part (a) of the proof of lemma 13.2.30, where, in the equation:L^2S⃗(n̂)+L^2S⃗(-n̂)+ ϵ L (S⃗(n̂_1)+S⃗(-n̂_1)+S⃗(n̂_2)+S⃗(-n̂_2))=0⃗_V.the instruction “Hold L fixed and let ϵ→ 0. Then divide by L^2 and (13.2.23) is the result” may be read as implying that the result is valid only for a thin flat rectangular box (i.e., an infinitesimal volume). If, however, we replace this instruction by: “Hold L fixed and let ϵ vary; ϵ need not be small. Since the first two terms are independent of ϵ, we must have S⃗(n̂)+S⃗(-n̂)=0⃗_V, and (13.2.23) follows”, then the important equation S⃗(-n̂)=-S⃗(n̂) becomes manifestly independent of the volume of the mass element, and the challenges related to the infinitesimal volume are removed. In parts (b) and (c), Backus then proves exactly the defining properties of a linear transformation on a vector space for S⃗(n̂). Since a linear transformation on a vector space can be represented by a second-order tensor, Backus's proof of the existence of the stress tensor is complete. Finally, to derive the differential equation of conservation of linear momentum, Backus uses the divergence theorem.§ CONCLUSIONIn this article we have studied the tetrahedron arguments and the proofs of the existence of the stress tensor found in the literature. First, we traced the origin of the tetrahedron argument and of the existence of the stress tensor, and their place in the foundations of continuum mechanics. By presenting the formal tetrahedron argument in detail, as it appears in many books, we extracted some fundamental challenges and discussed their importance. These conceptual challenges concern the result of applying the conservation of linear momentum to an arbitrary mass element in a continuum, the order of magnitude of the surface and volume terms in the integral equation of conservation of linear momentum, the definition of the traction vectors on surfaces that pass through the same point, the limiting and approximate processes in the derivation of the stress tensor, and several related points. Then, in a comprehensive review of a large number of relevant books spanning nearly two centuries, from 1823 to the present, we presented the different versions of the tetrahedron argument and of the proofs of the existence of the stress tensor, discussing the challenges and improvements in each. These proofs can be classified into two general approaches.
In the first approach, which is followed in most texts, the traction vectors are not defined on surfaces passing through the same point; rather, in a limiting and approximate process in which the volume of the mass element goes to zero, the traction vectors on the surfaces of the mass element are regarded as the traction vectors on surfaces passing through the same point. In the second approach, which is followed in only a few books, the traction vectors are defined exactly at the same point on the different surfaces passing through that point; a limiting and approximate process in which the volume of the mass element goes to zero then yields a linear relation that leads to the existence of the stress tensor. This approach removes some of the challenges. We also presented and discussed the improved treatments of Hamel and Backus. Most of the challenges concerning the existence of the stress tensor are resolved in the original but little-known work of Backus; we presented the main parts of his proof and examined its improvements and remaining challenges.
Dimensional crossover of Bose-Einstein condensation phenomena in quantum gases confined within slab geometries

Francesco Delfino and Ettore Vicari

Dipartimento di Fisica dell'Università di Pisa and INFN, Largo Pontecorvo 3, I-56127 Pisa, Italy

December 30, 2023

We investigate systems of interacting bosonic particles confined within slab-like boxes of size L^2× Z with Z≪ L, at their three-dimensional (3D) BEC transition temperature T_c, and below T_c where they experience a quasi-2D Berezinskii-Kosterlitz-Thouless transition (at T_ BKT<T_c depending on the thickness Z). The low-temperature phase below T_ BKT shows quasi-long-range order: the planar correlations decay algebraically as predicted by the 2D spin-wave theory. This dimensional crossover, from a 3D behavior for T≳ T_c to a quasi-2D critical behavior for T≲ T_ BKT, can be described by a transverse finite-size scaling limit in slab geometries. We also extend the discussion to the off-equilibrium behavior arising from slow time variations of the temperature across the BEC transition. Numerical evidence of the 3D→2D dimensional crossover is presented for the Bose-Hubbard model defined in anisotropic L^2× Z lattices with Z≪ L.

PACS: 05.70.Jk, 67.25.dj, 67.85.-d, 67.85.Hj

§ INTRODUCTION

The Bose-Einstein condensation (BEC) characterizes the low-temperature behavior of three-dimensional (3D) bosonic gases, below a finite-temperature BEC phase transition separating the high-temperature normal phase and the low-temperature superfluid BEC phase. The phase coherence properties of the BEC phase have been observed by several experiments, see e.g. Refs. <cit.>. Several theoretical and experimental studies have also investigated the critical properties at the BEC transition, when the condensate begins forming, see, e.g., Refs. <cit.>. Both the phase-coherence properties of the BEC phase and the critical behavior at the BEC transition turn out to be particularly sensitive to the inhomogeneous conditions arising from spatially-dependent confining potentials, and/or the geometry of the atomic-gas system.Inhomogeneous conditions due to space-dependent trapping potentials give rise to a universal distortion of the homogeneous critical behavior, which can be cast in terms of a universal trap-size scaling <cit.> controlled by the same universality class of the 3D BEC transition. In the case of homogeneous traps, such as those experimentally realized in Refs. <cit.>, the geometry of the trap may lead to quite different phase-coherence properties, when passing from 3D, to quasi-2D, or quasi-1D systems. For example, atomic gases in elongated homogeneous boxes <cit.> and harmonic traps <cit.> show a dimensional crossover from a high-temperature 3D behavior to a low-temperature quasi-1D behavior.In this paper we consider bosonic particle systems confined within slab geometries, i.e. within boxes of size L^2× Z with Z≪ L.We investigate their behavior at the BEC transition temperature T_c (this is the critical temperature of the 3D system in the thermodynamic limit, i.e. when all system sizes tend to infinity) and at lower temperatures. Their low-temperature behavior (T<T_c) is further characterized by the possibility of undergoing a finite-temperature transition to a quasi-long range order (QLRO) phase, with long-range planar correlations which decay algebraically.
This is the well-known Berezinskii-Kosterlitz-Thouless (BKT) transition <cit.>, which occurs in 2D statistical systems with a global U(1) symmetry.Experimental evidence of BKT transitions has also been reported for quasi-2D trapped atomic gases <cit.>.The behavior of homogeneous gases in slab geometries can be described in terms of a dimensional crossover, from 3D behaviors for T≳ T_c to a quasi-2D critical behavior for T≲ T_ BKT.In the limit of large thickness Z, the quasi-2D BKT transition temperature approaches that of the 3D BEC transition, i.e. T_ BKT→ T_c for Z→∞ (assuming the thermodynamic limit for the planar directions, i.e. L≫ Z).The interplay of the BEC and BKT critical modes gives rise to a quite complex behavior. We show that this can be described by a transverse finite-size scaling (TFSS) limit for systems in slab geometries <cit.>, i.e., Z→∞ and T→ T_c keeping the product (T-T_c)Z^1/ν fixed, where ν is the correlation-length exponent at the 3D BEC transition.In this TFSS limit the BKT transition below T_c appears as an essential singularity of the TFSS functions.The dimensional-crossover scenario is expected to apply to any quantum gas of interacting bosonic particles confined in boxes or lattice structures with slab geometries.Analogous arguments apply to ^4He systems in film geometries <cit.>, and to 3D XY spin models defined in lattices with slab geometries <cit.>.We also extend the discussion to the off-equilibrium behavior arising from slow time variations of the temperature across the BEC transition.The behavior of weakly interacting atomic gases confined in quasi-2D geometries has been experimentally investigated under time-dependent protocols across the BEC regime, see, e.g., Refs. <cit.>, to verify the Kibble-Zurek mechanism of defect production <cit.>.In gases confined within slab geometries, the off-equilibrium behavior arising from the slow variation of the temperature across the BEC transition point is made particularly complex by the presence of the quasi-2D BKT transition at T_ BKT≲ T_c.Thus, disentangling the behaviors corresponding to 3D BEC and quasi-2D BKT transitions may be quite hard in experimental or numerical analyses.To describe this complex behavior, we put forward the emergence of an off-equilibrium transverse finite-size scaling for bosonic gases confined within slab-like homogeneous traps.We provide evidence of the dimensional-crossover scenario in quantum gases by a numerical study of the Bose-Hubbard (BH) model <cit.>, which models gases of bosonic atoms in optical lattices <cit.>.We show that the predictions of the 3D→2D dimensional crossover are realized when considering anisotropic slab-like lattices L^2× Z with Z≪ L.With decreasing T from the high-temperature normal phase, we first meet a quasi-BEC transition where the critical length scale ξ gets large, but it does not diverge, being limited to ξ∼ Z (keeping Z fixed). Then we observe a BKT transition to a QLRO phase, where the system develops planar critical correlations essentially described by a Gaussian spin-wave theory.The dimensional crossover explains the apparently complex behavior of the one-particle correlation functions and the corresponding length scale, when decreasing the temperature from T>T_c, where T_c is the 3D BEC transition temperature, to T < T_ BKT<T_c, where T_ BKT depends on the thickness Z. The results are also consistent with the scaling predictions of the TFSS theory for systems in slab geometries.The paper is organized as follows.In Sec.
<ref> we introduce the BH model that we use as a paradigmatic model of Bose gases showing the phenomenon of dimensional crossover in slab geometries.In Sec. <ref> we present the general theory of the dimensional crossover in slab geometries.In Sec. <ref> we discuss the new features arising from the presence of a harmonic trap along the shorter transverse direction. Sec. <ref> reports some exact spin-wave results for the phase-coherence correlations within the low-temperature phase of quasi-2D systems with U(1) symmetry. In Sec. <ref> we discuss the off-equilibrium behavior arising from slow time variations of the temperature across the BEC transition.In Sec. <ref> we provide numerical evidence of the dimensional crossover in 3D BH models defined on lattices with slab geometries.Finally, we summarize our results in Sec. <ref>. § THE BOSE-HUBBARD MODEL IN SLAB GEOMETRIESLattice BH models <cit.> are interesting examples of interacting Bose gases undergoing BEC transitions. They provide realistic models of gases of bosonic atoms in optical lattices <cit.>.In the following discussions we use the BH model as a paradigmatic model of Bose gases showing the dimensional crossover in slab geometries.The Hamiltonian of BH models readsH_ BH =- t ∑_⟨ ij⟩ (b_i^† b_j+ b_j^† b_i) + (U/2)∑_i n_i(n_i-1) - μ∑_i n_i ,where b_i is a bosonic operator, n_i≡ b_i^† b_i is the particle density operator, the sums run over the bonds ⟨ ij ⟩ and the sites i of a cubic L_1× L_2× L_3 lattice, and a=1 is the lattice spacing. The phase coherence properties can be inferred from the one-particle correlation functionG( r_1, r_2) ≡ Tr [ b_ r_1^† b_ r_2 e^-H_ BH/T ] / Tr [ e^-H_ BH/T ].We set the hopping parameter t=1, so that all energies are expressed in units of t, and the Planck constant ℏ=1.The phase diagram of 3D BH models and their critical behaviors have been much investigated, see e.g. Refs. <cit.>.Their T-μ phase diagram presents a finite-temperature BEC transition line. This is characterized by the accumulation of a macroscopic number of atoms in a single quantum state, which gives rise to a phase-coherent condensate.See for example Fig. <ref>, which shows a sketch of the phase diagram of 3D BH models in the hard-core U→∞ limit, where the site occupation number is restricted to n=0, 1.The condensate wave function provides the complex order parameter of the BEC transition, whose critical behavior belongs to the U(1)-symmetric XY universality class. This implies that the length scale ξ of the critical modes diverges at T_c as <cit.>ξ∼ (T-T_c)^-ν,ν=0.6717(1).This has been accurately verified by numerical studies, see, e.g., Refs. <cit.>.The BEC phase extends below the BEC transition line. In particular, in the hard-core limit U→∞ and for μ=0 (corresponding to half filling), the BEC transition occurs at <cit.> T_c=2.01599(5).We consider BH lattice gases in anisotropic slab-like geometries, i.e. L^2× Z lattices with Z ≪ L.We consider open boundary conditions (OBC) along the transverse Z-direction; we label the corresponding coordinate as -(Z-1)/2≤ z ≤ (Z-1)/2, so that the innermost plane is the z=0 plane.This choice is motivated by the fact that OBC correspond to gas systems trapped by hard walls, such as the experimental systems of Refs.
<cit.>.Since the thickness Z of the slab is generally considered as much smaller than the size L of the planar directions, and in most cases we consider the 2D thermodynamic L→∞ limit keeping Z fixed, the boundary conditions along the planar directions are generally irrelevant for our study around T_c. However, they become relevant at the BKT transition where the planar correlation length diverges. In the following we consider the most convenient periodic boundary conditions (PBC) along the large planar dimensions; the corresponding site coordinates are x=(x_1,x_2) with x_1,2=1,...,L.We want to understand how the phase diagram and critical behavior change when varying the thickness Z.As we shall argue, BH systems below T_c are expected to develop quasi-2D critical modes, leading to a BKT transition with a diverging planar correlation length, and a low-temperature QLRO phase. To study this phenomenon, and in particular how the Z→∞ limit eventually realizes the 3D critical behavior at T_c, we focus on the behavior of the correlation function (<ref>) along the planar directions.In particular, for simplicity reasons, we study the correlation function between points belonging to the central z=0 plane, i.e.g( x_1- x_2) ≡ G[( x_1,0), ( x_2,0)],where we have taken into account the invariance of the system for translations along the 1̂ and 2̂ directions.In particular, we consider the planar susceptibilityχ = ∑_ x g( x)and the planar second-moment correlation length ξ ξ^2 = 1 4χ∑_ x x^2 g( x).More precisely, since we consider PBC along the planar directions, we use the equivalent definitionξ^2 ≡1/4 sin^2 (π/L)g̃( 0) - g̃( p)/g̃( p), where g̃(p) is the Fourier transform of g( x), and p=(2π/L,0).The helicity modulus Υ is a measure of the response of the system to a phase-twisting field along one of the lattice directions <cit.>. In the case of bosonic systems, it is related to the superfluid density <cit.>. We consider the helicity modulus along the planar directions 1̂ and 2̂, i.e.,Υ_a ≡1/Z. ∂^2 F(ϕ_a)/∂ϕ_a^2|_ϕ_a=0≡T Z Y_a,where F=-Tln Z is the free energy, ϕ_a are twist angles along one of the planar directions. Note that Y_1=Y_2 by symmetry for L^2× Z systems. As we shall see, the quantitiesY≡ Y_a, R_L≡ξ/L,are particularly useful to check the effective spin-wave behavior along the planar directions for T≤ T_ BKT<T_c. § DIMENSIONAL CROSSOVER OF BOSE GASES IN SLAB GEOMETRIES§.§ Phase diagramfor a finite thickness Z The 3D scenario sketched in Fig. <ref> substantially changes if we consider a quasi-2D thermodynamic limit, i.e. L→∞ keeping Z fixed.Indeed the length scale ξ remains finite at the BEC transition point when Z is kept fixed.Of course the full 3D critical behavior must be somehow recovered when Z→∞, for which one expects ξ(Z) ∼ Z. 
More precisely, definingR_Z =lim_L→∞ ξ/Z, standard FSS arguments <cit.> predict that at the 3D critical point T_cR_Z(T_c)= R_Z^* + O(Z^-ω)where R_Z^* is a universal constant and ω=0.785(20) is the scaling-correction exponent associated with the leading irrelevant perturbation at the XY fixed point <cit.>.Note that the universal constant R_Z^* depends on the boundary conditions along the transverse direction (the boundary conditions along the planar directions are irrelevant since we assume L≫ Z and ξ∼ Z).However, we should also take into account that 2D or quasi-2D systems with a global U(1) symmetry may undergo a finite-temperature transition described by the BKT theory <cit.>.The BKT transition separates a high-temperature normal phase and a low-temperature phase characterized by QLRO, where correlations decay algebraically at large distances, without the emergence of a nonvanishing order parameter <cit.>.When approaching the BKT transition point T_ BKT from the high-temperature normal phase, these systems develop an exponentially divergent correlation length:ξ∼exp( c/√(τ)),τ≡ T/T_ BKT-1,where c is a nonuniversal constant. The magnetic susceptibility diverges as χ∼ξ^7/4, corresponding to the critical exponent η=1/4.Consistently with the above picture, 2D BH systems [corresponding to the Hamiltonian (<ref>) with Z=1] undergo a BKT transition. Fig. <ref> shows a sketch of the phase diagram of 2D BH systems in the hard-core U→∞ limit.The finite-temperature BKT transition of BH models has been numerically investigated by several studies, see e.g. Refs. <cit.>.In particular, T_ BKT=0.6877(2) in the hard-core U→∞ limit and for μ=0 <cit.>.Note that the 2D BH systems do not show a real BEC below the critical temperature T_ BKT, but QLRO where the phase-coherence correlations decay algebraically.The phase diagram of quasi-2D systems with finite thickness Z>1 is expected to be analogous to that of 2D BH systems, with a BKT transition at T_ BKT depending on the thickness Z.Analogously to 2D systems, they are expected to show a QLRO phase below T_ BKT, where correlation functions show power-law decays along the planar directions, as described by the 2D spin-wave theory.§.§ Dimensional crossover limitThe above scenario can be interpreted as a dimensional crossover from a 3D behavior when T≳ T_c, and ξ is finite (in particular the anisotropy of the system is not locally relevant when ξ≪ Z), to an effective 2D critical behavior at T≲ T_ BKT(Z) where the planar correlation length ξ diverges.Such a dimensional crossover can be described by an appropriate TFSS limit, defined as δ≡ 1 - T/T_c → 0 and Z→∞, keeping δ Z^1/ν fixed.In this TFSS limit <cit.>R_Z≡ξ/Z ≈ R(X),X= Z^1/νδ ,where R(X) is a universal function (apart from a trivial normalization of the argument X), but depending on the boundary conditions along the Z direction.Scaling corrections are suppressed as Z^-ω, analogously to Eq. (<ref>).In this TFSS framework the BKT transition appears as an essential singularity of the scaling function R(X):R(X) ∼exp(b√(X_ BKT-X))forX→ X_ BKT^-,where X_ BKT is the value of the scaling variable X corresponding to the BKT transition pointδ_ BKT(Z) ≡T_c - T_ BKT(Z)T_c,i.e.,X_ BKT =lim_Z→∞ Z^1/νδ_ BKT(Z) > 0.The constant b in Eq. 
(<ref>) is a nonuniversal constant depending on the normalization of the scaling variable X.R(X) is not defined for X≥ X_ BKT.Note that the above scaling equations predict that <cit.>δ_ BKT(Z) ∼ Z^-1/νin the large-Z limit.The TFSS of the planar two-point function (<ref>) is given byg( x,Z)≈ Z^-(1+η) G( x/Z,X),where η=0.0381(2) is the critical exponent of the 3D XY universality class <cit.>, associated with the power-law decay of the two-point function at T_c.Eq. (<ref>) also implies that the planar susceptibility defined as in Eq. (<ref>) behaves asχ≈ Z^1-η f_χ(X). It is important to note that the above features are shared with any quasi-2D statistical system with a global U(1) symmetry, and in particular standard O(2)-symmetric spin models. Numerical analyses of dimensional crossover issues for the XY model are reported in Refs. <cit.>.§ BOSE GASES CONFINED BY A TRANSVERSE HARMONIC TRAP We now discuss the case of quasi-2D gases trapped by a harmonic potential along the transverse direction, analogously to the experimental setup of Ref. <cit.>. §.§ The BH model in a transverse harmonic trapIn the case of the BH model the presence of a space-dependent trapping potential can be taken into account by adding a further Hamiltonian term to Eq. (<ref>), i.e.H_ hBH = H_ BH + ∑_i V(z_i) n_i, V(z)= |z/ℓ|^p,where z_i is the distance of the site i from the central plane, p>0, and ℓ can be considered as the transverse trap size.The harmonic potential corresponds to p=2.The transverse trapping potential coupled to the particle density turns out to be equivalent to an effective chemical potential depending on the transverse coordinate z,μ_e(μ,z) ≡μ - V(z).Far from the central z=0 plane, the potential V(z) diverges, thus μ_e→ -∞; therefore ⟨ n_i⟩ vanishes and the particles are trapped along the transverse direction.We discuss the behavior of the system in the limit of infinite size of the planar dimensions, along which the system appears as homogeneous. For practical realizations, this regime may be realized by considering hard-wall traps along the planar directions with size L≫ℓ (more precisely L≫ℓ^θ, where the exponent θ<1 is given below).The planar correlation functions, for example along the z=0 plane, are expected to behave similarly to the case of transverse hard-wall traps.With decreasing T from the high-temperature normal phase, the length scale ξ gets large around the BEC transition temperature T_c (i.e. the critical temperature of the BEC transition of the corresponding homogeneous 3D system). But it does not diverge, sinceξ∼ℓ^θ where θ is an appropriate exponent, see below. Then one may observe a BKT transition to a QLRO phase around the z=0 plane, at T_ BKT<T_c depending on ℓ.In particular, in the extreme ℓ→ 0 limit, where all particles are confined within the z=0 plane, we recover the homogeneous 2D BH model, i.e. the model (<ref>) with Z=1.On the other hand, in the opposite ℓ→∞ limit, we again expect that T_ BKT(ℓ)→ T_c^-, analogously to the homogeneous case.Therefore, similarly to the homogeneous case, the system passes from a high-temperature 3D behavior to a quasi-2D critical behavior at low temperature.
This change of regime may also be related to a transverse condensation phenomenon <cit.>.

§.§ Transverse trap-size scaling

As for homogeneous systems with transverse hard-wall boundary conditions, the critical behavior of the 3D system must somehow be recovered in the large-ℓ limit, in a spatial region sufficiently close to the central z=0 plane. We argue that this limit can be described by a universal transverse trap-size scaling (TTSS), similar to the TFSS limit discussed in Sec. <ref>. To derive the TTSS laws for the case at hand, we can exploit the same arguments used to derive the trap-size scaling for isotropic traps <cit.>. The trapping potential (<ref>) coupled to the particle density significantly affects the critical modes, introducing another length scale ℓ. As in general critical phenomena, see, e.g., Ref. <cit.>, the asymptotic scaling behavior of the length scale at T_c is expected to be characterized by a power law:

ξ_t ∼ ℓ^θ.

The exponent θ can be determined by a scaling analysis of the perturbation associated with the external potential coupled to the particle density. Its derivation is identical to that reported in Refs. <cit.> for isotropic traps. The exponent θ turns out to be related to the correlation-length exponent ν of the universality class of the critical behavior of the homogeneous BEC transition, i.e.,

θ = pν/(1 + pν),

where ν=0.6717(1) is the correlation-length exponent of the 3D XY universality class. For harmonic transverse traps, i.e. p=2, θ=0.57327(4). On the basis of these TTSS arguments, we expect that the asymptotic large-ℓ behavior of the two-point function around the central z=0 plane, and in particular the correlation function defined as in Eq. (<ref>), behaves as

g(x, ℓ) ≈ ξ_t^-(1+η) G_p(x/ξ_t, δ ξ_t^1/ν),

where ξ_t ∼ ℓ^θ, δ ≡ 1 - T/T_c, and we have assumed that the planar sizes are infinite. Actually, one may also take into account the planar size L by adding a further scaling variable L/ℓ^θ; the L→∞ scaling behavior (<ref>) is recovered when L/ℓ^θ ≫ 1. The TTSS of the two-point function implies that the planar second-moment correlation length along the z=0 plane, defined as in Eq. (<ref>), behaves asymptotically as

ξ_t ≈ ℓ^θ R_p(X), X ≡ δ ℓ^θ/ν.

In particular, we recover ξ_t ∼ ℓ^θ at T_c. Note that this scaling behavior is analogous to that of hard-wall traps, cf. Eq. (<ref>), with the transverse size Z replaced by ℓ^θ. The leading corrections to the above asymptotic TTSS are O(ℓ^-ωθ). Note that the trap exponent θ reported in Eq. (<ref>) is identical to that of isotropic traps <cit.>, i.e. it does not depend on the number of coordinates entering the space dependence of the inhomogeneous power-law potential coupled to the particle density. However, the scaling functions G_p and R_p, entering Eqs. (<ref>) and (<ref>), must definitely differ. Actually, in the p→∞ limit we must recover the TFSS behavior, i.e. that of homogeneous conditions along the transverse direction with OBC, see Sec. <ref>. Since θ → 1 for p → ∞, in this limit ℓ plays the role of the transverse size Z of the hard-wall case. The TTSS functions must present a singularity related to the BKT transition occurring at T_BKT < T_c, unlike those of the isotropic TSS, because no such transition occurs for isotropic traps. In particular, TTSS implies that

δ_BKT(ℓ) ≡ 1 - T_BKT(ℓ)/T_c ∼ ℓ^-θ/ν,

and the TTSS function R_p of Eq. (<ref>) must show a BKT-like singularity at

X_BKT = lim_ℓ→∞ δ_BKT(ℓ) ℓ^θ/ν,

such as that reported in Eq. (<ref>).
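As a quick numerical illustration of the trap-exponent formula above, the following Python lines (our own sketch, not part of the original analysis) evaluate θ = pν/(1+pν) for the 3D XY value ν = 0.6717 and check the hard-wall limit θ → 1 for p → ∞; for p = 2 one recovers the quoted value θ = 0.57327(4):

    nu = 0.6717                      # correlation-length exponent of the 3D XY class

    def theta(p):
        # trap exponent relating the TTSS length scale to the trap size: xi_t ~ ell^theta
        return p * nu / (1.0 + p * nu)

    print(theta(2))                  # harmonic trap: ~0.57327, as quoted in the text
    for p in (2, 4, 10, 1000):
        print(p, theta(p))           # theta -> 1 as p -> infinity (hard-wall limit)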
§.§ Criticality at the boundary of the BEC region

Other interesting features arise at the boundary of the BEC region in atomic gases confined by a transverse harmonic trap. If the trap is sufficiently large and the temperature is sufficiently low, different phases may coexist in different space regions, when moving from the central z=0 plane of the trap. Indeed, since the effective chemical potential μ_e(z), cf. Eq. (<ref>), decreases with increasing |z|, the BEC region is generally spatially limited. When moving away from the z=0 plane, the quantum gas passes from the BEC phase around the center of the trap (where space coherence is essentially described by spin waves) to a normal phase far from the center. The atomic gas is expected to develop a peculiar critical behavior at the boundary of the BEC region, with a nontrivial scaling behavior controlled by the universality class of the homogeneous BEC transition in the presence of an effective linear external potential coupled to the particle density <cit.>. This occurs around the planes at distance |z| from the z=0 plane such that the temperature T equals the BEC transition temperature at the local chemical potential μ_e(μ,z) = μ - (z/ℓ)^2, i.e. when

T_c[μ_e(μ,z)] ≈ T < T_c(μ).

For example, consider the hard-core BH lattice gas (<ref>) for μ ≤ 0 and T < T_c(μ), see Fig. <ref>. Since T_c(μ) decreases with decreasing μ, a plane exists at distance z=z_b such that T_c[μ_e(μ,z_b)] = T, thus z_b = ℓ√(μ-μ̅), where T_c(μ̅)=T. This plane separates the BEC region from the normal-fluid region. As argued in Ref. <cit.>, in the limit of large ℓ the correlation functions around the surface where T_c[μ_e(r)]=T are expected to develop a peculiar critical behavior in the presence of an external effectively linear potential coupled to the particle density. Around z=z_b,

V(z) = V(z_b) + Δz/ℓ_b + O[(Δz/ℓ_b)^2], ℓ_b = ℓ/[2√(μ-μ̅)].

The critical behavior at the critical planes z=z_b is essentially determined by the linear term

V_b = Δz/ℓ_b, Δz ≡ z - z_b,

where ℓ_b provides the length scale of the spatial variation. Since √(μ-μ̅) > 0 is assumed finite and fixed, ℓ_b ∼ ℓ. Of course, an analogous behavior occurs on the opposite side, i.e. for z = -z_b. The scaling behavior around the critical plane z=z_b can be derived using the same arguments as in Ref. <cit.>, applying them to the particular case of slab geometries, where the harmonic potential is only applied along the transverse direction while the system is translationally invariant along the planar directions. The system develops critical correlations around the planes z=z_b, with a length scale

ξ_b ∼ ℓ_b^θ_b, θ_b = ν/(1 + ν) = 0.40181(3).

For example, the one-particle correlation function along a transverse direction is expected to scale as

G[(x,z_1), (x,z_2)] ≈ ξ_b^-(1+η) G_b(Δz_1/ξ_b, Δz_2/ξ_b).

Of course, such a scaling behavior at the critical planes is anisotropic, distinguishing the planar and transverse directions. However, the length scales along both planar and transverse directions are expected to scale as ℓ_b^θ_b.

§ LOW-TEMPERATURE BEHAVIOR OF QUASI-2D BOSONIC GASES

This section summarizes some exact results which are expected to characterize the low-temperature QLRO phase of quasi-2D interacting bosonic gases up to the BKT transition.
§.§ The QLRO phase below the BKT transition

The general universal features of the QLRO phase of quasi-2D systems with a U(1) symmetry are described by the Gaussian spin-wave theory

H_sw = (β/2) ∫ d^2x (∇φ)^2.

For β ≥ 2/π, corresponding to 0 ≤ η ≤ 1/4, this spin-wave theory describes the QLRO phase. The values β=2/π and η=1/4 correspond to the BKT transition <cit.>. The spin-wave correlation function

G_sw(x_1 - x_2) = ⟨e^-iφ(x_1) e^iφ(x_2)⟩

is expected to provide the asymptotic large-L behavior of the two-point function of 2D interacting bosonic gases within the QLRO phase. For |x_1 - x_2| ≪ L,

G_sw(x_1, x_2) ∼ 1/|x_1 - x_2|^η,

where the exponent η is related to the coupling β by

η = 1/(2πβ).

The general size dependence of G_sw on a square L^2 box with PBC is also known <cit.>:

G_sw(x, L) = C(x, L)^η × E(x, L),

C(x, L) = e^(π y_2^2) θ'_1(0, e^-π) / |θ_1[π(y_1 + iy_2), e^-π]|,

E(x, L) = [∑_n_1,n_2=-∞^∞ W(n_1,n_2) cos[2π(n_1 x_1 + n_2 x_2)]] / [∑_n_1,n_2=-∞^∞ W(n_1,n_2)],

W(n_1,n_2) = exp[-π(n_1^2 + n_2^2)/η],

where x ≡ (x_1,x_2), y_i ≡ x_i/L, and θ_1(u,q) and θ'_1(u,q) are θ functions <cit.>. Using Eq. (<ref>), one can easily compute the universal function R_L(η), where R_L ≡ ξ/L and ξ is the second-moment correlation length defined as

ξ^2 = [L^2/(4π^2)] (χ/χ_1 - 1), χ = ∫ d^2x G_sw(x), χ_1 = ∫ d^2x cos(2π x_1/L) G_sw(x).

Analogous results are obtained for the helicity modulus <cit.>

Y(η) = 1/(2πη) - [∑_n=-∞^∞ n^2 exp(-π n^2/η)] / [η^2 ∑_n=-∞^∞ exp(-π n^2/η)].

The above asymptotic large-L behaviors (at fixed T or η) are approached with power-law corrections; indeed,

R_L(L,η) ≡ ξ/L = R_L(η) + a L^-ε, Y(L,η) = Y(η) + a L^-ζ,

respectively, where ε and ζ are the exponents associated with the expected leading corrections <cit.>:

ε = Min[2-η, κ], ζ = Min[2, κ], κ = 1/η - 4 + O[(1/η - 4)^2].

With increasing T within the QLRO phase, the critical exponent η of the two-point function, cf. Eq. (<ref>), increases up to η=1/4, corresponding to the BKT transition. Therefore, close to the BKT transition, i.e. for T ≲ T_BKT, we may expand the universal curves R_L(η) and Y(η) around η=1/4, obtaining

R_L(η) = 0.7506912222 + 1.699451 (1/4 - η) + …,
Y(η) = 0.6365081782 + 2.551196 (1/4 - η) + …

We expect that the above universal behaviors are also realized in the low-temperature phase of BH models within slab geometries, for T < T_BKT, by the two-point function g(x), cf. Eq. (<ref>), and by the quantities R_L ≡ ξ/L and Y defined in Eq. (<ref>).

§.§ Finite-size behavior at the BKT transition

The BKT transition is characterized by logarithmic corrections to the asymptotic behavior, due to the presence of marginal renormalization-group (RG) perturbations at the BKT fixed point <cit.>. The asymptotic behaviors at the BKT transition for R_L and Y can be obtained by replacing <cit.>

1/4 - η ≈ 1/(8w), w = ln(L/Λ) + (1/2) ln ln(L/Λ),

into Eqs. (<ref>) and (<ref>). The nonuniversal details that characterize the model (such as the thickness Z of the quasi-2D BH models) are encoded in the model-dependent scale Λ. Thus one obtains the asymptotic large-L behavior

R(L, T_BKT) = R^* + C_R w^-1 + O(w^-2)

for both R = Y and R = R_L, with

Y^* = 0.6365081782, C_Y = 0.31889945, R_L^* = 0.7506912222, C_R_L = 0.21243137,

for PBC. In numerical analyses, Eq. (<ref>) may be used to locate the BKT transition point, i.e. by requiring that the finite-size dependence of the data matches it. However, we note that this straightforward approach is subject to systematic errors which get suppressed only logarithmically with increasing L. This makes the accuracy of the numerical or experimental determination of the critical parameters quite problematic.
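The spin-wave expressions above are straightforward to evaluate numerically; the short Python check below (ours, purely illustrative) reproduces the quoted constants from the formula for Y(η): the BKT value Y^*, the slope 2.551196 of the expansion around η=1/4, and hence C_Y = 2.551196/8 ≈ 0.31889945 after the replacement 1/4 - η ≈ 1/(8w):

    import math

    def Y_sw(eta, nmax=100):
        # helicity modulus of the Gaussian spin-wave theory (formula quoted above)
        num = sum(n * n * math.exp(-math.pi * n * n / eta) for n in range(-nmax, nmax + 1))
        den = eta * eta * sum(math.exp(-math.pi * n * n / eta) for n in range(-nmax, nmax + 1))
        return 1.0 / (2.0 * math.pi * eta) - num / den

    print(Y_sw(0.25))                          # ~0.6365081782 = Y* at the BKT point
    h = 1e-6
    print((Y_sw(0.25 - h) - Y_sw(0.25)) / h)   # ~2.551196, slope of Y(eta) in (1/4 - eta)
    print(2.551196 / 8)                        # ~0.31889945 = C_Y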
This accuracy problem can be overcome by the so-called matching method <cit.>, which allows us to control the whole pattern of the logarithmic corrections, leaving only power-law corrections. The matching method exploits the fact that the finite-size behavior of RG-invariant quantities R, such as R_L and Y, of different models at their BKT transition shares the same logarithmic corrections, apart from a nonuniversal normalization of the scale. Indeed, the L-dependence of two models at their BKT transition is related by the asymptotic relation

R^(1)(L_1, T_BKT^(1)) ≈ R^(2)(L_2 = λ L_1, T_BKT^(2)),

apart from power-law corrections, which are O(L^-2) for the helicity modulus Y and O(L^-7/4) for the ratio R_L. The matching parameter λ is the only free parameter, and it does not depend on the particular choice of the RG-invariant quantity. The matching method consists in finding the optimal value of T matching the finite-size behavior of Y and R_L to that of the 2D XY model, whose value of T_BKT is known with high accuracy <cit.>. The complete expressions of R_L and Y of the 2D XY model have been obtained by high-precision numerical studies <cit.> and by extrapolations using RG results for the asymptotic behavior. For example, the L-dependence of the helicity modulus Y at the BKT transition of the 2D XY model is accurately reconstructed by the following expression <cit.>:

Y_XY(L) ≡ Y_XY(T_BKT, L) = 0.6365081782 + 0.318899454 w^-1 + 2.0319176 w^-2 - 40.492461 w^-3 + 325.66533 w^-4 - 874.77113 w^-5 + 8.43794 L^-2 + 79.1227 L^-4 - 210.217 L^-6,

where w is given in Eq. (<ref>) with Λ = Λ_XY = 0.31. The matching method has already been applied <cit.> to the 2D BH models (<ref>), obtaining the accurate estimate T_BKT = 0.6877(2) in the hard-core U→∞ limit and at half filling (μ=0).

§ OFF-EQUILIBRIUM SLOW DYNAMICS AND DIMENSIONAL CROSSOVER

The dynamical behavior of statistical systems driven across phase transitions is a typical off-equilibrium phenomenon. Indeed, the large-scale modes present at the transition are unable to reach equilibrium as the system changes phase, even when the time scale t_s of the variation of the system parameters is very large. Such phenomena are of great interest in many different physical contexts, at both first-order and continuous transitions, where one may observe hysteresis and coarsening phenomena, the Kibble-Zurek (KZ) defect production, etc., see, e.g., Refs. <cit.>. The correlation functions obey general off-equilibrium scaling (OS) laws in the limit of large time scale t_s of the variations across the transition, which are controlled by the universal static and dynamic exponents of the equilibrium transition <cit.>. We now consider the off-equilibrium behavior arising from slow time variations of the temperature T across the BEC transition. We assume a standard linear protocol, varying T so that

δ(t) ≡ 1 - T(t)/T_c = t/t_s,

starting at a time t_i < 0 in the high-T phase and ending at t_f > 0 in the low-T phase; t_s is the time scale of the temperature variation. The BEC transition point corresponds to t=0 (however, this is not strictly required; it is only convenient for our discussion). Several experiments implementing off-equilibrium time-dependent protocols in cold-atom systems have been reported, see, e.g., Refs. <cit.>. Besides the static critical exponent <cit.> ν=0.6717(1) of the 3D XY universality class, we also need information on the critical dynamic behavior at the BEC transition.
This is characterized by the dynamic exponent z = d/2, thus z = 3/2 in 3D, associated with the model-F dynamics <cit.>, which is conjectured to describe the dynamic universality class of the 3D BEC transition. In the standard thermodynamic limit of cubic-like boxes, with L_1 ∼ L_2 ∼ L_3 ∼ L and L→∞, one defines the OS limit as the large-time-scale limit, t_s → ∞, keeping the OS scaling variables

T ≡ t/t_s^κ, x_s ≡ x/t_s^ζ,

fixed. Scaling arguments allow us to determine the appropriate exponents κ and ζ, obtaining <cit.>

κ = zν/(1 + zν), ζ = ν/(1 + zν),

where ν and z are the static correlation-length and dynamic exponents. In particular, by inserting the values of ν and z, we obtain κ = 0.50188(4) and ζ = 0.33459(3). We may apply these OS arguments to the equal-time two-point correlation function, measured after a time t and averaged over the initial Gibbs distribution at a given initial temperature T > T_c. Standard scaling arguments lead to the OS asymptotic behaviors <cit.>

G(x, t, t_s) ≈ t_s^-ζ(1+η) G_o(x_s, T).

Moreover, we expect

ξ(t, t_s) ≈ t_s^ζ R_o(T)

for any length scale associated with the critical modes. Experimental studies of this dynamic behavior, and of the related KZ defect production, led to the estimate <cit.> ζ = 0.35(4), which is in good agreement with the theoretical result (<ref>). We now discuss how this off-equilibrium behavior may change in quantum gases confined within slab geometries with Z ≪ L, and in particular with a finite thickness Z and infinite (L→∞) planar sizes. Analogous experiments with quasi-2D cold-atom systems constrained in slab geometries have been reported in Refs. <cit.> (homogeneous hard-wall traps along the planar directions and a harmonic trap along the transverse direction). They observe the emergence of coherence when cooling the atomic gas through the BEC temperature. The off-equilibrium behavior arising from the slow variation of the temperature across the BEC transition point is made particularly complex by the presence of a close quasi-2D BKT transition. Thus, disentangling the behaviors corresponding to BEC and BKT is quite hard in experimental or numerical analyses. The authors of <cit.> interpreted the observed behavior as a transverse condensation phenomenon <cit.>. In the following we put forward an alternative framework to describe the dimensional crossover in slab geometries, based on an off-equilibrium FSS (OFSS). As already noted, for a finite thickness Z, even though L→∞, the system does not develop a diverging correlation length at the 3D BEC transition temperature T_c, but ξ remains of the order of the transverse size Z. Thus the system can evolve adiabatically, i.e., its evolution can proceed by passing through quasi-equilibrium states, provided the time scale t_s of the variation of T(t) around T_c is sufficiently large. This is possible until it reaches the BKT transition at the time t > 0 corresponding to T_BKT, i.e. when t/t_s = δ_BKT ≡ 1 - T_BKT/T_c. Of course, the OS at the BKT transition is expected to differ substantially from that at the 3D BEC transition, such as in Eqs. (<ref>) and (<ref>), because it must be controlled by the 2D universality class of the BKT transition in quantum gases. At the BKT transition the relevant exponents for KZ off-equilibrium protocols are expected to be ν = ∞ (related to the exponential increase of the correlation length when T → T_BKT^+) and z = 1 (2D model-F dynamics).
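For the 3D BEC transition, the quoted values of κ and ζ follow directly from ν and z; a two-line check (ours, for illustration only, using only the exponents given above):

    nu, z = 0.6717, 1.5              # 3D XY static exponent; model-F dynamic exponent
    print(z * nu / (1 + z * nu))     # kappa ~ 0.50188
    print(nu / (1 + z * nu))         # zeta  ~ 0.33459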
With these BKT values of ν and z, the power laws of the off-equilibrium scaling variables (<ref>) at the BKT transition lead to κ = 1, apart from logarithms. However, things become quite involved when the thickness Z becomes large, because the BKT transition gets very close to the BEC temperature T_c, cf. Eq. (<ref>). Therefore, the analysis of numerical and experimental data may become hard, and straightforward power-law fits may turn out to be misleading. In order to describe the time-dependent dimensional crossover of slab geometries under the protocol (<ref>), we consider an OFSS framework involving the size Z of the transverse direction. The appropriate OFSS limit is defined by introducing the scaling variables

X_o = Z^1/ν δ(t), W_o = Z^-1/ζ t_s.

In the OFSS limit a length scale, such as the planar correlation length defined in Eq. (<ref>), is expected to behave as

ξ(Z, t, t_s) ≈ Z S_o(X_o, W_o),

where S_o is a universal OFSS function. In this OS framework the equilibrium FSS around T_c is recovered in the limit W_o → ∞, i.e.

S_o(X_o, W_o → ∞) = R(X_o),

where R(X_o) is the equilibrium FSS function, cf. Eq. (<ref>). In particular, at t=0, corresponding to T(t) = T_c, we expect to recover the equilibrium result ξ ∼ Z when t_s ≫ Z^1/ζ. Note, however, that the equilibrium limit is not well defined for any X_o, because it diverges when X_o ≥ X_BKT, cf. Eq. (<ref>), corresponding to the BKT transition. Around X_BKT the behavior of the scaling functions must somehow show the off-equilibrium singularities associated with a slow passage through a BKT transition. The above scaling behaviors can be straightforwardly extended to the case of a transverse harmonic trap, using the same TTSS arguments as in Sec. <ref>. Apart from replacing Z with ℓ^θ, the main features of the OS behavior remain the same. We mention that experiments under analogous time-dependent protocols crossing the BEC transition have been performed with atomic gases confined in slab-like traps with a transverse harmonic trapping potential <cit.>. They were able to check the initial 3D behavior, without a clear identification of the subsequent quasi-2D behavior. The computation of the defect production arising from the Kibble-Zurek mechanism is further complicated by later-time coarsening phenomena <cit.>.

§ NUMERICAL RESULTS FOR THE BH MODEL

In order to check the dimensional-crossover scenario discussed in the previous sections, we present a numerical study of the equilibrium properties of the BH model (<ref>) in the hard-core U→∞ limit and at zero chemical potential μ=0, corresponding to half filling, i.e., ⟨n_r⟩ = 1/2 for any T. In the hard-core limit and for μ=0, the 3D BEC transition occurs at T_c = 2.01599(5) and the 2D BKT transition at T_BKT = 0.6877(2). Numerical results are obtained by quantum Monte Carlo (QMC) simulations using the directed operator-loop algorithm <cit.>. We consider slab geometries, i.e. L^2 × Z lattices with Z ≪ L, with OBC along the transverse direction and PBC along the planar directions. We present numerical results for several values of the thickness Z, in particular Z = 5, 9, 13, various planar sizes up to L ≈ 100, and several values of the temperature T ≲ T_c. The maximum thickness Z of our numerical study is limited by the fact that the computational effort of the QMC simulations rapidly increases, because larger thicknesses also require larger planar sizes. We compute the observables defined in Sec.
<ref>. In QMC simulations the helicity modulus is obtained from the linear winding number W_a along the a-th direction, i.e.

Y ≡ Y_a = ⟨W_a^2⟩, W_a = (N_a^+ - N_a^-)/L,

where N_a^+ and N_a^- are the numbers of non-diagonal operators which move the particles in the positive and negative a-th direction, respectively. Figure <ref> shows data for the planar second-moment correlation length ξ defined in Eq. (<ref>), for Z = 5, 9, 13 and T ≲ T_c. We observe that ξ is small for T > T_c, and apparently L- and Z-independent (for sufficiently large L and Z), indicating that it remains finite in the large-L and large-Z limit. Around T_c the data of ξ appear to converge to a finite value with increasing L at fixed Z; however, they show that ξ increases with increasing Z, approximately as ξ ∼ Z. Then, for sufficiently small values of T, the data begin showing a significant dependence on L. At low temperature we observe ξ ∼ L at fixed T, suggesting that ξ diverges with increasing L even when keeping Z fixed. In the following we show that this apparently complicated behavior can be explained by the dimensional-crossover scenario put forward in the previous sections. To begin with, we investigate the nature of the low-temperature behavior where the planar correlation length ξ appears to diverge with increasing L. According to the arguments of the previous sections, at low temperature BH systems of any thickness Z should show a quasi-2D QLRO phase, whose behavior is essentially described by the 2D spin-wave theory, see in particular Sec. <ref>. As discussed in Sec. <ref>, this implies universal relations among the ratio R_L ≡ ξ/L, the quasi-2D helicity modulus Y, and the exponent η characterizing the planar two-point correlation function. In Fig. <ref> we plot data of R_L versus those of Y, comparing them with the universal curve R_L(Y), which can be easily obtained from the spin-wave results reported in Sec. <ref>. This curve ends at the BKT point (Y^*, R_L^*) = (0.6365..., 0.7506...). For sufficiently small T, depending on the value of Z, the data approach the universal spin-wave curve R_L(Y) with increasing L. Extrapolations using the expected power-law corrections, cf. Eqs. (<ref>) and (<ref>), turn out to be consistent with the exact spin-wave results. Therefore, the numerical results nicely support the existence of a QLRO phase for any Z, with the expected universal spin-wave behaviors. We also note that above a given temperature, depending on the thickness Z, the data do not approach the spin-wave curve R_L(Y) anymore, as is expected to occur for T > T_BKT, where both R_L and Y vanish in the large-L limit. Therefore, the data of Fig. <ref> allow us to approximately locate T_BKT between the temperature values of the data closest to the BKT point (Y^*, R_L^*) which respectively approach the spin-wave curve and deviate from it. We note that T_BKT increases with increasing Z. This can also be inferred from the data of the helicity modulus Y versus the temperature, see Fig. <ref>. They are generally decreasing, and for sufficiently large T they appear to cross the value Y = Y^* ≈ 0.6365 corresponding to the BKT transition, indicating that those values of T are larger than T_BKT. More accurate estimates of T_BKT can be obtained by looking for the optimal values of T achieving the matching of the available data of Y and R_L with the finite-size dependence of the 2D XY model at its BKT transition, see Sec. <ref>.
In particular, T_BKT(Z) is given by the value of T providing the optimal matching of the data of Y(Z,L,T) with the finite-size dependence of the helicity modulus of the 2D XY model, i.e.

Y(Z,L,T) = Y_XY[λ(Z) L] + O(L^-2),

with Y_XY given by Eq. (<ref>). Some matching procedures are described in Ref. <cit.>. This numerical analysis largely suppresses the systematic error, because it is not affected by logarithmic corrections, but only by O(L^-2) power-law corrections. For Z=1 the optimal matching led to the estimate T_BKT(Z=1) = 0.6877(2) and λ(Z=1) ≈ 1.5. We determine the optimal values of T and λ(Z) satisfying the scaling relation (<ref>). We skip most details of the numerical matching procedures, which can be found in Ref. <cit.>. We only mention that we use QMC data from L=20 to L=100, for sufficiently close values of T to obtain reliable estimates for any T by interpolation, see Fig. <ref>. Our estimates for the optimal matching parameters are T_BKT(Z=5) = 1.645(2), T_BKT(Z=9) = 1.829(1), and T_BKT(Z=13) = 1.899(1); correspondingly, we obtain λ(Z=5) = 0.4(2), λ(Z=9) = 0.20(5), λ(Z=13) = 0.14(2). The statistical error of the analysis is estimated using bootstrap methods. The errors reported above also take into account the variations of the results when changing the procedure used to obtain the optimal matching, for example when including or not the O(L^-2) scaling corrections, and when varying the minimum size L of the data used in the analysis. The quality of the matching can be inferred from Fig. <ref>, which shows the data at the optimal matching values of T_BKT versus the ratio L/Λ(Z) with Λ(Z) = Λ_XY/λ(Z), so that all data of Y, for any Z, are expected to follow the same curve Y_XY versus L/Λ_XY with Λ_XY = 0.31. This is indeed what we observe, apart from some scaling corrections at the smallest values of L, which are expected to get suppressed as O(L^-2). We consider the results of the matching analysis of the Y data as our best estimates of T_BKT. Note also that the values of λ(Z) are decreasing, as expected, because λ(Z) L is somehow related to the equivalent planar size of the lattice, and for slab geometries one may expect this to be approximately given by the aspect ratio L/Z, thus λ(Z) ∼ 1/Z roughly. An analogous numerical analysis can be done using the data of R_L. However, it turns out to be less accurate due to larger scaling corrections. As also observed in Ref. <cit.>, R_L is subject to sizable power-law scaling corrections, which decrease as L^-7/4. The XY curve of R_L is reported in Ref. <cit.>. Note that, once T_BKT and λ(Z) are determined, there are no other free parameters to optimize the matching. The inset of Fig. <ref> shows the data and their comparison with the XY curve using the values of T_BKT and λ(Z) obtained from the analysis of the data of Y. The data appear to approach the asymptotic curve with increasing L; therefore, they are consistent with the theoretical predictions. However, as already mentioned, the approach to the expected asymptotic behavior is characterized by larger scaling corrections, thus requiring larger lattice sizes to obtain independent estimates of T_BKT as accurate as those obtained using the data of Y. Figure <ref> shows δ_BKT(Z) ≡ 1 - T_BKT(Z)/T_c versus Z^-1/ν, as obtained from the above estimates of T_BKT. The data turn out to be consistent with the expected asymptotic behavior δ_BKT(Z) ∼ Z^-1/ν.
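The trend of the product Z^1/ν δ_BKT(Z) can be checked directly from the estimates just quoted; the following lines (our own back-of-the-envelope sketch, using only numbers given in the text) show it rising slowly with Z, consistent with an approach to a finite limit controlled by Z^-ω corrections:

    nu, Tc = 0.6717, 2.01599
    for Z, Tbkt in [(5, 1.645), (9, 1.829), (13, 1.899)]:
        delta = 1.0 - Tbkt / Tc
        print(Z, Z**(1.0 / nu) * delta)   # ~2.02, 2.44, 2.64: slowly increasing with Z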
We also estimate

X_BKT = lim_Z→∞ Z^1/ν δ_BKT = 3.2(1),

by extrapolating the available data for the product Z^1/ν δ_BKT using the ansatz

Z^1/ν δ_BKT = X_BKT + c Z^-ω,

see the inset of Fig. <ref>, where ω = 0.785(20) is the leading scaling-correction exponent of the 3D XY universality class. Finally, we check the TFSS R_Z ≈ R(X) with X = Z^1/ν δ around T_c, in the planar thermodynamic limit, i.e. when ξ, Z ≪ L. As argued in Sec. <ref>, the scaling function R(X) is expected to have an essential singularity at X_BKT ≈ 3.2, cf. Eq. (<ref>). In Fig. <ref> we show data of R_Z around T_c versus X ≡ δ Z^1/ν. They support the TFSS behavior of R_Z. Scaling corrections are expected to decrease as Z^-ω. They appear significantly larger for X > 0, when approaching the singularity at X_BKT. By extrapolating the available data at T_c using R_Z(Z, T_c) = R_Z^* + c Z^-ω (see the inset of Fig. <ref>), we estimate R_Z^* = 0.372(3) for the universal large-Z ratio R_Z ≡ ξ/Z characterizing the TFSS of the critical planar correlation length. An analogous scaling behavior is expected for the planar susceptibility defined as in Eq. (<ref>). The data shown in Fig. <ref> nicely support the corresponding TFSS (<ref>).

§ SUMMARY

We have studied the phase-coherence properties of Bose gases confined within slab-like boxes of size L^2 × Z with Z ≪ L, at the 3D BEC transition temperature T_c and at lower temperatures. Unlike systems confined within cubic-like geometries, i.e. boxes with L ∼ Z, the low-temperature behavior of gases confined within slab geometries is also characterized by the possibility of undergoing a finite-temperature quasi-2D BKT transition at T_BKT < T_c, with T_BKT depending on the thickness Z. Below T_BKT the planar one-particle correlations decay algebraically, as predicted by the QLRO of the 2D spin-wave theory. Therefore, Bose gases in slab geometries experience a dimensional crossover with decreasing T, from 3D behaviors for T ≳ T_c to a quasi-2D critical behavior for T ≲ T_BKT. However, in the limit of large thickness Z, the quasi-2D BKT transition temperature approaches that of the 3D BEC transition, i.e. T_BKT → T_c for Z → ∞. The interplay of 3D and quasi-2D critical modes can be described by the TFSS limit for systems in slab geometries: Z → ∞ and T → T_c keeping the product (T - T_c) Z^1/ν fixed (the planar sizes are assumed to be infinite), where ν is the correlation-length exponent at the 3D BEC transition. The corresponding TFSS functions present an essential singularity due to the quasi-2D BKT transition below T_c. A similar TTSS behavior is also put forward in the case where the particles are trapped by a transverse harmonic potential, in the limit of large transverse trap size ℓ. In the TTSS framework the length scale ξ_t = ℓ^θ, where θ = 2ν/(1+2ν) = 0.57327(4), plays the same role as the transverse size Z of the TFSS.
We argue that the off-equilibrium behavior in the limit of large t_s can be described by an off-equilibrium FSS theory for bosonic gases confined within slab geometries, extending the TFSS of the equilibrium properties. To provide evidence of the dimensional-crossover scenario in interacting bosonic gases, we present a numerical study of the BH model (<ref>) in anisotropic slab-like lattices L^2 × Z with Z ≪ L. With decreasing T from the high-temperature normal phase, we first observe a quasi-BEC transition where the critical length scale ξ gets large, but it does not diverge, being limited by ξ ∼ Z (keeping Z fixed). Then a BKT transition occurs to a QLRO phase, where the system develops planar critical correlations essentially described by the 2D Gaussian spin-wave theory. We show that the 3D→2D dimensional-crossover scenario explains the apparently complex dependence on T, Z, and L of the one-particle correlation functions and the corresponding length scale, when decreasing the temperature from T > T_c to T < T_BKT < T_c. The results turn out to be consistent with the predictions of the TFSS at the BEC transition. The dimensional-crossover scenario is expected to apply to any quantum gas of interacting bosonic particles constrained in boxes or lattice structures with slab geometries. Analogous arguments apply to ^4He systems in film geometries <cit.>, and to 3D XY spin models defined on lattices with slab geometries <cit.>. We conclude by stressing that the above issues related to the dimensional-crossover scenario are of experimental relevance, since cold-atom systems confined within slab geometries can be effectively realized, see e.g. Refs. <cit.>. These experimental setups offer the possibility of investigating the dependence of the phase-coherence properties on the geometry of the cold-atom system. Our study provides a framework to interpret the experimental or numerical data related to the 3D→2D dimensional crossover in Bose gases confined within slab geometries, and in particular their complicated dependence on the thickness Z. CWK-02 E. A. Cornell and C. E. Wieman, Nobel Lecture: Bose-Einstein condensation in a dilute gas, the first 70 years and some recent experiments, Rev. Mod. Phys. 74, 875 (2002); W. Ketterle, Nobel lecture: When atoms behave as waves: Bose-Einstein condensation and the atom laser, Rev. Mod. Phys. 74, 1131 (2002). Andrews-etal-97 M. R. Andrews, C. G. Townsend, H.-J. Miesner, D. S. Durfee, D. M. Kurn, and W. Ketterle, Observation of Interference Between Two Bose Condensates, Science 275, 637 (1997). Stenger-etal-99 J. Stenger, S. Inouye, A. P. Chikkatur, D. M. Stamper-Kurn, D. E. Pritchard, and W. Ketterle, Bragg Spectroscopy of a Bose-Einstein Condensate, Phys. Rev. Lett. 82, 4569 (1999). Hagley-etal-99 E. W. Hagley, L. Deng, M. Kozuma, M. Trippenbach, Y. B. Band, M. Edwards, M. Doery, P. S. Julienne, K. Helmerson, S. L. Rolston, and W. D. Phillips, Measurement of the coherence of a Bose-Einstein condensate, Phys. Rev. Lett. 83, 3112 (1999). BHE-00 I. Bloch, T. W. Hänsch, and T. Esslinger, Measurement of the spatial coherence of a trapped Bose gas at the phase transition, Nature 403, 166 (2000). Dettmer-etal-01 S. Dettmer, D. Hellweg, P. Ryytty, J. J. Arlt, W. Ertmer, K. Sengstock, D. S. Petrov, G. V. Shlyapnikov, H. Kreutzmann, L. Santos, and M. Lewenstein, Observation of Phase Fluctuations in elongated Bose-Einstein Condensates, Phys. Rev. Lett. 87, 160406 (2001). Hellweg-etal-02 D. Hellweg, S. Dettmer, P. Ryytty, J. J. Arlt, W. Ertmer, K. Sengstock, D. S. Petrov, G. V.
Shlyapnikov, H. Kreutzmann, L. Santos, and M. Lewenstein, Phase Fluctuations in Bose-Einstein Condensates, Appl. Phys. B 73, 781 (2001). Hellweg-etal-03 D. Hellweg, L. Cacciapuoti, M. Kottke, T. Schulte, K. Sengstock, W. Ertmer, and J. J. Arlt, Measurement of the Spatial Correlation Function of Phase Fluctuating Bose-Einstein Condensates, Phys. Rev. Lett. 91, 010406 (2003).Ritter-etal-07 S. Ritter, A. Öttl, T. Donner, T. Bourdel, M. Köhl, and T. Esslinger, Observing the Formation of Long-Range Order during Bose-Einstein Condensation, Phys. Rev. Lett. 98, 090402 (2007).BDZ-08 I. Bloch, J. Dalibard, and W. Zwerger, Many-body physics with ultracold gases, Rev. Mod. Phys. 80, 885 (2008).DRBOKE-07 T. Donner, S. Ritter, T. Bourdel, A. Öttl, M. Köhl, and T. Esslinger, Critical behavior of a trapped interacting Bose gas, Science 315, 1556 (2007).DZZH-07 R.B. Diener, Q. Zhou, H. Zhai, and T.L. Ho, Criterion for Bosonic Superfluidity in an Optical Lattice, Phys. Rev. Lett. 98, 180404 (2007).BB-09 A. Bezett and P.B. Blakie, Critical properties of a trapped interacting Bose gas, Phys. Rev. A 79, 033611 (2009).CV-09 M. Campostrini and E. Vicari, Critical behavior and scaling in trapped systems, Phys. Rev. Lett. 102, 240601 (2009); (E) 103, 269901 (2009); M. Campostrini and E. Vicari, Trap-size scaling in confined particle systems at quantum transitions, Phys. Rev. A 81, 023606 (2010).ZKKT-09 Q. Zhou, Y. Kato, N. Kawashima, and N. Trivedi, Direct Mapping of the Finite Temperature Phase Diagram of Strongly Correlated Quantum Models, Phys. Rev. Lett. 103, 085701 (2009).Trotzky-etal-10 S. Trotzky, L. Pollet, F. Gerbier, U. Schnorrberger, I. Bloch, N.V. Prokofev, B. Svistunov, and M. Troyer, Suppression of the critical temperature for superfluidity near the Mott transition, Nat. Phys. 6, 998 (2010).HZ-10 T.-L. Ho and Q. Zhou, Obtaining the phase diagram and thermodynamic quantities of bulk systems from the densities of trapped gases, Nat. Phys. 6, 131 (2010).PPS-10 L. Pollet, N.V. Prokof'ev, and B.V. Svistunov,Criticality in Trapped Atomic Systems, Phys. Rev. Lett. 104, 245705 (2010).NNCS-10 S. Nascimbene, N. Nayon, F. Chevy, and C. Salomon, The equation of state of ultracold Bose and Fermi gases: a few examples, New J. Phys. 12, 103026 (2010).ZKKT-10 Q. Zhou, Y. Kato, N. Kawashima, and N. Trivedi, Direct Mapping of the Finite Temperature Phase Diagram of Strongly Correlated Quantum Models, Phys. Rev. Lett. 105, 199601 (2010).QSS-10 S.L.A. de Queiroz, R.R. dos Santos, and R.B. Stinchcombe, Finite-size scaling behavior in trapped systems, Phys. Rev. E 81, 051122 (2010). FCMCW-11 S. Fang, C-M. Chung, P-N. Ma, P. Chen, and D-W. Wang, Quantum criticality from in situ density imaging, Phys. Rev. A 83, 031605(R) (2011). HM-11 K. R. A. Hazzard and E. J. Mueller,Techniques to measure quantum criticality in cold atoms, Phys. Rev. A 84, 013604 (2011).Pollet-12 L. Pollet, Recent developments in quantum Monte Carlo simulations with applications for cold gases, Rep. Prog. Phys. 75, 094501 (2012).CR-12 J. Carrasquilla and M. Rigol, Superfluid to normal phase transition in strongly correlated bosons in two and three dimensions, Phys. Rev. A 86, 043629 (2012). CTV-13 G. Ceccarelli, C. Torrero, and E. Vicari, Critical parameters from trap-size scaling in trapped particle systems, Phys. Rev. B 87 024513 (2013).CN-14 G. Ceccarelli and J. Nespolo, Universal scaling of three-dimensional bosonic gases in a trapping potential, Phys. Rev. B 89, 054504 (2014).CCBDWNDB-14 L. Corman, L. Chomaz, T. Bienaimé, R. Desbuquois, C. 
Wettenberg, S. Nascimbene, J. Dalibard, and J. Beugnon,Quench-induced supercurrents in an annular Bose gas, Phys. Rev. Lett. 113, 135302 (2014).CNPV-15 G. Ceccarelli, J. Nespolo, A. Pelissetto, and E. Vicari, Bose-Einstein condensation and critical behavior of two-component bosonic gases, Phys. Rev. A 92, 043613 (2015); Phase diagram and critical behaviors of mixtures of Bose gases, Phys. Rev. A 93, 033647 (2016). NGSH-15 N. Navon, A. L. Gaunt, R. P. Smith, and Z. Hadzibabic, Critical Dynamics of Spontaneous Symmetry Breaking in a Homogeneous Bose gas, Science 347, 167 (2015). CCBDWNBD-15 L. Chomaz, L. Corman, T. Bienaimé, R. Desbuquois, C. Wettenberg, S. Nascimbene, J. Beugnon, and J. Dalibard, Emergence of coherence via transverse condensation in a uniform quasi-two-dimensional Bose gas, Nat. Commun. 6, 6162 (2015).DV-17 F. Delfino and E. Vicari, Critical behavior at the spatial boundary of a trapped inhomogeneous Bose-Einstein condensate, Phys. Rev. A 95, 053606 (2017). BN-17 J. Beugnon and N. Navon, Exploring the Kibble-Zurek mechanism with homogeneous Bose gases, J. Phys. B: At. Mol. Opt. Phys. 50, 022002 (2017). CDMV-16 G. Ceccarelli, F. Delfino, M. Mesiti, and E. Vicari, Shape dependence and anisotropic finite-size scaling of the phase coherence of three-dimensional Bose-Einstein condensed gases, Phys. Rev. A94, 053609 (2016).PSW-01 D. S. Petrov, G. V. Shlyapnikov, and J. T. M. Walraven, Phase-fluctuating 3D Bose-Einstein condensates in elongated traps, Phys. Rev. Lett. 87, 050404 (2001). Mathey-etal-10L. Mathey, A. Ramanathan, K. C. Wright, S. R. Muniz, W. D. Phillips, and C. W. Clark, Phase fluctuations in anisotropic Bose-Einstein condensates: From cigars to rings, Phys. Rev. A 82, 033607 (2010).GCP-12 D. Gallucci, S. P. Cockburn, and N. P. Proukakis, Phase coherence in quasicondensate experiments: An ab initio analysis via the stochastic Gross-Pitaevskii equation, Phys. Rev. A 86, 013627 (2012).RMHDTLK-13 W. RuGWay, A.G. Manning, S.S. Hodgman, R.G. Dall, A.G. Truscott, T. Lamberton, and K.V. Kheruntsyan, Observation of Transverse Bose-Einstein Condensation via Hanbury Brown-Twiss Correlations, Phys. Rev. Lett. 111, 093601 (2013).KT-73 J. M. Kosterlitz and D. J. Thouless, Ordering, metastability and phase transitions in two-dimensional systems, J. Phys. C: Solid State 6, 1181 (1973)B-72 V. L. Berezinskii, Destruction of Long-range Order in One-dimensional and Two-dimensional Systems having a Continuous Symmetry Group I. Classical Systems, Zh. Eksp. Theor. Fiz. 59, 907 (1970) [Sov. Phys. JETP 32, 493 (1971)].Kosterlitz-74 J. M. Kosterlitz,The critical properties of the two-dimensional xy model, J. Phys. C 7, 1046 (1974).JKKN-77 J. V. José, L. P. Kadanoff, S. Kirkpatrick, and D. R. Nelson, Renormalization, vortices, and symmetry-breaking perturbations in the two-dimensional planar model, Phys. Rev. B 16, 1217 (1977).HKCBD-06 Z. Hadzibabic, P. Krüger, M. Cheneau, B. Battelier, and J. Dalibard, Berezinskii-Kosterlitz-Thouless crossover in a trapped atomic gas, Nature 441, 1118 (2006).KHD-07 P. Krüger, Z. Hadzibabic, and J. Dalibard, Critical Point of an Interacting Two-Dimensional Atomic Bose Gas, Phys. Rev. Lett. 99, 040402 (2007).HKCRD-08 Z. Hadzibabic, P. Krüger, M. Cheneau, S. P. Rath, and J. Dalibard, The trapped two-dimensional Bose gas: from Bose-Einstein condensation to Berezinskii-Kosterlitz-Thouless physics, New J. Phys. 10, 045006 (2008).CRRHP-09 P. Cladé, C. Ryu, A. Ramanathan, K. Helmerson, and W. D. 
Phillips, Observation of a 2D Bose Gas: From Thermal to Quasicondensate to Superfluid, Phys. Rev. Lett. 102, 170401 (2009).HZGC-10 C.-L. Hung, X. Zhang, N. Gemelke, and C. Chin, Observation of scale invariance and universality in two-dimensional Bose gases, Nature 470, 236 (2011).Pl-etal-11 T. Plisson, B. Allard, M. Holzmann, G. Salomon, A. Aspect, P. Bouyer, and T. Bourdel, Coherence properties of a two-dimensional trapped Bose gas around the superfluid transition, Phys. Rev. A 84, 061606(R) (2011).Desb-etal-12 R. Desbuquois, L. Chomaz, T. Ysefsah, J. Léonard, J. Beugnon, C. Weitenberg, and J. Dalibard, Superfluid behaviour of a two-dimensional Bose gas, Nat. Phys. 8, 645 (2012).Barber-83 M. N. Barber, Finite-size scaling in Phase Transitions and Critical Phenomena, Vol. 8, eds. C. Domb abd J. L. Lebowitz (Academic Press, 1983).Privman-90 Finite Size Scaling and Numerical Simulations of Statistical Systems,ed. V. Privman (World Scientific, 1990). GKMD-08 F. M. Gasparini, M. O. Kimball, K. P. Mooney, and M. Diaz-Avilla, Finite-size scaling of ^4He at the superfluid transition, Rev. Mod. Phys. 80, 1009 (2008). SM-95 N. Schultka and E. Manousakis, Crossover from two- to three-dimensional behavior in superfluids, Phys. Rev. B 51, 11712 (1995).SM-96 N. Schultka and E. Manousakis, Scaling of superfluid density in superfluid films, J. Low Temp. Phys. 105, 3 (1996).SM-97 N. Schultka and E. Manousakis, Boundary effects in superfluid films, J. Low Temp. Phys. 109, 733 (1997).Hasenbusch-09 M. Hasenbusch, Kosterlitz-Thouless transition in thin films: A Monte Carlo study of three-dimensional lattice models, J. Stat. Mech.: Theory Expt.P02005 (2009). Kibble-76 T. W. B. Kibble, Topology of cosmic domains and strings, J. Phys. A 9, 1387 (1976).Zurek-85 W. H. Zurek, Cosmological experiments in superfluid helium?, Nature 317, 505 (1985). FWGF-89 M.P.A. Fisher, P.B. Weichman, G. Grinstein, and D.S. Fisher, Boson localization and the superfluid-insulator transition, Phys. Rev. B 40, 546 (1989).JBCGZ-98 D. Jaksch, C. Bruder, J.I. Cirac, C.W. Gardiner, and P. Zoller, Cold Bosonic Atoms in Optical Lattices, Phys. Rev. Lett. 81, 3108 (1998).CPS-07 B. Capogrosso-Sansone, N.V. Prokof'ev, and B.V. Svistunov, Phase diagram and thermodynamics of the three-dimensional Bose-Hubbard model, Phys. Rev. B 75, 134302 (2007).PV-02 A. Pelissetto and E. Vicari, Critical Phenomena and Renormalization Group Theory, Phys. Rep. 368, 549(2002).Lipa-etal-96 J.A. Lipa, D.R. Swanson, J.A. Nissen, T.C.P. Chui,and U.E. Israelsson,Heat Capacity and Thermal Relaxation of Bulk Helium very near the Lambda Point, Phys. Rev. Lett. 76, 944 (1996); J.A. Lipa, J.A. Nissen, D.A. Stricker, D.R. Swanson, and T.C.P. Chui, Specific heat of liquid helium in zero gravity very near the lambda point, Phys.Rev.B68, 174518 (2003).CHPV-06 M. Campostrini, M. Hasenbusch, A. Pelissetto, and E. Vicari, Theoretical estimates of the critical exponents of the superfluid transition in ^4He by lattice methods, Phys. Rev. B 74, 144506 (2006).BMPS-06 E. Burovski, J. Machta, N. Prokof'ev, and B. Svistunov, High-precision measurement of the thermal exponent for the three-dimensionalXY universality class, Phys. Rev. B 74, 132502 (2006).GZ-98 R. Guida and J. Zinn-Justin,Critical exponents of the N-vector model, J. Phys. A 31, 8103 (1998).KPSV-16 F. Kos, D. Poland, D. Simmons-Duffin, and A. Vichi Precision Islands in the Ising and O(N) Models, JHEP 08 (2016) 036. KP-17 M. V. Kompaniets and E. 
Panzer, Minimally subtracted six loop renormalization ofO(n)-symmetric φ^4 theory and critical exponents, arXiv:1705.06483.FBJ-73 M. E. Fisher, M. N. Barber, and D. Jasnow,Helicity Modulus, Superfluidity, and Scaling in Isotropic Systems, Phys. Rev. A 8, 1111 (1973).PC-87 E. L. Pollock and D. M. Ceperley,Path-integral computation of superfluid densities, Phys. Rev. B 36, 8343 (1987). MW-66 N.D. Mermin and H. Wagner, Absence of Ferromagnetism or Antiferromagnetism in One- or Two-Dimensional Isotropic Heisenberg Model, Phys. Rev. Lett. 17, 1133 (1966).H-67 P. C. Hohenberg,Existence of Long-Range Order in One and Two Dimensions, Phys. Rev. 158, 383 (1967). CNPV-13 G. Ceccarelli, J. Nespolo, A. Pelissetto, and E. Vicari, Universal behavior of two-dimensional bosonic gases atBerezinskii-Kosterlitz-Thouless transitions, Phys. Rev.B88, 024517 (2013). HK-97 K. Harada and N. Kawashima, Universal jump in the helicity modulus of the two-dimensional quantum XY model, Phys. Rev. B 55, R11949 (1997).Ding-92 H.-Q. Ding,Phase transition and thermodynamics of quantum XY model in two dimensions, Phys. Rev. B 45, 230 (1992).DM-90 H.-Q. Ding and M.S. Makivić, Kosterlitz-Thouless transition in the two-dimensional quantum XY model, Phys. Rev. B 42, 6827 (1990). Fisher-71 M. E. Fisher, Critical Phenomena, Proceedings of the International School of Physics Enrico Fermi, edited by M. S. Green (Academic, New York, 1971).CF-76 T. W. Capehart and M. E. Fisher, Susceptibility scaling functions for ferromagnetic Ising films, Phys. Rev. B 13, 5021 (1976).It-Dr-book C. Itzykson and J. M. Drouffe, Statistical Field Theory (Cambridge Univ. Press, Cambridge, 1989). CFT-book P. Di Francesco, P. Mathieu, and D. Senechal, Conformal Field Theory (Springer Verlag, New York, 1997). Hasenbusch-05 M. Hasenbusch, The two dimensional XY model at the transition temperature: a high precision numerical study, J. Phys. A 38, 5869 (2005).Gradstein I. S. Gradshteyn and I. M. Ryzhik,Table ofIntegrals, Series, and Products,edited by A. Jeffrey and D. Zwillinger, 7th edition(Academic Press, San Diego, 2007).PV-13 A. Pelissetto and E. Vicari, Renormalization-group flow and asymptotic behaviors at the Berezinskii-Kosterlitz-Thouless transitions, Phys. Rev. E 87, 032105 (2013). HPV-05 M. Hasenbusch, A. Pelissetto, and E. Vicari, Multicritical behavior in the fully frustrated XY model and related systems, J. Stat. Mech.: Theory Expt.P12002 (2005). AGG-80 D. J. Amit, Y. Y. Goldschmidt, and G. Grinstein, Renormalisation group analysis of the phase transition in the 2D Coulomb gas, Sine-Gordon theory and XY-model, J. Phys. A 13, 585 (1980).HMP-94 M. Hasenbusch, M. Marcu, and K. Pinn, High precision renormalization group study of the roughening transition, Physica A 208, 124 (1994).HP-97 M. Hasenbusch and K. Pinn, Computing the roughening transition of Ising and solid-on-solid models by BCSOS model matching, J. Phys. A 30, 63 (1997).Balog-01 J. Balog,Kosterlitz-Thouless theory and lattice artifacts, J. Phys. A 34, 5237 (2001).Hasenbusch-08 M. Hasenbusch, The Binder cumulant at the Kosterlitz-Thouless transition, J. Stat. Mech.: Theory Expt. P08003 (2008). Hasenbusch-12M. Hasenbusch,Thermodynamic Casimir effect: Universality and corrections to scaling, Phys. Rev. B 85, 174421 (2012).Binder-87 K. Binder, Theory of first-order phase transitions, Rep. Prog. Phys. 50, 783 (1987).Bray-94 A.J. Bray,Theory of phase-ordering kinetics, Adv. Phys. 43, 357 (1994).CG-05 P. Calabrese and A. Gambassi, Ageing Properties of Critical Systems, J. Phys. 
A 38, R133 (2005). GZHF-10 S. Gong, F. Zhong, X. Huang, and S. Fan, Finite-time scaling via linear driving, New J. Phys. 12, 043036 (2010). PSSV-11 A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Colloquium: Nonequilibrium dynamics of closed interacting quantum systems, Rev. Mod. Phys. 83, 863 (2011). CEGS-12 A. Chandran, A. Erez, S. S. Gubser, and S. L. Sondhi, Kibble-Zurek problem: Universality and the scaling limit, Phys. Rev. B 86, 064304 (2012). Braun-etal-15 S. Braun, M. Friesdorf, S. S. Hodgman, M. Schreiber, J. P. Ronzheimer, A. Riera, M. del Rey, I. Bloch, J. Eisert, and U. Schneider, Emergence of coherence and the dynamics of quantum phase transitions, PNAS 112, 3641 (2015). Biroli-15 G. Biroli, Slow Relaxations and Non-Equilibrium Dynamics in Classical and Quantum Systems, arXiv:1507.05858. PV-16 A. Pelissetto and E. Vicari, Off-equilibrium scaling behaviors driven by time-dependent external fields in three-dimensional O(N) vector models, Phys. Rev. E 93, 032141 (2016). DWGGP-16 M. J. Davis, T. M. Wright, T. Gasenzer, S. A. Gardiner, and N. P. Proukakis, Formation of Bose-Einstein condensates, arXiv:1601.06197. ARBBHC-16 M. Anquez, B. A. Robbins, H. M. Bharath, M. J. Boguslawski, T. M. Hoang, and M. S. Chapman, Kibble-Zurek Mechanism in a Spin-1 Bose-Einstein Condensate, Phys. Rev. Lett. 116, 155301 (2016). PV-17 A. Pelissetto and E. Vicari, Dynamic off-equilibrium transition in systems slowly driven across thermal first-order transitions, Phys. Rev. Lett. 118, 030602 (2017). HH-77 P. C. Hohenberg and B. I. Halperin, Theory of dynamic critical phenomena, Rev. Mod. Phys. 49, 435 (1977). FM-06 R. Folk and G. Moser, Critical dynamics: A field-theoretical approach, J. Phys. A 39, R207 (2006). SHLVS-06 L. E. Sadler, J. M. Higbie, S. R. Leslie, M. Vengalattore, and D. M. Stamper-Kurn, Spontaneous symmetry breaking in a quenched ferromagnetic spinor Bose-Einstein condensate, Nature 443, 312 (2006). WNSBD-08 C. N. Weiler, T. W. Neely, D. R. Scherer, A. S. Bradley, M. J. Davis, and B. P. Anderson, Spontaneous vortices in the formation of Bose-Einstein condensates, Nature 455, 948 (2008). LDSDF-13 G. Lamporesi, S. Donadello, S. Serafini, F. Dalfovo, and G. Ferrari, Spontaneous creation of Kibble-Zurek solitons in a Bose-Einstein condensate, Nat. Phys. 9, 656 (2013). vDK-97 N. J. van Druten and W. Ketterle, Two-step condensation of the ideal Bose gas in highly anisotropic traps, Phys. Rev. Lett. 79, 549 (1997). AJKB-11 J. Armjio, T. Jacqmin, K. Kheruntsyan, and I. Bouchoule, Mapping out the quasicondensate transition through the dimensional crossover from one to three dimensions, Phys. Rev. A 83, 021605 (2011). SK-91 A. W. Sandvik and J. Kurlijärvi, Quantum Monte Carlo simulation method for spin systems, Phys. Rev. B 43, 5950 (1991). SS-02 O. F. Syljuåsen and A. W. Sandvik, Quantum Monte Carlo with directed loops, Phys. Rev. E 66, 046701 (2002). DT-01 A. Dorneich and M. Troyer, Accessing the dynamics of large many-particle systems using the stochastic series expansion, Phys. Rev. E 64, 066701 (2001).
The invariably generating graph of the alternating and symmetric groups
=======================================================================

Daniele Garzoni
Università degli Studi di Padova, Dipartimento di Matematica “Tullio Levi-Civita”
[email protected]
Received 26th June 2017 / Accepted 25th January 2018

Given a finite group G, the invariably generating graph of G is defined as the undirected graph in which the vertices are the nontrivial conjugacy classes of G, and two classes are adjacent if and only if they invariably generate G. In this paper we study this object for alternating and symmetric groups. The main result of the paper states that, if we remove the isolated vertices from the graph, the resulting graph is connected and has diameter at most 6.

§ INTRODUCTION

Given a finite group G and a subset {x_1, …, x_t} of G, we say that {x_1, …, x_t} invariably generates G if ⟨x_1^g_1, …, x_t^g_t⟩ = G for every g_1, …, g_t ∈ G. This concept was introduced by Dixon with motivations from computational Galois theory: see <cit.> for details. Note that invariable generation can be thought of as a property of conjugacy classes, rather than of individual elements.

§.§ The invariably generating graph

Given a finite group G, we define the invariably generating graph Λ(G) of G as follows. The vertices are the conjugacy classes of G different from {1}, and two vertices x^G and y^G are adjacent if and only if {x,y} invariably generates G. The purpose of this paper is to initiate the study of this object for finite (almost) simple groups, and more precisely, for alternating and symmetric groups. It was proved by Kantor–Lubotzky–Shalev <cit.>, and by Guralnick–Malle <cit.> independently, that finite simple groups are invariably generated by two elements, so that in this case Λ(G) is nonempty. It is also known that S_n is invariably generated by two elements if and only if n ≠ 6 (see <cit.>). Set

𝒫 = {(q^d-1)/(q-1) : q is a prime power and d ≥ 2}.

The first two results of the paper are the following.

Assume n ≥ 5, and let G ∈ {A_n, S_n}. Then, Λ(G) does not have isolated vertices if and only if G = A_n and n is a prime satisfying n ∉ 𝒫 ∪ {11,23} and n ≡ -1 mod 12.

* Let (G_i) be a sequence of alternating or symmetric groups such that |G_i| → ∞. Assume that for every i, G_i is not alternating of prime degree. Then the number of isolated vertices of Λ(G_i) tends to infinity.
* Assume n is a prime not contained in 𝒫 ∪ {11,23}. Then the number of isolated vertices of Λ(A_n) is at most 2.

In Theorem <ref> we assumed n ≥ 5. We will keep this assumption throughout the paper; see Lemma <ref> for the remaining cases. For the sake of clarity, we mention that Λ(A_11) and Λ(A_23) have 5 and 6 isolated vertices, respectively (Lemma <ref>). The only case not addressed in Theorem <ref> is G = A_n with n ∈ 𝒫 prime. In Remark <ref> we will obtain a partial result, dealing with the case n = (q^d-1)/(q-1) with d → ∞. Once Theorems <ref> and <ref> are proved, one may ask what happens if the isolated vertices are removed from Λ(G). For this purpose, we define a graph Ξ(G), which is obtained from Λ(G) by removing the isolated vertices. The next result states that, except in the case G = S_6, this graph is connected with bounded diameter (we already recalled that S_6 is not invariably generated by two elements, hence Ξ(S_6) is the null graph).

Assume n ≥ 5 and let G ∈ {A_n, S_n}, with G ≠ S_6. Then, Ξ(G) is connected with diameter at most 6.

In many cases we prove better estimates on the diameter. Assume n ≥ 5.
If G = S_n and n is odd, or if G = A_n and n is even, then d(Ξ(G)) ≤ 4. If G = A_n and n ∉ 𝒫 is prime, then d(Ξ(G)) ≤ 3.

The proofs of Theorems <ref> and <ref> rely on some recent results, proved in <cit.> and <cit.>, which classify the primitive subgroups of S_n containing elements having certain cycle types. These results depend on the Classification of Finite Simple Groups. The upper bound in Theorem <ref> is attained, since d(Ξ(S_8)) = 6 (Lemma <ref>). However, we conjecture that this can happen only for finitely many groups.

Let G ∈ {A_n, S_n}. If n is sufficiently large then d(Ξ(G)) ≤ 4.

In Section <ref> we obtain a partial result towards a proof of this conjecture. It is interesting to observe that in this partial result we use the Prime Number Theorem, but we do not use the CFSG. See also Lemma <ref>, which establishes that d(Ξ(S_n)) ≥ 4 for all primes n ≥ 7, so the bound in Conjecture <ref>, if true, is attained infinitely often. Theorem <ref> suggests a natural question for all finite simple groups.

Let G be a finite simple group. Is the graph Ξ(G) connected?

§.§ Some context

The invariably generating graph is the analogue, for invariable generation, of the so-called generating graph Γ(G) of a finite group G. This is defined as follows. The vertices are the nonidentity elements of G, and two vertices x and y are adjacent if and only if ⟨x,y⟩ = G. Many properties of generation of a finite simple group by two elements can be stated in terms of the generating graph. Guralnick and Kantor <cit.> proved that if G is a finite simple group, then for every 1 ≠ x ∈ G there exists y ∈ G such that ⟨x,y⟩ = G. Later, Breuer, Guralnick and Kantor <cit.> showed that if 1 ≠ x_1, x_2 ∈ G, then there exists y ∈ G such that ⟨x_1, y⟩ = ⟨x_2, y⟩ = G. These properties can be stated respectively as follows: Γ(G) has no isolated vertices, and it is connected with diameter at most 2. Theorems <ref> and <ref> say that, for alternating and symmetric groups, the invariably generating graph has quite different properties: it usually has isolated vertices, and the number of isolated vertices usually grows as the order of the group grows. On the other hand, Theorem <ref> says that, if we remove the isolated vertices, we obtain a graph which in some sense shares similarities with the generating graph Γ(G).

§.§ An alternative definition

In light of Theorem <ref>, one could ask how the proportion of isolated vertices of Λ(G) behaves as |G| tends to infinity. The elementary approach used in the proof of Theorem <ref> is not sufficient to address this problem. Here, however, one comment is in order. We have chosen the vertices of Λ(G) to be the nontrivial conjugacy classes of G. One could define a graph Λ_e(G) in which the vertices are the nontrivial elements of G, and two vertices are adjacent if and only if they invariably generate G. Of course, the property of connectedness, and the value of the diameter, are the same in the two graphs. However, when one counts the edges, or the vertices having certain properties, the situation can radically change. Indeed, in Λ_e(G) there is a dependence on the size of the conjugacy classes which does not exist in Λ(G). Probabilistic invariable generation has always been considered in terms of elements (cf. <cit.>, <cit.>, <cit.>, <cit.>). Still, we believe it is worth exploring the problem of counting conjugacy classes, although in this paper we do not address any question of this kind.
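Although the results of this paper are obtained by theoretical arguments, the definition of invariable generation can be tested exhaustively for very small n. The following brute-force Python sketch (ours, purely illustrative; the paper does not rely on computations of this kind) checks whether two elements of S_n invariably generate it, using the elementary observation that {x,y} invariably generates G if and only if ⟨x, y^g⟩ = G for every g ∈ G:

    from itertools import permutations

    n = 5
    G = [tuple(p) for p in permutations(range(n))]   # S_n, with p[i] the image of i
    e = tuple(range(n))

    def mul(p, q):                                   # composition: (p*q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(n))

    def inv(p):
        r = [0] * n
        for i, pi in enumerate(p):
            r[pi] = i
        return tuple(r)

    def conj(x, g):                                  # the conjugate x^g = g^(-1) x g
        return mul(mul(inv(g), x), g)

    def generated(gens):                             # subgroup closure by breadth-first search
        H = {e}
        while True:
            new = {mul(h, g) for h in H for g in gens} - H
            if not new:
                return H
            H |= new

    def invariably_generate(x, y):
        # {x, y} invariably generates G iff <x, y^g> = G for every g in G
        return all(len(generated([x, conj(y, g)])) == len(G) for g in G)

    x = (1, 2, 3, 4, 0)   # a 5-cycle
    y = (1, 0, 2, 3, 4)   # a transposition
    print(invariably_generate(x, y))

For n = 5 this prints True, confirming for instance that the class of 5-cycles and the class of transpositions form an edge of Λ(S_5); this reflects the classical fact that a p-cycle and any transposition generate S_p for p prime.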
§ NOTATION AND PRELIMINARY RESULTS

In this section we fix some notation, and we gather some preliminary lemmas and observations that we will use throughout the paper.

§.§ Notation

The vertices of our graphs are conjugacy classes of alternating and symmetric groups. We will identify conjugacy classes of S_n with their cycle type, i.e., we will represent conjugacy classes of S_n as partitions of n. We now introduce some terminology about partitions.

Let H ≤ S_n, and let 𝔭 be a partition of n. We will say that 𝔭 belongs to H, or that 𝔭 is contained in H, if H contains elements with cycle type 𝔭. If C is the conjugacy class of S_n corresponding to 𝔭, this is equivalent to the condition H ∩ C ≠ ∅. Of course, this condition depends only on the S_n-conjugacy class of the subgroup H. Therefore, in the above terminology we are allowed, if we wish, to replace H by its conjugacy class, and to say that 𝔭 belongs to the S_n-conjugacy class of H. Let 𝔭_1 and 𝔭_2 be two partitions of n. If 𝔭_1 and 𝔭_2 both belong to H, we will say that 𝔭_1 and 𝔭_2 share H.

When 𝔭 ∈ S_i × S_n-i, with 1 ⩽ i < n, we will say that i is a partial sum in 𝔭. This is indeed equivalent to the condition that the integer i can be written as the sum of some parts of 𝔭.

In a partition, i^m will mean m parts of length i. Therefore, (a_1^m_1, a_2^m_2, …, a_t^m_t) will mean m_1 parts of length a_1, m_2 parts of length a_2, …, and m_t parts of length a_t. Occasionally, given a partition 𝔭 and a positive integer i, we will write 𝔭^i to denote “the i-th power of 𝔭”, namely the partition obtained by replacing each part of length ℓ by d parts of length ℓ/d, where d=(ℓ,i). Note that if x ∈ S_n has cycle type 𝔭, then x^i has cycle type 𝔭^i.

Finally, we define 𝒫 = {(q^d-1)/(q-1) : q is a prime power and d ≥ 2}.

§.§ Maximal overgroups of certain elements

Most of the arguments will rely heavily on the knowledge of the maximal overgroups of certain elements in S_n: specifically, cycles, or elements having few orbits in the natural action on n points. The intransitive maximal subgroups are easily determined. For convenience, we now isolate some elementary observations regarding transitive imprimitive subgroups, before moving to the more difficult case of primitive subgroups. We use some of the language introduced in the previous subsection. The following two lemmas are consequences of <cit.>.

Let n be a natural number and let m be a nontrivial divisor of n. The partition (a_1, a_2) belongs to S_m ≀ S_n/m if and only if either m divides a_1 or n/m divides a_1.

If (a_1, a_2) belongs to S_m ≀ S_n/m, the induced permutation on the blocks has at most two cycles. If it has two cycles, then m divides a_1. If it is an (n/m)-cycle, then n/m divides a_1. The converse implication is proved in the same way.

Let n be a natural number and let m be a nontrivial divisor of n. The partition (a_1, a_2, a_3) belongs to S_m ≀ S_n/m if and only if one of the following conditions is satisfied: (a) m divides a_i for every i. (b) n/m divides a_i for every i. (c) there exist 1 ⩽ t < n/m and i ≠ j ∈ {1,2,3} such that a_k=tb_k for k=i,j, with b_i+b_j = m.

Similar to the previous lemma. In case (a), the induced permutation on the blocks has cycle type (a_1/m, a_2/m, a_3/m). In case (b), it is an (n/m)-cycle. In case (c), it has cycle type (t,a_ℓ/m), where ℓ ∈ {1,2,3} ∖ {i,j}.

We now move to primitive subgroups. Our main tool is a theorem which classifies the primitive subgroups of S_n containing a cycle, and which relies on the CFSG. This should be seen as a generalization of a classical theorem of Jordan (see e.g.
<cit.>) stating that there are no proper primitive subgroups of S_n different from A_n containing a cycle of prime length fixing at least 3 points. Since we will apply this result several times, for convenience we report here the statement.

<cit.> Let G be a primitive permutation group of finite degree n, not containing the alternating group A_n. Suppose that G contains a cycle fixing k points, where 0 ⩽ k ⩽ n-2. Then one of the following holds:
* k = 0 and either (a) C_p ⩽ G ⩽ AGL_1(p) with n = p prime, or (b) PGL_d(q) ⩽ G ⩽ PΓL_d(q) with n = (q^d-1)/(q-1) and d ⩾ 2 for some prime power q, or (c) G = PSL_2(11), M_11 or M_23 with n = 11, 11 or 23 respectively.
* k = 1 and either (a) AGL_d(q) ⩽ G ⩽ AΓL_d(q) with n = q^d and d ⩾ 1 for some prime power q, or (b) G = PSL_2(p) or PGL_2(p) with n = p + 1 for some prime p ⩾ 5, or (c) G = M_11, M_12 or M_24 with n = 12, 12 or 24 respectively.
* k = 2 and PGL_2(q) ⩽ G ⩽ PΓL_2(q) with n = q + 1 for some prime power q.

Note that the statement implies that there are no proper primitive subgroups of S_n different from A_n containing a cycle fixing at least 3 points, which indeed generalizes Jordan's theorem. We note the following immediate consequence.

Assume x ∈ S_n is such that a suitable power of x is a nontrivial cycle fixing at least 3 points. Then, x does not lie in proper primitive subgroups of S_n different from A_n.

We also mention that we will make essential use of the main result from <cit.>, which classifies the primitive subgroups of S_n containing an element having at most 4 cycles. We will shortly apply the previous results to certain elements (or partitions) of particular interest to us.

§.§ Conjugacy classes of A_n

If a partition of n is made of distinct odd parts, then the corresponding S_n-conjugacy class splits into two A_n-conjugacy classes (and vice versa), giving rise to two vertices of Λ(A_n). Often, this does not represent a serious change: the following technical lemmas give conditions under which the two vertices may be essentially thought of as a unique vertex.

Let x ∈ A_n, and let H ⩽ S_n be such that H^A_n=H^S_n. Then, H contains elements belonging to x^A_n if and only if it contains elements belonging to (x')^A_n for every x' ∈ x^S_n.

Assume x^z ∈ H with z ∈ A_n, and assume x'=x^zg with g ∈ S_n. Then x'=x^zg ∈ H^g=H^h for some h ∈ A_n by hypothesis, hence (x')^h^-1 ∈ H. This concludes the proof.

Assume x,y ∈ A_n are such that (x')^A_n is adjacent to (y')^A_n in Λ(A_n) for any x' ∈ x^S_n and for any y' ∈ y^S_n. Under these assumptions, we say with slight abuse of notation that x^S_n is adjacent to y^S_n in Λ(A_n). This notation will be convenient, as we will represent S_n-conjugacy classes as partitions, and we will be allowed to say that “a partition 𝔮 is adjacent to a partition 𝔭”, rather than “any A_n-conjugacy class of elements with cycle type 𝔮 is adjacent to any A_n-conjugacy class of elements with cycle type 𝔭”. We will now see that in many cases the assumption of Notation <ref> is satisfied.

Let x ∈ A_n, and assume ⟨x⟩^A_n = ⟨x⟩^S_n. Let y ∈ A_n. Then, x^A_n is adjacent in Λ(A_n) to y^A_n if and only if x^S_n and y^S_n are adjacent (in the terminology of Notation <ref>).

Choose x' ∈ x^S_n. We show first that if y^A_n is not adjacent to x^A_n, then y^A_n is not adjacent to (x')^A_n. By assumption we can write x'=(x^i)^z for some integer i and for some z ∈ A_n. Again by assumption, {x^g_1, y^g_2} ⊆ H for some g_1, g_2 ∈ A_n, and for some proper subgroup H of A_n.
Then (x')^z^-1g_1 = (x^i)^g_1 = (x^g_1)^i ∈ H, whence {(x')^z^-1g_1, y^g_2} ⊆ H and y^A_n is not adjacent to (x')^A_n, as required.

Assume now y' ∈ y^S_n. We show that if y^A_n is not adjacent to x^A_n, then (y')^A_n is not adjacent to x^A_n. This will conclude the proof. We may assume y^A_n ≠ (y')^A_n and x^A_n ≠ x^S_n, otherwise the statement is easy. Choose x' ∈ x^S_n ∖ x^A_n. By the previous paragraph, if y^A_n is not adjacent to x^A_n then it is not adjacent to (x')^A_n, so that {x^g_1, y^g_2} ⊆ H for some proper subgroup H of A_n and for some g_1 ∈ S_n ∖ A_n and g_2 ∈ A_n. Then {x, y^g_2g_1^-1} ⊆ H^g_1^-1, and since y^g_2g_1^-1 is an A_n-conjugate of y', the proof is concluded.

We now apply the previous considerations to certain elements and subgroups of S_n.

Let H be a maximal subgroup of S_n which is either intransitive, or transitive and imprimitive. Then H^A_n=H^S_n.

We may assume n ≥ 3. We have that H^A_n=H^S_n if and only if N_S_n(H) ⊈ A_n. In our case, H ⊈ A_n, since every maximal intransitive or imprimitive subgroup of S_n contains transpositions.

Assume x ∈ A_n belongs to no proper primitive subgroup of S_n different from A_n. Let y ∈ A_n. Then, x^A_n and y^A_n are adjacent in Λ(A_n) if and only if x^S_n and y^S_n are adjacent (in the terminology of Notation <ref>).

By assumption, x lies in no proper primitive subgroups different from A_n. Therefore, x^A_n is adjacent to y^A_n if and only if, for every g_1,g_2 ∈ A_n, ⟨x^g_1, y^g_2⟩ is primitive or, equivalently, is not contained in intransitive or imprimitive maximal subgroups. By Lemmas <ref> and <ref>, this condition depends only on the cycle type of the elements, rather than on their A_n-conjugacy class. The lemma follows.

Let 2 ≤ i ≤ n/2 be coprime with n. Then, (i,n-i) does not belong to proper primitive subgroups different from A_n, and does not belong to transitive imprimitive subgroups.

The fact that (i,n-i) does not lie in imprimitive subgroups follows from Lemma <ref>. Regarding primitive subgroups, (i,n-i)^n-i=(i, 1^n-i) ≠ (1^n), and n-i ≥ 3 (the conditions on i imply n ≥ 5), hence the statement follows from Corollary <ref>.

Theorem <ref> can be used to generalize the previous lemma to the case i=1: one just needs to take care of some specific examples of primitive subgroups. The following easy lemma will be used with this purpose.

Let q be a prime power and d be a positive integer.
* An element of AGL_d(q), in the natural action on q^d points, either is a derangement, or fixes a number of points equal to q^s for some 0 ≤ s ≤ d.
* Assume g ∈ PΓL_2(q) fixes at least 3 points in the natural action on q+1 points. Then, g fixes a number of points having the same parity as q+1. Moreover, if g ∈ PGL_2(q) then g=1.

(1) If g ∈ AGL_d(q) fixes some point, we may assume that it fixes 0. Hence, g ∈ GL_d(q). Now just observe that the set of fixed points of an element of GL_d(q) is an 𝐅_q-subspace of 𝐅_q^d.

(2) Consider PΓL_2(q) acting (on the right) on the set Ω of 1-dimensional subspaces of 𝐅_q^2. Write q=p^r with p prime. For ϕ ∈ Gal(𝐅_q/𝐅_p), denote by f_ϕ the permutation of Ω induced by the mapping (λ_1,λ_2) ↦ (λ_1^ϕ, λ_2^ϕ) of 𝐅_q^2. Then, we may express each element g ∈ PΓL_2(q) as g=xf_ϕ, where x ∈ PGL_2(q) and ϕ ∈ Gal(𝐅_q/𝐅_p).

Note that PGL_2(q) is sharply 3-transitive on Ω. Hence, if g=xf_ϕ ∈ PΓL_2(q) fixes at least 3 points, we may assume that it fixes (1,0), (0,1) and (1,1). It follows that x=1 and g=f_ϕ. Then g fixes p^ℓ+1 points, where ℓ is a divisor of r; in particular, it fixes a number of points having the same parity as q+1. Moreover, if g ∈ PGL_2(q) then f_ϕ=1.
The lemma is proved.

Assume n is either an odd prime, or n ≡ 3 mod 4. Assume x ∈ A_n is an n-cycle. Then, ⟨x⟩^A_n = ⟨x⟩^S_n.

If n ≥ 3 is an odd integer, then an n-cycle is normalized by elements having cycle type (1,2^(n-1)/2). If n is an odd prime, then an n-cycle is normalized by an (n-1)-cycle. In particular, if x is as in the statement, then N_S_n(⟨x⟩) ⊈ A_n, from which ⟨x⟩^A_n = ⟨x⟩^S_n.

We deduce another consequence of the previous lemmas.

Assume n ≥ 4, and let x ∈ A_n. Then, for every g ∈ S_n, x^A_n and (x^g)^A_n are not adjacent in Λ(A_n).

By Lemma <ref>, we may assume that x is not contained in intransitive or imprimitive subgroups. It follows that n is prime and x is an n-cycle. The statement follows then from Lemma <ref>.

This lemma suggests a natural question for all finite simple groups. Let G be a finite simple group. Let x ∈ G, and let σ ∈ Aut(G). Is it possible that {x,x^σ} invariably generates G? It is easy to check that the answer is negative also for G=A_6, in which case S_6 has index two in Aut(G). This, together with Lemma <ref>, implies that the question has a negative answer for G=A_n.

§.§ Small degrees

The main theorems are stated for n ≥ 5. For completeness, we address here the cases of degree n ≤ 4. We assume n ≥ 3 in order to avoid trivialities.

Let 3 ≤ n ≤ 4, and assume G ∈ {A_n,S_n}. Then, Λ(G) has isolated vertices if and only if G=S_4. Moreover, Ξ(G) is connected with diameter at most 2.

This is an easy check. The graphs Ξ(A_3), Ξ(S_3) and Ξ(S_4) have diameter 1, while Ξ(A_4) has diameter 2: the two classes of 3-cycles are connected by a path of length 2 passing through the class (2^2). In Λ(S_4), the vertices corresponding to (1^2,2) and (2^2) are isolated.

§ PROOFS

In this section we prove the theorems stated in the introduction.

§.§ Proof of Theorems <ref> and <ref>

We begin with a lemma which proves Theorem <ref>(2) and the “if” part of Theorem <ref>.

Let n ≥ 5 be an odd prime. Then, (1,2^(n-1)/2) and (1,3^(n-1)/3) are isolated in Λ(A_n) (when they make sense and are even permutations). If n ∉ 𝒫 ∪ {11,23}, then there are no other isolated vertices in Λ(A_n).

The two mentioned vertices are not adjacent to (n) (this terminology makes sense by Lemmas <ref>, <ref> and Notation <ref>) because they are contained in AGL_1(n). Therefore, they might be adjacent only to some A_n-conjugacy class with cycle type (a_1, …, a_t), with t ⩾ 3. Every 1 ⩽ i ⩽ (n-1)/2 is a partial sum in (1,2^(n-1)/2), so this vertex is isolated (recall Lemmas <ref> and <ref>), and the values of i that are not partial sums in (1,3^(n-1)/3) are exactly those that satisfy i ≡ 2 mod 3. If a_1, a_2 ≡ 2 mod 3 then a_1+a_2 ≡ 1 mod 3, so one among a_1, a_2 and a_1+a_2 is a partial sum. We conclude that (1,3^(n-1)/3) is indeed isolated.

Assume now n ∉ 𝒫 ∪ {11,23}. Let x^A_n be a vertex of Λ(A_n) different from the two above: we want to show it is not isolated. If x ∉ AGL_1(n), we deduce from Theorem <ref> that x^A_n is adjacent to (n). On the other hand, if x ∈ AGL_1(n), then either it is an n-cycle, or it has cycle type (1,t^(n-1)/t), with t ⩾ 4. As just remarked, a class of n-cycles is not isolated. If t ⩾ 5, by Corollary <ref> x^A_n is adjacent to (2^2,n-4). If t=4, for the same reason x^A_n is adjacent to (3^2,n-6).

For the sake of clarity, we deal with the cases n=11, 23.

The graphs Λ(A_11) and Λ(A_23) have 5 and 6 isolated vertices, respectively.

Recall Theorem <ref>. Let n ∈ {11,23}. Let 1 ≠ x ∈ A_n.
If x does not belong to M_11 (for n=11) or M_23 (for n=23), the same argument as in the previous lemma shows that x^A_n is adjacent to (n) (note that PSL_2(11) can be embedded in M_11). Inspection (using for instance GAP) shows that if n=11 and 1 ≠ x ∈ M_11, then the cycle type of x belongs to

{(1^3,2^4), (1^3,4^2), (1^2,3^3), (2,3,6), (1,2,8)}.

These are not adjacent to (n) because of M_11. By looking at partial sums, we see that these are all isolated. If n=23 and 1 ≠ x ∈ M_23, then either x^A_n is adjacent to one of (2^2,19) and (3^2,17), or the cycle type of x belongs to

{(1^5,3^6), (1^3,5^4), (1^7,2^8), (1,2^2,3^2,6^2), (1^3,2^2,4^4), (1,2,4,8^2)}.

These are all isolated for the same reason as above.

We are now ready to prove Theorems <ref> and <ref>. In the proof, we will use Lemmas <ref>, <ref> and <ref> with no further mention. Recall also Notation <ref>.

We begin with Theorem <ref>. Item (2) follows from Lemma <ref>, hence we focus on item (1). Let G=A_n or S_n, with n nonprime if G=A_n. For every n, we will define I_n, a subset of the set of isolated vertices of Λ(G), whose cardinality goes to infinity as n → ∞. This will prove Theorem <ref>. For every m, denote by p_m^E the set of all partitions of m different from (1^m) which correspond to even permutations.

We first assume that, whenever n is odd, then G=S_n. Define I_n as the set of all partitions of n of the form (1^⌈n/2⌉, 𝔭), where 𝔭 is any partition in p_⌊n/2⌋^E.

Let 𝔮 ∈ I_n. Every 1 ⩽ i ⩽ n/2 is a partial sum in 𝔮, so 𝔮 is not adjacent to classes of elements with at least two cycles. If n is even, then 𝔮 is not adjacent to (n) since n/2 is a partial sum in 𝔮, hence 𝔮 ∈ S_n/2 ≀ S_2. If n is odd, then G=S_n, and 𝔮 is not adjacent to (n) since 𝔮 corresponds to even permutations. Therefore, I_n consists of isolated vertices. Clearly, the size of I_n goes to infinity as n → ∞.

Assume now that G=A_n and n is odd nonprime. Fix n, and let p_n be the smallest prime divisor of n. Define I_n as the set of all partitions of n of the form (1^(n(p_n-1)/p_n), 𝔭), where 𝔭 is any partition in p_(n/p_n)^E.

If 𝔮 ∈ I_n then 𝔮 ∈ S_n/p_n ≀ S_p_n: the first p_n-1 blocks are fixed pointwise. Therefore, 𝔮 is not adjacent to (n). Moreover, every 1 ⩽ i ⩽ n/2 is a partial sum, so 𝔮 is not adjacent to classes of elements having at least 2 cycles. It follows that 𝔮 is isolated. Note now that n/p_n ⩾ √n, hence the size of I_n goes to infinity as n → ∞. This concludes the proof of Theorem <ref>.

We now move to Theorem <ref>. Note that Lemma <ref> proves the “if” part. We now prove the “only if” part. Assume first n is prime and G=A_n. If n ≢ -1 mod 12, by Lemma <ref> there are isolated vertices in Λ(A_n). The cases n=11, 23 have been considered in Lemma <ref>. If n=(q^d-1)/(q-1), let x be any involution lying in a subgroup of S_n conjugate to PGL_d(q). The fact that n is odd implies that every 1 ≤ i ≤ n/2 is a partial sum in (the cycle type of) x, hence x^A_n might only be adjacent to a class of n-cycles. However, this does not happen because of the containment in PGL_d(q). Therefore x^A_n is isolated.

The case G=A_n with n prime is therefore proved. For the remaining cases, we apply what we proved for Theorem <ref>. We define 𝔮=(1^(n-3),3). This partition belongs to I_n, as defined in this proof, hence it is an isolated vertex of Λ(G). This concludes the proof.

The unique case not discussed in Theorem <ref> is the case G=A_n and n ∈ 𝒫 prime. We obtain here the following partial result: if n_i=(q^d_i-1)/(q-1) is prime and d_i → ∞, then the number of isolated vertices of Λ(A_n_i) tends to infinity. Note that since n_i is prime, d_i must be prime.
To establish whether infinitely many such primes (i.e., primes d such that (q^d-1)/(q-1) is prime for some prime power q) do actually exist, however, is a hard open problem in number theory: see for instance <cit.>.

Assume first n=(q^d-1)/(q-1) with q odd (and d odd). The action of PGL_d(q) on the 1-dimensional subspaces of 𝐅_q^d gives an embedding PGL_d(q) < S_n. For every 1 ≤ r < d/2, let x=x(r) be a diagonal matrix of GL_d(q) with 1's and -1's on the diagonal, and assume the number of -1's is r. Let x̄ denote the image of x in PGL_d(q) < S_n. Then, x̄ has (q^r+q^(d-r)-2)/(q-1) fixed points. In particular, any two distinct 1 ≤ r < d/2 give rise to elements of S_n which have a different number of fixed points, and which therefore belong to different S_n-conjugacy classes. It is easy to check that the element x̄ arising in this way belongs to A_n. The number of possibilities for r in order to obtain such an element is (d-1)/2. As remarked in the previous proof, (x̄)^A_n is isolated in Λ(A_n); therefore the number of isolated vertices of Λ(A_n) is at least (d-1)/2.

Assume now q is even. For every 1 ≤ ℓ ≤ d/2, consider a unipotent element x=x(ℓ) of SL_d(q) with ℓ Jordan blocks of size 2, and with the other Jordan blocks of size 1. This is an involution of SL_d(q). Denote again by x̄ the image of x in PGL_d(q) < S_n. Then x̄ has (q^(d-ℓ)-1)/(q-1) fixed points, hence any two distinct 1 ≤ ℓ ≤ d/2 give rise to elements belonging to different S_n-conjugacy classes. Moreover x̄ ∈ A_n (unless (q,d)=(2,2), but recall we are assuming n ≥ 5), and (x̄)^A_n is isolated in Λ(A_n). Hence the number of isolated vertices of Λ(A_n) is at least (d-1)/2.

§.§ Proof of Theorems <ref> and <ref>

In this subsection we prove Theorems <ref> and <ref>. One brief comment about the terminology we will adopt is in order. The proofs will begin with a sentence of the type “Let 𝔭 be a vertex of Ξ(G)”, without any preliminary consideration showing that Ξ(G) is not the null graph. However, along the proof suitable edges will be exhibited in Λ(G), so that the initial choice of 𝔭 will be licit. (In other words, although we will not state it explicitly, in the proofs it will be shown that the groups are invariably generated by two elements.)

We begin by proving Theorem <ref> (in two separate results).

Assume n ≥ 5 is a prime and n ∉ 𝒫. Then Ξ(A_n) is connected and d(Ξ(A_n)) ≤ 3.

Let x ∈ A_n, and assume x^A_n is a vertex of Ξ(A_n). If n ≠ 11, 23, in the proof of Lemma <ref> we showed that x^A_n is adjacent to one of (n), (2^2,n-4) and (3^2,n-6) (recall Lemma <ref>). These vertices are pairwise adjacent by Corollary <ref>, hence we have indeed d(Ξ(A_n)) ≤ 3. If n=11 or 23, we observed in the proof of Lemma <ref> that a class inside M_11 or M_23 is either isolated, or adjacent to one of (2^2,n-4) and (3^2,n-6). Therefore, by Corollary <ref> also in this case d(Ξ(A_n)) ≤ 3.

For later use, we point out that the same estimate holds for the groups A_13 and A_17. Assume n=13=(3^3-1)/(3-1). Inspection shows that every class lying in PGL_3(3) is either isolated or adjacent to (3^2,7), hence d(Ξ(A_13)) ≤ 3 by the same argument as in the proof of the previous theorem. Assume finally n=17=16+1. Here every class lying in PΓL_2(16) is either isolated or adjacent to one of (3^2,11) and (4^2,9). It is easy to deduce that d(Ξ(A_17)) ≤ 3.

Let n ≥ 5 be an integer.
* If n is odd then Ξ(S_n) is connected and d(Ξ(S_n)) ⩽ 4.
* If n is even then Ξ(A_n) is connected and d(Ξ(A_n)) ⩽ 4.

(1) Let 𝔭 be a vertex of Ξ(S_n). Assume 𝔭 is not adjacent to (i,n-i) for any 2 ⩽ i ⩽ n/2 coprime with n.
Then, by Lemma <ref> every such i is a partial sum in 𝔭. Since 𝔭 is a vertex of Ξ(S_n), 𝔭 will be adjacent to some vertex 𝔮. Necessarily, no 2 ⩽ i ⩽ n/2 coprime with n is a partial sum in 𝔮. Hence, again by Lemma <ref>, 𝔮 is adjacent to (i,n-i) for every such i.

Now note that the (i,n-i)'s, with i as above, are pairwise adjacent. From this it follows that, for any fixed 2 ≤ i ≤ n/2 coprime with n, any vertex of Ξ(S_n) has distance at most 2 from (i,n-i). This concludes the proof.

(2) The statement for n=6 can be checked explicitly, hence we assume n ≥ 8. Then the proof is identical to (1). Recall Lemmas <ref> to <ref>, and Notation <ref>.

Now we move to the proof of the general case, i.e., Theorem <ref>. We are left with symmetric groups of even degree and alternating groups of odd degree.

In Theorem <ref>, the strategy was to look for edges with conjugacy classes of elements having two cycles. This approach is not available anymore. Indeed, in alternating groups of odd degree, elements with two cycles do not exist; and in symmetric groups of even degree, such elements belong to A_n, hence one must take care of the parity of elements when dealing with generation. For these elementary reasons, our strategy will be to look for edges with elements having three cycles. This is where we will make use of <cit.> which, as already mentioned in the introduction, classifies the primitive permutation groups having elements with at most four cycles.

Let n ≥ 8 be an even integer. Then Ξ(S_n) is connected and d(Ξ(S_n)) ≤ 6.

We first assume n ≥ 12, and consider the remaining cases at the end of the proof. Let 𝔭 and 𝔮 be two vertices of Ξ(S_n) joined by an edge. One of the two, say 𝔭, must correspond to odd permutations. Assume 𝔭 is not adjacent to (i,n-i) for any 1 ⩽ i ⩽ n/2 coprime with n. By Lemma <ref>, every such i, with i ≠ 1, is a partial sum in 𝔭. We now show that also i=1 is a partial sum in 𝔭. Assume this is not the case. Since 𝔭 is not adjacent to (1,n-1), we deduce from Lemma <ref> and Theorem <ref> that 𝔭 is contained in one of AGL_d(2), PGL_2(p), M_11, M_12, M_24. The last three are excluded because they are subgroups of A_n. Assume 𝔭 ∈ AGL_d(2) or PGL_2(p). Since every 3 ⩽ i ⩽ n/2 coprime with n is a partial sum, 𝔭 must have odd parts: let a be the smallest such part. Since i=1 is not a partial sum, we have a ≥ 3. On the other hand, by assumption 𝔭 corresponds to odd permutations, hence it has even parts. It follows that (1^n) ≠ 𝔭^a fixes a number of points which is greater than or equal to 3, and which is a multiple of a. This contradicts Lemma <ref>. Therefore, i=1 is indeed a partial sum in 𝔭.

Now we divide the cases n ≡ 2 mod 4 and n ≡ 0 mod 4.

Assume n ≡ 2 mod 4. Then, n/2-2 is coprime with n, hence it is a partial sum in 𝔭. Write 𝔭=(a_1, …, a_t), and assume n/2-2 = ∑_i=1^h a_i. If a_k=1 for some k ∈ {1, …, h}, then n/2-3 is a partial sum in 𝔭. Otherwise, n/2-1 is a partial sum in 𝔭. We show that 𝔮 is adjacent in the first case to 𝔞_1=(1,n/2-3,n/2+2), and in the second case to 𝔞_2=(1,n/2-2,n/2+1).

By the considerations above, 𝔮 and 𝔞_i do not share intransitive subgroups. Moreover, 𝔞_1 and 𝔞_2 correspond to odd permutations, and belong to no transitive imprimitive subgroups by Lemma <ref>. Finally, 𝔞_1 and 𝔞_2 belong to no core-free primitive subgroups by <cit.>. We have therefore our desired edge between 𝔮 and 𝔞_1 or 𝔞_2.

Assume now n ≡ 0 mod 4. We employ the same argument as above, with n/2-2 replaced by n/2-1. The same reasoning leads us to look for an edge between 𝔮 and 𝔟_1=(1,n/2-2,n/2+1) or 𝔟_2=(1,n/2,n/2-1). Again, 𝔮 and 𝔟_i do not share intransitive subgroups.
It follows from <cit.> that 𝔟_1 and 𝔟_2 are not contained in core-free primitive subgroups (for 𝔟_2 we may also use Corollary <ref>). Regarding maximal transitive imprimitive subgroups, we only have that 𝔟_2 is contained in S_n/2 ≀ S_2. However, by construction we consider 𝔟_2 only when n/2 is a partial sum in 𝔭, so that 𝔭 belongs to S_n/2 ≀ S_2. Since 𝔭 and 𝔮 are adjacent, we deduce that 𝔮 is not contained in S_n/2 ≀ S_2. Therefore we have an edge between 𝔮 and 𝔟_1 or 𝔟_2.

Now we deduce the connectedness of Ξ(S_n) and the bound on the diameter. The considerations above imply that an edge with 𝔞_i and 𝔟_i concerns only intransitive subgroups (i.e., partial sums), except for 𝔟_2, where one has to deal also with S_n/2 ≀ S_2.

Assume first n ≡ 2 mod 4. The argument given above shows that every vertex of Ξ(S_n) has distance at most 2 from one among 𝔞_1, 𝔞_2 and (i,n-i) for some 1 ≤ i ≤ n/2 coprime with n. Hence, in order to conclude it is sufficient to show that these vertices have pairwise distance at most 2. For n=14, this can be checked directly. Assume then n > 14. We show that all these vertices are adjacent to (2^2,n-4), which clearly concludes the proof. For all the vertices except (1,n-1), this follows from Lemma <ref> and from the considerations of the previous paragraph. For (1,n-1), by Theorem <ref> we need to exclude the sharing of AGL_d(q), PGL_2(p), M_24. The last is contained in A_n, while (2^2,n-4) is not. Moreover n ≡ 2 mod 4, hence n is not a power of 2 and we do not have affine subgroups. Finally, (2^2,n-4)^2=(1^4,((n-4)/2)^2) fixes at least 4 points, hence (2^2,n-4) does not belong to PGL_2(p) by Lemma <ref>(2). This concludes the proof in case n ≡ 2 mod 4.

Assume now n ≡ 0 mod 4. We assume first n > 12. As in case n ≡ 2 mod 4, in order to conclude it is sufficient to prove that the vertices 𝔟_1, 𝔟_2 and (i,n-i) with 1 ≤ i ≤ n/2 coprime with n have pairwise distance at most 2. The vertices 𝔟_1, 𝔟_2 and (i,n-i) with 1 ⩽ i ⩽ n/2 coprime with n and i ≠ 3,5 are adjacent to (2,3,n-5). The vertices (i,n-i) with 1 ⩽ i ⩽ n/2 coprime with n and i ≠ 1, n/2-1 are adjacent to both 𝔟_1 and 𝔟_2. The vertices (i,n-i) with 2 ≤ i ≤ n/2 coprime with n are adjacent to (2^2,n-4). By <cit.> (which in the affine case relies on <cit.>) we deduce that (2^2,n-4) is not contained in affine subgroups, hence also (1,n-1) is adjacent to (2^2,n-4). These considerations imply indeed that the vertices have pairwise distance at most 2.

Consider now the case n=12. The argument of the previous paragraph does not work, and we need a more detailed inspection. Let 𝔭 and 𝔮 be as at the beginning of the proof, with 𝔭 corresponding to odd permutations, and such that 1 and 5 are partial sums in 𝔭. If 2 is not a partial sum and 4 is a partial sum, then it is easy to deduce 𝔭=(1,4,7). If 2 and 4 are not partial sums, then 𝔭=(1,5,6). Assume now 2 is a partial sum. If 3 is not a partial sum, then it is easy to check that 𝔭 must have four cycles, which is false. If 3 is a partial sum and 4 is not a partial sum, then 5 cannot be a partial sum: false. If 4 is a partial sum, then 𝔭 is isolated unless 6 is not a partial sum and 𝔭 is adjacent to (12). With this more detailed information, it is not difficult to deduce d(Ξ(S_12)) ≤ 6. The proof of the theorem for n ≥ 12 is now concluded.

In the next lemma we consider the case n=8. The case n=10 can be dealt with similarly and we omit the details. We compute the exact diameter of Ξ(S_8): this shows that the upper bound in Theorem <ref> can be attained.

The graph Ξ(S_8) is connected with diameter 6.

In Figure <ref> we have drawn the graph Ξ(S_8).
The group S_8 has 21 nontrivial conjugacy classes; one can compute explicitly the neighborhood of each of them in Λ(S_8). We can save some computations in view of the following observations. Whenever 4 is a partial sum in a partition 𝔭, then 𝔭 is not adjacent to partitions having only parts of even length, because of the sharing of S_4 ≀ S_2. It follows that if in a partition 𝔭 the integers 1, 3 and 4 are partial sums, then 𝔭 is isolated. This implies that the set of vertices of Ξ(S_8) is a subset of A:={(1^3,5),(1^2,6),(1,2,5),(1,7),(3,5),(2,3^2),(4^2),(2,6),(2^4),(2^2,4),(8)}. Note also that partitions which, for every odd integer ℓ, have an even number (possibly zero) of parts of length ℓ, are not adjacent to partitions having only parts of even length, because of S_2 ≀ S_4. We observe finally that the only core-free maximal primitive subgroups of S_8 are AGL_3(2) and PGL_2(7) (up to conjugation). Among the partitions in A, AGL_3(2) contains (1,7), (4^2), (2,6), (2^4), and PGL_2(7) contains (1^2,6), (1,7), (4^2), (2^4), (8). It is now easy to draw the graph.

The last case to consider is that of alternating groups of odd degree.

Let n ⩾ 5 be an odd integer. Then, Ξ(A_n) is connected and d(Ξ(A_n)) ⩽ 6.

The cases n=5,7,9 can be checked explicitly; we omit the details and assume n ≥ 11. The cases n=11,13,17,19 have been considered in Theorem <ref> and in the comments following it. Therefore we need to consider the cases n=15 and n ≥ 21. We first assume n ≥ 21, and deal with the case n=15 at the end of the proof. Throughout the proof, recall Lemmas <ref> and <ref>.

Let x, y ∈ A_n, and assume x^A_n is adjacent to y^A_n in Ξ(A_n). We will show that x^A_n or y^A_n is adjacent to at least one among (1^2,n-2), (2^2,n-4), every class with cycle type (1,3,n-4), (1,4,n-5) and (2,8,n-10). This will show that every vertex of Ξ(A_n) has distance at most 2 from one of these vertices. The argument given in the next two paragraphs shows that all these vertices are adjacent to (6^2,n-12). It will follow that every vertex of Ξ(A_n) has distance at most 3 from (6^2,n-12), which clearly will conclude the proof.

Let us analyze the classes (1^2,n-2), (2^2,n-4), (1,3,n-4), (1,4,n-5) and (2,8,n-10). By Lemma <ref>, only (1,4,n-5) and (2,8,n-10) belong to some maximal transitive imprimitive subgroup: they are contained in S_5 ≀ S_n/5. Regarding maximal core-free primitive subgroups, by Theorem <ref> (1^2,n-2) belongs only to PΓL_2(q) with n=q+1. Moreover, (2^2,n-4), (2,8,n-10) and (1,3,n-4) belong to no core-free primitive subgroups: the first two by Corollary <ref>, the last by <cit.>. Again by <cit.>, (1,4,n-5) belongs possibly only to AGL_d(5).

Now consider the class (6^2,n-12). The maximal transitive imprimitive subgroups it is contained in are S_3 ≀ S_n/3 and S_n/3 ≀ S_3. Moreover, by Lemma <ref> it belongs neither to AGL_m(p) nor to PΓL_2(q). Therefore, as claimed, (1^2,n-2), (2^2,n-4), every class with cycle type (1,3,n-4), (1,4,n-5) and (2,8,n-10) are adjacent to (6^2,n-12).

Hence, in order to conclude the proof it is sufficient to prove the initial claim, i.e., to prove that x^A_n or y^A_n is adjacent to at least one among (1^2,n-2), (2^2,n-4), every class with cycle type (1,3,n-4), (1,4,n-5) and (2,8,n-10). In a previous paragraph we determined the maximal overgroups of these classes. In the following, we will freely use this information with no further mention.

Denote by c(x) and c(y) the cycle types of x and y respectively. Assume there exists z ∈ {x,y} such that 1 and 2 are not partial sums in c(z); without loss of generality, z=x.
Then, by Theorem <ref> either x^A_n is adjacent to (1^2,n-2), or x ∈ PΓL_2(q). If moreover 4 is not a partial sum in c(x), then x^A_n is adjacent to (2^2,n-4). Assume then that 4 is a partial sum in c(x). The unique possibility is c(x)=(4,…), from which 1 ≠ x^4 fixes an even number of points greater than 3, from which x ∉ PΓL_2(q) by Lemma <ref>(2).

Therefore, we assume (without loss of generality) that 1 is a partial sum in c(x) and 2 is a partial sum in c(y), that is, c(x)=(1,…) and c(y)=(2,…). Then, either x^A_n is adjacent to (2^2,n-4), or 4 is a partial sum in c(x). In the latter case we have c(x)=(1,3,…) or c(x)=(1,4,…). If c(x)=(1,3,…) then y^A_n is adjacent to (1,3,n-4). If c(x)=(1,4,…) then y^A_n is adjacent to (1,4,n-5) unless y is contained in AGL_d(5) or S_5 ≀ S_n/5. The option y ∈ AGL_d(5) is excluded by Lemma <ref>(1), because y^2 fixes 2 points. If y ∈ S_5 ≀ S_n/5, since 5 is a partial sum in c(x), the unique possibility for the 2-cycle is to act as a 2-cycle on the blocks, from which c(y)=(2,8,…). At this point, since certainly x ∉ S_5 ≀ S_n/5, x^A_n is adjacent to (2,8,n-10). This concludes the proof in case n ≥ 21.

There remains the case n=15. Let x, y ∈ A_n, and assume x^A_n is adjacent to y^A_n in Ξ(A_n). We prove that x^A_n or y^A_n is adjacent to at least one of (1^2,13), (1,7^2) and (15) (this makes sense by Lemmas <ref> and <ref>). By Theorem <ref> and Lemma <ref>, (1^2,13) and (15) are adjacent. Moreover (1,7^2) and (15) are both adjacent to (2^2,11); and (1,7^2) and (1^2,13) are both adjacent to (3^2,9). It follows that (1^2,13), (1,7^2) and (15) have pairwise distance at most 2, from which indeed d(Ξ(A_15)) ≤ 6.

By Theorem <ref>, if x^A_n and y^A_n are not adjacent to (1^2,13) then (without loss of generality) c(x)=(1,…) and c(y)=(2,…). Assume now x^A_n and y^A_n are not adjacent to (15). We consider the various possibilities. Notice that x ∉ S_3 ≀ S_5, because otherwise 2 would be a partial sum in c(x). Direct inspection (using for instance GAP) shows that if x ∈ PGL_4(2) then it must be c(x)=(1,7^2); hence we may assume this is not the case. Therefore, by Theorem <ref>, it must be x ∈ S_5 ≀ S_3, and y ∈ S_3 ≀ S_5 or y ∈ PGL_4(2). Since x ∈ S_5 ≀ S_3, we have c(x)=(1,4,…), hence 4 is not a partial sum in c(y). Inspection immediately implies y ∉ PGL_4(2), from which y ∈ S_3 ≀ S_5. In c(y), the 2-cycle either acts trivially on the blocks, or acts as a 2-cycle on the blocks. In the first case, 1 is a partial sum in c(y), and in the second case, 4 is a partial sum in c(y). In both cases we get a contradiction, and the proof is finished.

Now the proof of Theorems <ref> and <ref> follows immediately from Theorems <ref>, <ref>, <ref> and <ref>.

§ SOME COMMENTS ON CONJECTURE <REF>

Conjecture <ref> states that, if G ∈ {A_n,S_n}, then up to finitely many exceptions one has d(Ξ(G)) ≤ 4. Here we reduce this conjecture to the following one (and in fact to something much weaker: see Remark <ref>).

Let G ∈ {A_n,S_n}. There exists an absolute constant c>0 such that if x^G is a vertex of Ξ(G), then x^G is adjacent to a class which has at most c cycles.

A way to think about this is the following: since x^G is a vertex of Ξ(G), by definition x^G is adjacent to some other class y^G. It seems conceivable that, summing the parts of the cycle type of y in a suitable way, one obtains that x^G is indeed adjacent to some z^G, where z has a bounded number of cycles (see the sketch below). In fact, we believe that the value of c should be rather small, say at most 4: by <cit.>, only “few” core-free primitive subgroups contain elements having at most 4 cycles.
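The merging operation implicit in this remark is easy to enumerate mechanically. The sketch below (a hypothetical helper of ours, not from the paper) lists, for a given cycle type, all cycle types z obtainable by summing the parts into at most c groups; the conjecture is precisely that, for large n, some such coarsening of y remains adjacent to x^G.

def coarsenings(parts, c):
    """All cycle types obtainable by merging the parts of `parts` into at
    most c nonempty groups; each group contributes the sum of its parts."""
    results = set()
    def place(i, groups):
        if len(groups) > c:
            return
        if i == len(parts):
            results.add(tuple(sorted(groups, reverse=True)))
            return
        for j in range(len(groups)):            # join an existing group
            groups[j] += parts[i]
            place(i + 1, groups)
            groups[j] -= parts[i]
        place(i + 1, groups + [parts[i]])       # or open a new group
    place(0, [])
    return sorted(results)

# coarsenings([1, 2, 2, 4], 2) -> [(5, 4), (6, 3), (7, 2), (8, 1), (9,)]

Note that a coarsening z obtained in this way automatically has every partial sum that x^G must avoid among sums of parts of y, which is why the reduction only concerns intransitive overgroups.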
We now record a consequence of the Prime Number Theorem.

Fix ξ > 0. Denote by π(n) the number of primes less than or equal to n. Then, π((1+ξ)n)-π(n) is asymptotic to ξn/ln n.

The Prime Number Theorem states that π(n) is asymptotic to n/ln n, hence the statement follows from an easy computation.

Conjecture <ref> implies Conjecture <ref>.

It is sufficient to show that, for n large, vertices which have at most c cycles have pairwise distance at most 2 in Ξ(G). Let x^G and y^G be two such vertices, and denote by c(x) and c(y) the cycle types of x and y, respectively.

We first claim that, if n is sufficiently large, then there exist distinct prime numbers p and r such that: (a) n/3 < p,r ≤ n/2, (b) p, r and p+r are not partial sums in c(x) nor in c(y), (c) p, r and p+r do not divide n.

Let us prove the claim. By Theorem <ref>, the number of primes contained in the interval (n/3,n/2] is asymptotic to n/6ln n. The number of divisors d(n) of n is much smaller: it is known that d(n)=o(n^ε) for every fixed ε>0 (cf. <cit.>). Notice now that there are at most 2^(c+1)=O(1) integers i such that i is a partial sum in at least one of c(x) and c(y). The claim now follows, just because the number of primes in the interval (n/3,n/2] is much larger than all the other quantities considered above, hence among all the possibilities for p and r we certainly find one satisfying (b) and (c).

At this point we conclude the proof. If G=A_n and n is even, or G=S_n and n is odd, then both x^G and y^G are adjacent to (p,n-p). Indeed, by (c) (p,n-p) is not contained in transitive imprimitive subgroups, and a power of (p,n-p) is a p-cycle, hence it does not lie in core-free primitive subgroups by a classical theorem of Jordan (<cit.>). Assume now G=A_n with n odd, or G=S_n with n even. Then we claim that both x^G and y^G are adjacent to every class with cycle type (p,r,n-p-r). It is easy to deduce from Lemma <ref> and from (c) above that (p,r,n-p-r) does not belong to transitive imprimitive subgroups of S_n. Moreover, since min{p,r} > n/3 we have n-p-r < min{p,r}. It follows that a power of (p,r,n-p-r) is a p-cycle, hence it does not belong to core-free primitive subgroups by Jordan's theorem.

We have shown that x^G and y^G have distance at most 2 in Ξ(G), and the proof is concluded.

In the proof of the previous theorem, we have used a much weaker hypothesis than the validity of Conjecture <ref>. Indeed, the same argument works provided each vertex x^G of Ξ(G) is adjacent to a vertex y^G such that the number of integers i which are partial sums in (the cycle type of) y is at most δn/log n for some explicit fixed constant δ. This statement seems more suitable for a combinatorial proof. Any proof of this sort would very likely avoid the use of the CFSG.

We conclude with a lemma providing a lower bound on d(Ξ(G)) in some cases.

Assume n ≥ 7 is prime. Then, d(Ξ(S_n)) ≥ 4.

The strategy is to define two partitions 𝔭 and 𝔮 such that 𝔭 is adjacent in Ξ(S_n) only to (n), and 𝔮 is adjacent in Ξ(S_n) only to (1,n-1). Since (n) and (1,n-1) are not adjacent because of the sharing of AGL_1(n), this will prove indeed that d(Ξ(S_n)) ≥ 4.

Define 𝔭=(1^(n-1)/2,(n+1)/2) if n ≡ 3 mod 4 and 𝔭=(1^(n+1)/2,(n-1)/2) if n ≡ 1 mod 4. Note that 𝔭 corresponds to odd permutations. Moreover, every 1 ≤ i ≤ n/2 is a partial sum in 𝔭, hence 𝔭 is not adjacent to partitions having at least 2 parts. Finally, 𝔭 is adjacent to (n) by Theorem <ref>.

Now define 𝔮=(2^(n-3)/2,3) if n ≡ 3 mod 4 and 𝔮=(2^(n-9)/2,3^3) if n ≡ 1 mod 4. Note that 𝔮 corresponds to even permutations, so it is not adjacent to (n).
It is easy to check that every 2 ⩽ i ⩽ n/2 is a partial sum in 𝔮, hence 𝔮 is adjacent to nothing different from (1,n-1). Finally, since 𝔮 ∉ AGL_1(n), we deduce by Theorem <ref> that 𝔮 is indeed adjacent to (1,n-1). This concludes the proof of the lemma.

Similar methods should suffice to give sensible lower bounds on d(Ξ(G)) in all the various cases. However, the details would become slightly technical, as one would need to specialize the argument depending on the arithmetic of n.
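For very small degrees, the whole pipeline (build Λ(G), delete the isolated vertices, measure the diameter of Ξ(G)) can be scripted directly. The following self-contained Python/SymPy sketch is our code, not the computation actually used in the paper (the inspections quoted above were done with GAP), and this brute-force version is hopeless beyond tiny degrees.

from collections import deque
from itertools import combinations
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

def invariably_generates(G, x, y, els):
    N = G.order()
    return all(PermutationGroup([x, (~g) * y * g]).order() == N for g in els)

def xi_graph(G):
    els = list(G.elements)
    reps, seen = [], set()
    for x in els:                              # class representatives != 1
        if x not in seen and not x.is_Identity:
            seen |= {(~g) * x * g for g in els}
            reps.append(x)
    adj = {i: set() for i in range(len(reps))}
    for i, j in combinations(range(len(reps)), 2):
        if invariably_generates(G, reps[i], reps[j], els):
            adj[i].add(j); adj[j].add(i)
    keep = {v for v in adj if adj[v]}          # drop isolated vertices
    return {v: adj[v] & keep for v in keep}

def diameter(adj):
    best = 0
    for s in adj:                              # BFS from every vertex
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        if len(dist) < len(adj):
            return None                        # Xi(G) disconnected
        best = max(best, max(dist.values()))
    return best

print(diameter(xi_graph(SymmetricGroup(5))))   # at most 4 by Theorem above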
Institut de Ciències de l'Espai (IEEC-CSIC), Campus UAB, Carrer de Can Magrans, S/N, E-08193 Cerdanyola del Vallès, Catalunya, Spain, [email protected]
INAF, Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, I-50125 Firenze, Italy
Department of Physics and Astronomy, University College London, London WC1E 6BT, UK
Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France
INAF, Istituto di Astrofisica e Planetologia Spaziali, via Fosso del Cavaliere 100, I-00133 Roma, Italy

L1157-B1 is the brightest shocked region of the large-scale molecular outflow. It is considered the prototype of the so-called chemically rich active outflows, being the perfect laboratory to study how shocks affect the molecular gas content. Specifically, several deuterated molecules have previously been detected with the IRAM 30 m telescope, most of them formed on grain mantles and then released into the gas phase due to the passage of the shock. We aim to observationally investigate the role of the different chemical processes at work that lead to the formation of DCN, compare it with HDCO, the two deuterated molecules imaged with an interferometer, and test the predictions of the chemical models for their formation. We performed high-angular-resolution observations toward L1157-B1 with the IRAM NOEMA interferometer of the DCN (2–1) and H^13CN (2–1) lines to compute the deuterated fraction, D_frac(HCN), and compare it with the previously reported D_frac of other molecular species. We detected emission of DCN (2–1) and H^13CN (2–1) arising from the L1157-B1 shock. The deuterated fraction D_frac(HCN) is ∼4×10^-3 and, given the associated uncertainties, we did not find significant variations across the bow-shock structure. Contrary to HDCO, whose emission delineates the region of impact between the fast jet and the ambient material, DCN is more widespread and not limited to the impact region. This is consistent with the idea that gas-phase chemistry is playing a major role in the deuteration of HCN in the head of the bow-shock, where HDCO is undetected as it is a product of grain-surface chemistry. The spectra of DCN and H^13CN match the spectral signature of the outflow cavity walls, suggesting that their emission results from shocked gas. The analysis of the time-dependent gas-grain chemical model UCL_CHEM coupled with a parametric C-type shock model shows that the observed deuterated fraction D_frac(HCN) is reached during the post-shock phase, when the gas is at T=80 K, matching the dynamical timescale of the B1 shock, around ∼1100 years. Our results indicate that the presence of DCN in L1157-B1 is a combination of gas-phase chemistry, which produces the widespread DCN emission dominating especially in the head of the bow-shock, and sputtering from grain mantles toward the jet impact region, which can be efficient close to the brightest DCN clump B1a.

The L1157-B1 astrochemical laboratory: testing the origin of DCN[Based on observations carried out with the IRAM NOEMA interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain).][The FITS files of the DCN (2–1) and H^13CN (2–1) datacubes are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A//]

G. Busquet^1, F. Fontani^2, S. Viti^3, C. Codella^2, B. Lefloch^4, M. Benedettini^5, C.
Ceccarelli^4

Received; accepted

§ INTRODUCTION

Protostellar shocks play a crucial role in the chemical evolution of star-forming clouds because they induce large variations of temperature and density in the surrounding medium, which can locally activate endothermic gas-phase reactions, ionization processes, and evaporation/erosion of dust grains and their icy mantles, tremendously increasing the chemical complexity of the ambient material <cit.>. Driven by the low-mass Class 0 protostar L1157-mm, at a distance of 250 pc <cit.>, L1157 is the prototypical “chemically rich” outflow <cit.>. It is associated with several shock episodes <cit.>, and is considered one of the best astrochemical laboratories. Its brightest bow-shock, L1157-B1, located in the southern blue-shifted outflow lobe, is currently under extensive investigation from the millimeter to the infrared regime as part of the Large Programs Herschel/CHESS (Chemical HErschel Surveys of Star forming regions[http://chess.obs.ujf-grenoble.fr]; ) and IRAM 30 m/ASAI (Astrochemical Survey At IRAM[http://www.oan.es/asai]; Lefloch 2017, in preparation), as well as with interferometers such as NOEMA <cit.> and the JVLA (Busquet et al., in preparation). These observations confirm a spectacular chemical richness and a complex morphology of the bow-shock, in which different molecules peak at different positions <cit.>.

Using the IRAM 30 m telescope, <cit.> detected toward L1157-B1 several rotational lines of deuterated molecules (e.g., HDO, DCN, HDCO, NH_2D, and CH_2DOH). The comparison between the predictions of the gas-grain chemical model <cit.> and the observational results led <cit.> to conclude that the deuterated fraction (D_frac(X) = ratio between the column density of a deuterated molecule and that of its main isotopolog X) of CH_3OH, H_2CO, and NH_3 is consistent with material formed on multi-layer icy grain mantles and then released into the gas phase after evaporation of part of the grain mantles' ices due to the passage of the shock, while HCN is likely a present-day gas-phase product.

While in cold and dense environments deuterium enrichment occurs through the exothermic reaction H_3^+ + HD → H_2D^+ + H_2 + 232 K <cit.>, at temperatures above 20 K the reverse reaction becomes important, and the enhancement of the deuterated fraction, and in particular the production of DCN, can proceed via CH_3^+ + HD → CH_2D^+ + H_2 + 654 K <cit.>, as suggested by chemical models <cit.> and observations of the Orion Bar <cit.>.
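The temperature dependence quoted above can be made concrete with the Boltzmann factors of the two reverse (endothermic) reactions alone. The following is illustrative arithmetic only, ignoring rate coefficients and abundances, not a chemical network:

import math

# Endothermicity of the reverse reactions, in Kelvin.
DE_H2DP, DE_CH2DP = 232.0, 654.0

for T in (10.0, 20.0, 80.0):              # gas kinetic temperature
    back_h2dp = math.exp(-DE_H2DP / T)    # reverse of H3+ + HD channel
    back_ch2dp = math.exp(-DE_CH2DP / T)  # reverse of CH3+ + HD channel
    print(f"T = {T:5.1f} K: exp(-232/T) = {back_h2dp:.2e}, "
          f"exp(-654/T) = {back_ch2dp:.2e}")

At 80 K the reverse of the CH_3^+ + HD channel is still suppressed by more than three orders of magnitude, while the reverse of the H_3^+ + HD channel is suppressed by less than a factor of 20, which is why the latter route is efficient only in cold gas.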
Moreover, <cit.> find significant changes of (H_2CO) in L1157-B1 shock; (H_2CO)≃0.1 in the HDCO-emitting region and it drops by one order of magnitude in the material in front of the impact region ( in the head of the bow-shock). The differences in (H_2CO) reflect the dominant process of formation/destruction of HDCO in the different positions of L1157-B1: surface (cold) chemistry, which favors the formation of HDCO; warm gas-phase chemistry, which destroys HDCO, dominates in the head of the bow-shock.In this work we report on observations of DCN (2–1) and(2–1) lines conducted with the IRAM NOEMA interferometer toward L1157-B1 to fully confirm the predictions of the chemical models and investigate whether DCN has a different origin than HDCO: warm gas-phase chemistry versus surface chemistry.§ OBSERVATIONS The NOEMA interferometer was used to observe the DCN (2–1) and H^13CN (2–1) molecular transitions at 144.828 GHz and 172.678 GHz, respectively, toward L1157-B1.The observations were carried out over several days between 2014 December and 2015 April using the array in the D and C configurations.The projected baselines range from 20.7 m to 176 m for DCN (2–1) and from 15.8 m to 176 m for(2–1). The phase center was α(J2000)=20^ h39^ m10^ s.2; δ(J2000)=6801'105, and the local standard of rest velocity was set to 2.6 . The primary beam (FWHM) is 348 and 292 at the frequency of DCN (2–1) and(2–1), respectively.Typical system temperatures were 100-150 K at 145 GHz and 250-300 K at 172 GHz, and the amount of precipitable water vapor was around 5 mm at 145 GHz and 1-2 mm at 172 GHz. The DCN (2–1) and(2–1) lines were observed using two spectral windows of the narrow band correlator of 40 MHz of bandwidth with 512 spectral channels, providing a spectral resolution of ∼0.078 MHz (∼0.15 ). Bandpass calibration was performed by observing quasar 3C 279, while 1926+611 and 1928+738 were used for calibration of the gains in phase and amplitude. The uncertainty on the phase and amplitude of the gains are around 10- 20 and 1-2 % at 145 GHz and 20- 40 and 5 % at 172 GHz. The absolute flux density scale was determined from MWC 349 with an uncertainty ∼15 %.Calibration and imaging were conducted using standard procedures of the CLIC and MAPPING softwares of the GILDAS[The GILDAS software is developed at the IRAM and the Observatories de Grenoble, and is available at http://www.iram.fr/IRAMFR/GILDAS] package.The final data cubes were smoothed to a velocity resolution of 0.5 .The synthesized beam of DCN (2–1) and(2–1) is 235×218 (P.A.=43.5) and 174×171 (P.A.=66.4), and the rms noise level achieved was 3.6 m and 8.8 m per spectral channel, respectively.The lines detected in the other spectral units of the narrow band correlator as well as in the Widex broadband correlator will be presented in a forthcoming paper.§ RESULTS We detected(2–1) and DCN (2–1) lines emitting in a range of velocities from -14.9 to 6.6  and from -7.4 to 5.1 , respectively, clearly blueshifted with respect to the cloud systemic velocity =2.6 <cit.>. The integrated intensity maps are shown in Fig. <ref> (left panel). The emission of DCN andpresents a clumpy morphology, with the strongest clumps located at the eastern wall of the cavity excavated by the shock, similarly to HCN (1–0) distribution <cit.>. 
There is also faint and extended emission associated with the head of the bow-shock and toward the western side of the cavity walls, coinciding with the B1b clump identified in several molecular species (e.g., NH_3: <cit.>; CH_3OH: <cit.>; CH_3CHO: <cit.>). The brightest clump of the DCN (2–1) line is found at α(J2000)=20^h39^m10^s.3; δ(J2000)=+68°01′14″.1, about 2″ north of clump B1a identified by <cit.>. This clump (B1a) is the peak position of the high-velocity SiO (2–1) emission <cit.>, indicating that B1a is the location where the precessing jet impacts the cavity. We note that the brightest peak in H^13CN (2–1), named B1e, does not coincide with any clump previously identified by <cit.>.

In the right panel of Fig. <ref> we present a comparison of the two deuterated molecules observed so far with interferometers, DCN (2–1) in contours and HDCO (2_1,1-1_0,1) in gray scale, overlaid on the CH_3CN emission <cit.> to highlight the bow-shock structure. Overall, the emission of DCN resembles that of HDCO, except that it also arises from the head of the bow-shock, where HDCO is undetected <cit.>.

We estimated how much flux is filtered out by NOEMA by comparing the IRAM 30 m spectra from the ASAI project (Mendoza et al., in preparation) of both lines with the NOEMA spectra extracted within a region corresponding to the beam of the single dish (i.e., 17″.4 and 14″.6 for DCN (2–1) and H^13CN (2–1), respectively). We converted the IRAM 30 m spectra from main beam temperature (T_mb) to flux density units (F_ν), assuming that the telescope beam is Gaussian and the source size is smaller than the beam, using the following expression: [F_ν/Jy]=8.33×10^-7 [ν/GHz]^2 [Θ_mb/arcsec]^2 [T_mb/K], where Θ_mb is the half power beam width (HPBW) of the IRAM 30 m telescope and ν the line rest frequency.

The resulting spectra are displayed in Fig. <ref>. Within the calibration errors, NOEMA recovers around 85% of the flux detected with the IRAM 30 m telescope in DCN (2–1) and almost the total flux in H^13CN (2–1). Therefore, our estimates of D_frac are not affected by the missing flux.

§.§ Deuterated fraction

In order to derive D_frac we first convolved the H^13CN (2–1) channel maps to the same beam as the DCN (2–1) line, and then we extracted the spectra of DCN (2–1) and H^13CN (2–1) toward the five subregions drawn in Fig. <ref>. The spectra of each subregion are displayed in Fig. <ref>. We report in Table <ref> the integrated intensity from -7.4 km s^-1 to +5.1 km s^-1 (i.e., the velocity range where DCN emits) and the peak velocity of the DCN (2–1) and H^13CN (2–1) lines. It should be noted that the peak velocity of H^13CN (2–1) appears blueshifted by ∼2 km s^-1 with respect to the DCN (2–1) line.

From the integrated intensities we obtained the column densities of DCN and H^13CN using Equation A4 of <cit.>, which assumes that all levels are characterized by the same excitation temperature, T_ex, and that lines are optically thin. Given the low abundance of deuterated molecules, the assumption of optically thin DCN emission is reasonable. For the case of H^13CN (2–1) we ran RADEX <cit.> over a wide range of physical conditions (T_kin=10-70 K, n(H_2)=10^3-10^7 cm^-3, and N(H^13CN)=10^11-10^13 cm^-2) and found that the emission of H^13CN (2–1) is always optically thin. We adopted T_ex to be in the range of 10-70 K, based on both single-dish measurements of kinetic temperatures from CO and HDCO observations <cit.> and interferometric CH_3CN observations <cit.>. The molecular spectroscopy information was obtained from the Cologne Database for Molecular Spectroscopy (CDMS[https://www.astro.uni-koeln.de/cdms]; ). To estimate the deuterated fraction D_frac(HCN)=N(DCN)/N(HCN) we adopted ^12C/^13C=77 <cit.>.
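For reference, the single-dish conversion and the deuterated-fraction estimate reduce to two one-liners. The function names are ours, and any numerical values beyond those quoted above are purely illustrative:

def tmb_to_jy(tmb_K, freq_GHz, hpbw_arcsec):
    """Flux density (Jy) from main-beam temperature (K), for a Gaussian
    beam and a source smaller than the beam (expression above)."""
    return 8.33e-7 * freq_GHz**2 * hpbw_arcsec**2 * tmb_K

def dfrac_hcn(N_dcn, N_h13cn, c12_c13=77.0):
    """D_frac(HCN) = N(DCN)/N(HCN), with N(HCN) = N(H13CN) * 12C/13C."""
    return N_dcn / (N_h13cn * c12_c13)

# e.g. 1 K of T_mb in DCN (2-1) with the 17".4 IRAM 30 m beam:
# tmb_to_jy(1.0, 144.828, 17.4) -> ~5.3 Jy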
The derived values are listed in the last column of Table <ref>. Overall, D_frac(HCN) ranges from 3×10^-3 to 6×10^-3, in agreement with the values reported by <cit.> based on IRAM 30 m single-dish observations. The highest values are found toward the “E-wall” and the B1b clump, and D_frac(HCN) decreases by a factor of 2 in the rest of the shocked region. On the other hand, the lowest value of D_frac(HCN) ≃ 0.8×10^-3 is reached toward clump B1e, that is, in the head of the bow-shock, as it is the brightest clump in H^13CN but only marginally detected in DCN. Considering the uncertainties of the derived column densities, we affirm that the deuterated fraction of HCN, D_frac(HCN), does not show significant variations among the different parts of the bow-shock structure. Conversely, <cit.> find a significant variation of D_frac derived from H_2CO, which is about D_frac(H_2CO) ≃ 0.1 in the HDCO-emitting region (in the rear part of the bow-shock) and drops by one order of magnitude in the head of the bow-shock.

§.§ The spectral signature of DCN and H^13CN

Previous studies have revealed the presence of multiple excitation components coexisting in the L1157-B1 shock <cit.>. Specifically, <cit.> showed that the line profiles of the CO J-ladder (from J=1 up to J=16) are well reproduced by a linear combination of three exponential laws I(v) ∝ exp(-|v/v_0|), where v_0 defines a characteristic velocity, specific for each physical component of the outflow. These three components were tentatively identified as the jet impact shock region, associated with a partly dissociative J-type shock (labeled g_1), the cavity walls of the L1157-B1 bow-shock (labeled g_2), and the cavity walls from the earlier ejection episode that produced the B2 bow-shock (labeled g_3).

We extracted the spectra of DCN (2–1) and H^13CN (2–1) over all the emitting region to search for the presence of the spectral signature. Figure <ref> shows that both DCN (2–1) and H^13CN (2–1) are well described by an exponential law with v_0=4.4 km s^-1, consistent with the analysis of lines observed with the IRAM 30 m telescope as part of the ASAI survey (Mendoza et al., in preparation). This slope corresponds to the spectral signature of the outflow cavity of L1157-B1 (i.e., the g_2 component). The presence of this component has been identified not only in CO but also in other molecular lines such as CS <cit.>, H_2CS <cit.>, and PN <cit.>. The association of the H^13CN (2–1) and DCN (2–1) lines with the spectral signature of the outflow cavity walls supports the idea that both lines arise from gas that has been shocked.

§ ANALYSIS AND DISCUSSION

The results presented in the previous section indicate that all the positions of the L1157-B1 shock show similar D_frac(HCN) ≃ (3-6)×10^-3, except in the external walls of the cavity close to the head of the bow-shock (i.e., in the B1e clump), where D_frac(HCN) is significantly lower. Toward the protostar L1157-mm, <cit.> obtain D_frac(HCN) ∼ 0.02, one order of magnitude higher than toward the B1 shock position. The deuterium enrichment is thus more efficient toward the cold and dense envelope than toward warm regions of shocked material. Moreover, the deuteration of HCN is at least one order of magnitude smaller than the deuteration of H_2CO and CH_3OH <cit.>. Actually, HDCO and CH_2DOH are found preferentially at the interface between the shock and the ambient material <cit.>, where the evaporation/erosion of grain mantles is maximum, while DCN is more widespread because it is not limited to the impact region, suggesting that the origin of DCN is not the same as that of HDCO.
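As a side note to the spectral-signature analysis of Sect. <ref> above, the exponential law can be checked with a few lines of code. The sketch below uses a synthetic, noiseless spectrum (made-up amplitude and velocity grid), recovering v_0 from the slope of ln I versus |v|; with real data one would fit the observed spectrum in the same way:

import numpy as np

v0_true = 4.4                                  # g2 characteristic velocity
v = np.linspace(-12.0, 4.0, 33)                # velocity grid (km/s)
spec = 0.5 * np.exp(-np.abs(v) / v0_true)      # model intensity (arb. units)

mask = spec > 0.05 * spec.max()                # emulate a detection threshold
slope, _ = np.polyfit(np.abs(v[mask]), np.log(spec[mask]), 1)
print(f"recovered v0 = {-1.0 / slope:.2f} km/s")   # ~4.4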
In the following we explore and discuss the origin of DCN using a chemical model.

§.§ Chemical modeling

In order to investigate the origin of DCN (i.e. warm gas-phase chemistry versus grain-surface chemistry) we used the shock model of <cit.>, which couples the time-dependent gas-grain chemical model UCL_CHEM <cit.> with the parametric C-type shock model of <cit.>. This model has been successfully applied to explain the abundance of several molecular species toward the L1157-B1 shock <cit.>. In brief, the model consists of a two-phase calculation. Phase I starts from a diffuse medium (∼100 cm^-3) in neutral atomic form (apart from a fraction of hydrogen already locked in H_2) that undergoes collapse to simulate the formation of a high-density clump. We adopted initial solar abundances for all species <cit.>, apart from the metals and sulfur, which we deplete by a factor of 100 for consistency with previous modeling work on L1157-B1. We assumed a standard value for the cosmic-ray ionization rate of ζ=1.3×10^-17 s^-1, although we also ran a model with a cosmic-ray ionization rate higher by a factor of 10, as in <cit.>, which is close to the value derived by <cit.>, ζ=3×10^-16 s^-1, based on observations of molecular ions. During this phase, atoms and molecules from the gas freeze onto the dust grains and hydrogenate when possible. The sticking efficiency for all species is assumed to be 100%, but the rate of depletion is a function of density (as in <cit.>). The density at the end of Phase I corresponds to the pre-shock density. In Phase II we follow the chemical evolution of the gas and icy mantles during the passage of a C-type shock. During this phase, both thermal desorption and sputtering of the icy mantles are included. A full description of the model can be found in <cit.> and <cit.>. Our non-deuterated gas-phase chemical network is taken from UMIST 12[http://udfa.ajmarkwick.net] <cit.>. The deuterated network is taken from the model in <cit.>.

In our model the pre-shock density was set to n(H_2)=10^5 cm^-3 and the shock velocity to v_s=40 km s^-1, to be consistent with the results found in previous studies <cit.>. For this model, the maximum temperature reached during the shock passage is 4000 K. Figure <ref> presents the evolution of the DCN/HCN abundance ratio as a function of time during the passage of the shock for the two models differing in the cosmic-ray ionization rate. We also show in Fig. <ref> the neutral gas temperature profile T_n (thin red line) and the temperature profile of the ions T_i (dotted red line). We note that the temperature profiles of neutrals and ions are the same for both models. The trend for the two models is exactly the same, and only during the latter time steps is there a perceptible difference, with the model with the higher cosmic-ray ionization rate displaying slightly lower values of D(HCN). We can see in Fig. <ref> that D(HCN) does indeed vary with the passage of the shock[t=0 is when the shock starts]. The sharp increase of D(HCN) around t≃5 years is a consequence of the release of DCN from grain mantles due to sputtering, which occurs once the dynamical age across the C-shock reaches the saturation time t_sat≃4.6 years. Sputtering of HCN also occurs at the same saturation time. The relative increase is, however, much larger for DCN than for HCN, yielding the high values of D(HCN) shown in Fig. <ref>. Sputtering of DCN from grain mantles could thus occur toward the jet impact region, which is close to the brightest clump in DCN (i.e. towards B1a; see Fig.
<ref>). Later on, the deuterated fraction D(HCN) shows constant values until t≃25 years, and then drops as the temperature of the neutrals increases. When the gas cools down in the post-shock phase, there is an increase of D(HCN). Our model matches the observations during the pre-shock phase and during the post-shock phase, independently of the value used for the cosmic-ray ionization rate, indicating that D(HCN) cannot discriminate between values of the cosmic-ray ionization rate. However, as shown in Sect. 3.2, the spectral signature of the DCN and H^13CN gas corresponds to the cavity walls of the L1157-B1 shock, that is, shocked material at a temperature of ∼70 K <cit.>. Therefore, we favor the solution found around t∼1000 years, for which the observed D(HCN) matches the post-shock gas material; interestingly, this is comparable to the dynamical timescale of the B1 shock, t_dyn≃1100 years <cit.>.

In order to qualitatively understand the variation in the DCN/HCN ratio we have looked at the reactions involving the formation and destruction of DCN as a function of the passage of the shock. The increase in D(HCN) coincides with a slight increase in temperature and remains high up to a temperature of ∼1200 K. As explained above, such a high ratio (∼10^-2) is a consequence of an increase in the DCN fractional abundance due to sputtering, while HCN remains at an approximately constant abundance. The reaction responsible for the plateau in D(HCN) is DCNH^+ + NH_3 → NH_4^+ + DCN, which becomes more efficient due to the increase in the ammonia abundance <cit.>. During this plateau, the dominant destruction route of DCN is DCN + H → HCN + D. However, as the temperature increases, the main formation route becomes the deuteration of HCN, which is quickly reversed, leading to another decrease of the DCN fractional abundance through the reaction DCN + H → HCN + D as well as through reactions of molecular hydrogen with CN (H_2 + CN → HCN + H), which efficiently form HCN and dominate the drop of D(HCN). During the post-shock phase (i.e. when the gas cools down) D(HCN) increases again as DCN is efficiently formed from HCN, while HCN remains at an approximately constant abundance.

§.§ Gas-phase versus grain-surface chemistry

Our results indicate that the deuterated fraction D(HCN) in L1157-B1 is consistent with gas that has been shocked and has cooled down to ∼80 K, displaying the spectral signature of the outflow cavity walls of the L1157-B1 bow-shock. The presence of faint and extended DCN emission in the head of the bow-shock, as revealed by the morphology of the DCN emission, is consistent with the idea already pointed out by <cit.> that gas-phase chemistry is the dominant process responsible for the production of DCN. Unlike most molecules, for which the deuteration process in the gas phase is not efficient at temperatures above 20 K, the formation of DCN is thought to start from reactions which can remain efficient at temperatures above 70 K (e.g. CH_3^+ + HD → CH_2D^+ + H_2 + 654 K; <cit.>). An alternative scenario that could explain the presence of DCN in the head of the bow-shock has recently been proposed by Codella et al. (submitted) to account for the presence of NH_2CHO in L1157-B1.
In this case, the head of the bow-shock corresponds to gas that was already processed at an earlier time and is characterized by a lower D(HCN), as in B1e and B1c (see Table <ref>), where we found hints of a lower deuterated fraction. Moreover, additional mechanisms such as evaporation and/or sputtering, releasing mantle species into the gas phase, may also be responsible for the presence of DCN in the L1157-B1 shock. This process may be especially efficient close to the jet impact regions, that is, towards B1a, the brightest DCN clump, where (presumably) the jet is impacting (Podio et al. 2016, Busquet et al. in preparation). Therefore, while gas-phase chemistry contributes to the extended DCN emission associated with the head of the bow-shock, the abundance of DCN may be locally enhanced as a result of the sputtering process. The exact contribution cannot be constrained from the current data, and further observations are required to support or dismiss the proposed scenarios.

§ CONCLUSIONS

We have presented observations of DCN (2–1) and H^13CN (2–1) toward the L1157-B1 protostellar shock using the NOEMA interferometer in order to investigate the role of the different chemical processes at work in a shocked region that lead to the deuteration of HCN, and to compare this with the deuteration of H_2CO. The emission of DCN is more extended than that of HDCO, and is clearly detected in the head of the bow-shock, where HDCO is not detected. While HDCO and CH_2DOH are found at the interface between the shock and the ambient material, the emission of DCN is more widespread and not limited to the shock-impact region. The spectral signature of both the DCN (2–1) and H^13CN (2–1) lines corresponds to the outflow cavity walls of L1157-B1, indicating that both lines originate from shocked gas.

The deuterated fraction, D(HCN)≃4×10^-3, is at least one order of magnitude lower than the deuteration of H_2CO and CH_3OH, whose deuterated species are formed on grain mantles and then released into the gas phase by the passage of the shock. Using the time-dependent gas-grain chemical model UCL_CHEM coupled with the parametric C-type shock model, adopting a pre-shock density of 10^5 cm^-3 and a shock velocity of 40 km s^-1, we confirmed that D(HCN) shows significant variations during the passage of the shock. Independently of the value used for the cosmic-ray ionization rate, our model matches the observations around the dynamical age of the B1 shock, around t∼1100 years. Moreover, our model indicates that D(HCN) cannot be used to discern values of the cosmic-ray ionization rate. Therefore, the morphology of DCN together with the shock model suggests that the presence of DCN results from a combination of sputtering, which could be important toward the jet impact region (i.e. toward the B1a clump), and gas-phase chemistry, which produces widespread DCN emission and dominates especially toward the head of the bow-shock. Follow-up observations at higher angular resolution and sensitivity with NOEMA will allow us to spatially separate the contributions of the different processes at work, providing additional insight into the origin of DCN in shocked regions.

We sincerely thank the referee for their helpful comments and valuable suggestions that improved this paper. The authors are grateful to the IRAM staff for their help during the calibration of the NOEMA data. G.B. acknowledges the support of the Spanish Ministerio de Economia y Competitividad (MINECO) under the grant FPDI-2013-18204. G.B. is also supported by the Spanish MINECO grant AYA2014-57369-C3-1-P.
This work was supported by the CNRS program “Physique et Chimie du Milieu Interstellaire” (PCMI) and by a grant from LabeX Osug@2020 (Investissements d'avenir - ANR10LABX56).
http://arxiv.org/abs/1706.08834v2
{ "authors": [ "G. Busquet", "F. Fontani", "S. Viti", "C. Codella", "B. Lefloch", "M. Benedettini", "C. Ceccarelli" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170627133228", "title": "The L1157-B1 astrochemical laboratory: testing the origin of DCN" }
Ning Bao
Institute for Quantum Information and Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125

Raphael Bousso
Center for Theoretical Physics and Department of Physics, University of California, Berkeley, CA 94720
Lawrence Berkeley National Laboratory, Berkeley, CA 94720

Stephen Jordan
Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20742
National Institute of Standards and Technology, Gaithersburg, MD 20899

Brad Lackey
Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20742
Departments of Computer Science and Mathematics, University of Maryland, College Park, MD 20742
Mathematics Research Group, National Security Agency, Ft. G. G. Meade, MD 20755

Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of an NP-hard problem. The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional ADK landscape.

Fast Optimization Algorithms and the Cosmological Constant
Ning Bao, Raphael Bousso, Stephen Jordan, Brad Lackey
June 23, 2017
==========================================================

§ INTRODUCTION AND SUMMARY

§.§ Cosmological Constant Problem and the Landscape

According to the Standard Model of particle physics, the energy density of the vacuum receives multiple contributions whose order of magnitude vastly exceeds the observed value <cit.> Λ≈ 1.5× 10^-123 M_P^4. (Below we will use units where the Planck mass is unity, 1≡ M_P=(ħ c/G)^1/2≈ 1.2 × 10^19 GeV.) Both perturbative and nonperturbative processes contribute, such as vacuum fluctuations of all fields, and electroweak symmetry breaking. The excess is by a factor of at least 10^60, assuming a new symmetry at a TeV (so far not found). It could be as large as 10^122 with a Planck-scale cutoff. The observed small value of Λ implies that the various contributions must cancel against one another, or against further unknown contributions which must be at least as large, with a relative precision of at least 10^-60 and perhaps 10^-122. Consistency with well-established cosmological history severely constrains large classes of approaches to this problem. For example, it is not possible for the universe to dynamically select the “correct” vacuum energy at early times. Only gravity couples to the absolute energy, and gravity sees the total stress tensor. At the time of Big Bang Nucleosynthesis, characteristic energy densities were of order 10^-88. This is more than 30 orders of magnitude greater than the observed value that would have to be targeted by a putative adjustment mechanism. Attempts to desensitize General Relativity to the energy in vacuum fluctuations run into conflict with tests of the equivalence principle. These and other obstructions to non-anthropic approaches are discussed in <cit.>.

In a landscape model, a small cosmological constant is selected by correlation with the location of observers.
The universe can form large regions with many different possible values of Λ. This is most natural in a theory with extra dimensions, such as string theory. One finds that there are generically exponentially many ways of constructing a “vacuum”, i.e., a compactification to 3 large spatial dimensions. If the vacuum energy Λ is, say, a random number between -1 and 1, but there are N≫ 10^122 different vacua, it is likely that a small fraction but large number 10^-122 N of vacua have small enough Λ to be consistent with observation. Moreover, a great variety of vacua are naturally produced by inflationary dynamics in the early universe. In specific models, the distribution of Λ is not random. The above approach works as long as the spectrum of Λ is sufficiently dense near 0. Consistency with standard cosmological history is achieved if the potential landscape is multi-dimensional, with neighboring vacua generically having very different energies <cit.>. Typical spacetime regions would still have Λ∼ O(1), of course. But in such regions any worldline has an event horizon of order the Planck area, and so has access to only a few bits of causally connected information <cit.>. Complex structures such as observers necessarily find themselves in a highly atypical region that allows for a larger cosmological horizon with area (and hence, maximum entropy) of order Λ^-1. (The origin of the particular scale 10^-122 is not explained by this qualitative argument. See <cit.> for an argument that assumes galaxies are needed, or <cit.> for a more robust argument.)

§.§ Computational Complexity

In 2007, Denef and Douglas brought a complexity-theoretic perspective to the cosmological constant problem <cit.>. In particular, they pointed out that, in some formulations, the problem of finding a vacuum with cosmological constant compatible with observation is a large instance of an NP-hard problem. Specifically, two simplified models were considered in <cit.>: a version of the Arkani-Hamed-Dimopoulos-Kachru (ADK) model <cit.>, and the Bousso-Polchinski (BP) model <cit.>. Here we focus on the ADK model, which is the more simplified of the two, as it is sufficient to capture the essential features that we wish to address.

In the ADK model, the cosmological constant is obtained by summing the energy contributions from a large number of fields, each of which is subject to a double-well potential. We assume the vacuum energy contributed by either of the two minima of each field to be a random number with mean zero[This assumption differs from the model mainly studied by ADK, but it is adequate for our analysis.] and standard deviation of order 1 in Planck units. (Thus it can be positive or negative.) Given n such fields there are correspondingly N=2^n metastable vacua, specified by an n-bit string f(j) ∈{0,1}, j=1,…, n. The cosmological constant in any vacuum is given by Λ[f(j)] = ∑_j=1^n E_f(j)^(j), where E_0^(j) and E_1^(j) are the two possible vacuum energies contributed by the j^th field.

If our universe were described by this model, then with appropriate technology, there would be no obstruction in principle to measuring each of the n fields directly, and thus determining which of its two vacua it occupies. This requires only n measurements. Thus, we can in principle identify which vacuum we live in, among all the vacua in the ADK model. A similar argument applies to the BP model: given good enough technology, one would simply measure the fluxes on topological cycles in the extra dimensions.
We could probe each field experimentally and read off the bit string f(j). Denef and Douglas consider a different task: suppose we are given only the total value of the cosmological constant ∼ 10^-122 (for example from observation), but not the vacuum configuration f(j) of the n fields. We wish to identify a vacuum in the ADK model compatible with this value. Then we would have to sift through the 2^n allowed vacua to find a combination of positive and negative numbers, each of order 1, that add up to 10^-122. Such combinations clearly constitute a small fraction of all the 2^n vacua. However, in simple statistical models, e.g. where E_0^(1),E_1^(1),…,E_0^(n),E_1^(n) are each independently drawn uniformly at random from [-1,1], such combinations will exist with high probability provided √(n)2^-n≲ 10^-122 <cit.>, i.e. n ≳ 407. Furthermore, for n larger than this, the number of vacua with Λ≤ 10^-122 will be roughly 10^-122× 2^n/√(n) <cit.>.

In <cit.> it was pointed out that the problem of finding such vacua in the ADK model is a variant of the number partitioning problem, which is NP-complete. Consequently, under the widely-held complexity-theoretic assumption that P ≠ NP, no classical algorithm can solve worst-case instances of this problem in time scaling polynomially with n. Furthermore, under the stronger but also widely-held assumption that NP ⊈ BQP, no quantum algorithm can solve worst-case instances of this problem in polynomial time either. The physical significance of the Denef-Douglas observation is not immediately clear. Here, we posit that its significance lies in the contrast between the NP-complete hardness of finding a vacuum with small Λ by studying the theory, on the one hand; and on the other hand, the ease with which we can read off a solution to this problem (our own vacuum) by measuring the n bits directly, as discussed above. This implies that we get to read off the answer to an instance of an NP-hard problem that Nature has already solved for us. And we get to do this for anthropic reasons: complex structures exist only in regions with Λ≪ 1. Our mere status as observers gives us immediate access to the solution of a hard problem. How is this possible?

It is instructive to consider the cosmological dynamics that had to solve the “hard” problem and produce the small-Λ region we occupy. There are two valid and largely equivalent <cit.> viewpoints, global and local. In the global viewpoint, the universe is exponentially expanding and constantly producing new regions. In this case gravity supplies exponential resources for solving the hard problem. No-one can observe the whole universe, because regions are shielded from one another by event horizons. But observers necessarily find themselves in the regions where the problem has been solved.

In the local viewpoint, one considers the different decay chains through the landscape that might be realized in a single causally connected region (causal patch). The patch decoheres rapidly every time a vacuum transition takes place. This trades the multiverse for “many worlds” <cit.>. Observers find themselves in a branch of the decay chain that produced a vacuum with small Λ. The situation is comparable to solving a hard problem by sitting down in front of a robot that points a gun at you. The robot takes one random guess (generated by some quantum measurement) and secretly checks it in polynomial time. If the guess solves the problem, the robot tells you the solution, but if it fails, it shoots you.
Necessarily, if you survive, you will have gained the solution very quickly[This method of solving NP-complete problems seems to have been first proposed in <cit.>; see also <cit.>.]. We do not claim that from either of those viewpoints, our easy access to a solution of a hard problem constitutes a logical contradiction. Yet, the ability to utilize exponential unobservable resources, or an exponentially large branching tree of decoherent histories, would be a surprising and perhaps troubling circumstance. Therefore, in this paper, we will posit a Computational Censorship Hypothesis: by physical measurements we must not be accessing the solution to a hard problem, i.e., a problem so hard that it could not have been solved by the physical resources in the observable universe.

By “resources,” we mean the number of elementary gates in a computation. There is some ambiguity in how to quantify an upper bound on this for the observable universe. Possible candidates include (in natural units) the Einstein-Hilbert-matter action <cit.>; the energy of the universe times its age <cit.>; the maximum entropy of the visible universe <cit.> or of any universe with the observed value of Λ <cit.> (which is given by the horizon area of empty de Sitter space <cit.>); or lastly the amount of entropy that has been produced in our past light-cone. All but one of these definitions give a number of gates of order Λ^-1∼ 10^122 for our universe in the present era. (The final definition gives a somewhat lower answer <cit.> if event horizons are not included.) Thus, for the purposes of this paper, we will take the available resources to be: R_max∼Λ^-1 quantum gates. (Whereas this estimate takes an elementary quantum gate to be the notion of computational step relevant to our universe, other more speculative possibilities have been considered elsewhere <cit.>.)

We note that making the Computational Censorship Hypothesis precise is a difficult problem that we don't claim to have solved. The central difficulty is that our universe provides us with the solution to one instance of a hard problem, whereas computational complexity is defined only for asymptotic families of instances. For any instance of a problem there always exists an efficient algorithm which has the solution to that instance hardwired in. (We thank S. Aaronson for stressing this point to us.) In an intuitive sense, it is clear that the existence of such algorithms is not of interest in determining the difficulty of the instance. Instead we take the complexity of the instance to be the number of steps required by the most efficient general-purpose algorithm that solves it. The distinction between general-purpose algorithms and ones with answers hard-wired seems difficult to formalize, but is typically easy to make in practice. In the remainder of this paper we will describe various general-purpose number partitioning algorithms that set upper bounds on the complexity of number partitioning problems. Different algorithms provide the best upper bound in different parameter regimes. In all regimes we find that the complexity of the cosmological constant problem within the ADK model is well within the computational capacity of the observable universe and therefore, contrary to initial appearances based on brute-force search, it does not pose a challenge to the Computational Censorship Hypothesis.
In some regimes the speedup over brute-force search achieved by more sophisticated algorithms is quite dramatic; for instances in which the ADK model has 10^9 fields we are able to find a cosmological constant of order 10^-120 in a few hours on a single processor.

Note that the Computational Censorship Hypothesis is quite minimal. We require only that some algorithm exists that can solve the problem (e.g., identify a suitable vacuum) in 10^122 steps or less. We do not require that this algorithm bear any relation to the (largely known) cosmological dynamics that would have produced our universe. By contrast, recent work of Denef, Douglas, Greene, and Zukowski explores computational complexity as a possible restriction on the dynamics <cit.>. A related but distinct principle was proposed by Aaronson <cit.>, that NP-complete problems should not be solvable with polynomial resources by any physical means. Recent applications of this and related principles include <cit.>.

§.§ An Apparent Paradox and Its Resolution

Imposing the Computational Censorship Hypothesis leads to an apparent paradox in light of the Denef-Douglas result. To see this, we must quantify the hard problem and show that it requires resources larger than R_max∼Λ^-1. Indeed, as shown in section <ref>, the number of elementary computational steps (quantum gates) required to find a solution with Λ∼ 10^-122 by brute-force search of the landscape scales as R_brute∼Λ^-1(log_2 Λ^-1)^3/2, which is asymptotically larger than the computational capacity Λ^-1 in the limit of small Λ. For the particular value of Λ∼ 10^-122, Λ^-1(log_2 Λ^-1)^3/2 exceeds Λ^-1 by several orders of magnitude.

If the complexity of brute-force search were the correct measure of the complexity of the number partitioning problem, then by measuring which vacuum we are in (which is in principle possible, as argued above) we would obtain the solution to an instance of a computational problem which could not be solved within our observable universe, in violation of the Computational Censorship Hypothesis. Furthermore, this violation does not necessarily require any measurements beyond present-day capabilities. The decision version of the number partitioning problem, of determining whether a solution with residue smaller than a given threshold exists, is already NP-hard, even without demanding that the explicit solution be produced. Thus, if we knew the specifics of the problem instance (E_0^(1),E_1^(1),…,E_0^(n),E_1^(n)), then the astronomical observations that have already been made, indicating that Λ≃ 10^-122, would already tell us that a residue of that magnitude exists among the solutions to this instance of number partitioning, thereby handing us the solution to a large instance of an NP-hard problem.

In the remainder of the paper we will examine how this apparent paradox can be resolved. Our key observation is that modern algorithms can solve the number partitioning problem using far fewer computational steps than are required by brute-force search. The fastest known classical algorithm for general instances of the number partitioning problem runs in R∼ O(2^0.291n) time <cit.> and the fastest known quantum algorithm runs in R∼ O(2^0.241n) time <cit.>. For n ≲ 1300 these algorithms place the instance of number partitioning arising in the ADK model within the estimated computational capacity of the observable universe, but far outside the capacity of even the largest supercomputers.
Interestingly, for very large n, the problem becomes solvable with high probability by the Karmarkar-Karp heuristic, which runs in polynomial time, R_KK∼ n log n, provided that the number of numbers is sufficiently large, n ≳ exp[√(log B/c)], c≈ 0.7, where B is the typical magnitude of the numbers. In the application to the ADK model, B∼Λ^-1≈ 10^122. By exploiting the Karmarkar-Karp algorithm, we show in section <ref> that vacua with Λ∼ 10^-120 can in fact be found in the ADK model in under 3 hours on a standard workstation, provided n ≳ 10^9. While the worst case remains NP-hard, Monte Carlo generated average cases can be solved in polynomial time, provided the number of fields is sufficiently large.

In this work we have focused on the ADK model of the landscape, which leads to number partitioning as the underlying computational problem. Karmarkar-Karp is a powerful algorithm against this problem, but it does not generalize to more complex models easily. It will be interesting to investigate the constraints imposed by the Computational Censorship Hypothesis on other toy models, such as the lattice model of BP, which is not amenable to a Karmarkar-Karp style algorithm. Eventually one would hope to consider a concrete landscape arising from a complete theory, which would dictate both the structure of the partitioning problem and the statistical distribution of the input. For example the full string landscape <cit.>, when its structure becomes better understood, should provide data analogous to the concrete distribution of charges in the BP model.

Our results show that landscape models remain a viable approach to the cosmological constant problem even if the Computational Censorship Hypothesis is adopted. But for now, at least, we cannot confront the hypothesis specifically with the landscape of string theory, for three main reasons. First, the ADK model is purely a toy model; we know of no evidence that it arises from string theory. Second, the string landscape is understood only in a few corners of the theory, where small parameters are available and statistical estimates are arguably under control. In particular, the oft-quoted number 10^500 of vacua is likely an underestimate <cit.>, and we do not know of a reliable upper bound. Third, even if we did know the structure of the landscape, and supposing that we knew of no general-purpose algorithm that satisfied the Computational Censorship Hypothesis, this would not imply that no such algorithm exists.

*Outline. In section <ref> we relate the ADK model to number partitioning and estimate the brute-force cost of finding a small value of Λ. In section <ref> we review the Karmarkar-Karp and other fast algorithms and discuss their range of applicability. In section <ref> we report an empirical test of the Karmarkar-Karp algorithm. We demonstrate that it can find a value of Λ consistent with observation in randomly generated instances of an ADK model with nearly 10^9 fields (and so by Eq. <ref>, in a few hours on a desktop computer). We find that sieves are less efficient but still suffice to demonstrate consistency with the Computational Censorship Hypothesis.

§ COMPLEXITY OF THE ADK MODEL

In this section, we show that the problem of finding a small cosmological constant Λ in the ADK model can be reduced to the standard number partitioning problem. We then demonstrate that the cost of a brute-force search exceeds Λ^-1 by a factor (log_2 Λ^-1)^3/2. Therefore a brute-force search is incompatible with the Computational Censorship Hypothesis.
§.§ Reduction to Number Partitioning

The number partitioning problem is, given a list of positive integers δ_1,…,δ_n, to find signs such that ∑_j=1^n s_j δ_j = 0, where s_j ∈{+1,-1}. The number partitioning problem is NP-complete[Technically, NP is a class of decision problems. The NP-complete version of the partitioning problem is to decide whether a solution to Eq. (<ref>) exists. However, by standard arguments <cit.>, the decision and search versions of the problem are essentially equivalent; the complexity of finding a solution exceeds the complexity of deciding whether one exists by at most a factor of n.] and in fact was a member of the list of 21 problems shown to be NP-complete in the 1972 paper of Karp <cit.>, which together with Cook's 1971 paper <cit.> is credited with founding the theory of NP-completeness.

The problem of finding vacua in the ADK model with cosmological constant 10^-122 differs superficially from the number partitioning problem in its standard form, but can easily be converted. To do so, first note that we can choose our labels so that for each j, E_1^(j)≥ E_0^(j). Then, for each j=1,…,n let δ_j = (E_1^(j)-E_0^(j))/2 and μ_j = (E_1^(j)+E_0^(j))/2. In this notation, Eq. (<ref>) becomes Λ = δ_0 + ∑_j=1^n s_j δ_j, where δ_0 = ∑_j=1^n μ_j. It is clear that finding a solution to Eq. (<ref>) is very closely related to the number partitioning problem. There are three technical differences. First, the numbers involved are reals rather than integers. This is inconsequential, as reals can be scaled up and rounded to integers, with the scale factor determined by the needed level of precision. Henceforth, we will refer to both the problem of obtaining residue Λ starting with real inputs of order 1 and the problem of obtaining residue 1 starting with integers of order Λ^-1 as number partitioning, as will be clear from context.

A second difference is that in many works on integer partitioning, one wishes to find a partition in which the residue is zero, rather than merely small. Third, in the problem arising from the ADK model, there is no variable s_0 ∈{-1,+1} multiplying δ_0. Nevertheless, algorithms that were designed for solving the standard number partitioning problem can be easily adapted to this slight variant of the problem, as we now illustrate.

§.§ Cost of Brute Force Search

Consider the number partitioning problem on real numbers, where problem instances are generated by drawing n numbers independently at random from the uniform distribution on [0,1]. In <cit.> it was proven that the median optimal residue is Θ(√(n) 2^-n). (The big-Θ notation indicates that the asymptotic scaling as n →∞ is √(n) 2^-n up to constant factors.) Thus, for a solution with residue Λ to exist, one needs √(n) 2^-n≲Λ. One can show that asymptotically, this means the minimum viable value of n scales as n ∼ log_2 Λ^-1 + 1/2 log_2 log_2 Λ^-1. To find a residue of size Λ one needs to perform all arithmetic with at least b ∼ log_2 Λ^-1 bits of precision.

A naive method for brute-force search would be to increment through all 2^n possible choices of signs s_1,…,s_n ∈{+1,-1} and for each one, compute the corresponding sum, and compare it against the threshold for sufficient smallness (e.g. 10^-122). Such an algorithm would perform n 2^n addition (or subtraction) operations, each on b bits. Addition or subtraction of a pair of b-bit numbers can be done by a quantum circuit of O(b) elementary gates <cit.>.
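As a point of reference, a classical sketch of this naive enumeration is immediate (Python; the offset argument plays the role of the unsigned δ_0 above):

from itertools import product

def brute_force_min_residue(deltas, offset=0.0):
    """Enumerate all 2^n sign assignments and return the smallest
    |offset + sum_j s_j * delta_j|; each assignment re-sums the n
    numbers, for O(n 2^n) additions overall."""
    return min(abs(offset + sum(s * d for s, d in zip(signs, deltas)))
               for signs in product((+1, -1), repeat=len(deltas)))

# Tiny toy instance (nowhere near cosmological scale):
print(brute_force_min_residue([0.71, 0.13, 0.35, 0.48]))  # ~0.01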
Combining these counts, the total complexity of this naive algorithm is O(nb 2^n). However, there is a somewhat more efficient algorithm that still arguably qualifies as brute-force search. Rather than summing up the residue from scratch with each new choice of signs, one could use the residue from the previous calculation and add or subtract 2δ_j for each j in which the sign has changed. For any n there always exists an ordering of the 2^n bit strings of length n such that each bit string is obtained from the previous one by flipping only a single bit. These orderings are called Gray codes, and they can furthermore be generated by efficient classical algorithms <cit.>. By ordering the choices of sign according to a Gray code one thus has to do n additions on the first step, and only one addition or subtraction on each of the subsequent 2^n-1 steps. This brings the total complexity of the algorithm down to O(b 2^n) elementary quantum gates. By Eqs. (<ref>) and (<ref>) this yields a total complexity of order Λ^-1(log_2 Λ^-1)^3/2.

§ ALGORITHMS FOR NUMBER PARTITIONING

In this section, we discuss efficient algorithms for the number partitioning problem. The number partitioning problem is NP-complete. Assuming P ≠ NP this implies that no polynomial-time classical algorithm can solve all instances of number partitioning in time scaling polynomially in n. However, this does not forbid the existence of parameter regimes in which classical algorithms can solve the problem in polynomial time. In fact, for many NP-complete problems, including the canonical example of 3-SAT, randomly generated instances are efficiently solvable generically; exponentially hard instances require fine-tuning <cit.>.

Random instances of number partitioning have been well studied using methods of statistical mechanics. The standard ensemble of instances most typically studied is to set some magnitude parameter B and then choose n integers δ_1,…,δ_n independently and uniformly at random from the range {1,2,…,B}. If ∑_j=1^n δ_j ≡ 1 mod 2 then any sum of the form ∑_j=1^n ±δ_j will be odd, and it is impossible for a solution to Eq. (<ref>) to exist. Thus, it is conventional to define a perfect partition as a solution to Eq. (<ref>) in the case that ∑_j=1^n δ_j is even, and as a solution to ∑_j=1^n s_j δ_j = 1 in the case that ∑_j=1^n δ_j is odd. Whether a perfect partition exists for an instance of number partitioning sampled from the standard ensemble depends on the relationship between n and B. If n is too small relative to B then the system is overconstrained and is likely to have no perfect partitions, whereas if n is sufficiently large relative to B then the system is underconstrained and is likely to have many perfect partitions. More precisely, as shown in <cit.>, in the limit of large n, randomly generated number partitioning problems will have no perfect partitions for B > 2^n + O(log n) and will have exponentially many partitions for B < 2^n + O(log n). As is the case for many NP-complete problems, the number partitioning problem becomes easier for instances sufficiently far from the phase transition. For example, the Karmarkar-Karp algorithm solves number partitioning in time O(n log n) for B < n^c log n, which is to say when n > exp[√(log B/c)] for some constant c. It was proven rigorously in <cit.> that c = 1/(2 ln 2) = 0.721… suffices. In section <ref> we empirically achieve success with c = 0.662, which is in rough agreement with the empirical testing in <cit.>.
Nonetheless, the statistical mechanics arguments in <cit.> suggest that c = 0.721 is the true asymptotic value as n →∞. §.§ The Karmarkar-Karp Algorithm The Karmarkar-Karp algorithm is based on the intuition that the largest numbers should be given opposite sign in order to achieve cancellation. The Karmarkar-Karp strategy is to commit to giving the largest two numbers opposite signs without specifying which should be positive and which should be negative. This reduces the problem to a new instance of integer partitioning with one fewer number: the largest two numbers have been replaced by their difference. This is then treated in the same manner, until only one number is left, which is the final residue ∑_i=1^n s_i δ_i. An example is given in figure <ref>. The initial sorting step has complexity O(n log n) by standard algorithms. Inserting a number into the correct location in an ordered list can be achieved with complexity O(log n) using a standard data structure called a heap <cit.>. There are exactly n-1 differencing-and-insertion steps needed to arrive at a final residue. Thus the total complexity of the algorithm is O(n log n).The Karmarkar-Karp algorithm is heuristic in the sense that for some problem instances for which a perfect partition exists, the Karmarkar-Karp algorithm will fail to find it. On the other hand, as mentioned earlier, for random instances of integer partitioning with B < n^0.721 log n, the Karmarkar-Karp algorithm will succeed with probability going to 1 as n →∞ <cit.>. Korf <cit.> has introduced an extension of the Karmarkar-Karp algorithm, which initially proceeds identically to the Karmarkar-Karp algorithm and terminates if this yields a perfect partition. However, if it fails to find a perfect partition it continues searching by backtracking and trying assignments in which the largest two numbers are given the same sign. The details of Korf's algorithm are such that it is guaranteed to find a perfect partition provided one exists. For B < n^c log n Korf's algorithm matches the performance of the Karmarkar-Karp algorithm, but for B ≫ n^c log n it may have exponentially long runtime.Other heuristic algorithms derived from Karmarkar-Karp were studied in <cit.>, where it was empirically found that, in the regime where Karmarkar-Karp finds a residue much larger than the optimal residue, modest improvement in residue size can be obtained by exhaustively or stochastically searching solutions "nearby" to the Karmarkar-Karp solution, if the notion of nearness is carefully chosen. However, other than near the Karmarkar-Karp solution, the optimization landscape in number partitioning problems was found to be hard to distinguish from random, based on any of the neighborhood notions that were investigated. Thus there appears to be little structure in the problem for general-purpose optimization heuristics such as simulated annealing or genetic algorithms to exploit. This is corroborated by the relatively modest performance improvements obtained by such heuristics on number partitioning in other studies <cit.>.In analyzing the performance of the Karmarkar-Karp algorithm it is standard to consider the ensemble of instances where the δ_1,…,δ_n are independent, identically distributed random variables, typically sampled from a uniform distribution on some range 0 to B. The instances of number partitioning arising in the context of the ADK model may slightly differ from this. 
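The differencing scheme just described admits a very compact sketch; the following (Python, standard library only) emulates a max-heap by negating entries, realizing the O(n log n) bound quoted above. For the ADK variant, the unsigned δ_0 can simply be seeded into the heap, since the first differencing steps will cancel against it, as discussed next.

import heapq

def karmarkar_karp_residue(deltas):
    """Repeatedly replace the two largest numbers by their difference;
    the last number standing is |sum_j s_j * delta_j| for the sign
    assignment implied by the differencing choices."""
    heap = [-d for d in deltas]      # heapq is a min-heap, so negate
    heapq.heapify(heap)
    while len(heap) > 1:
        largest = -heapq.heappop(heap)
        second = -heapq.heappop(heap)
        heapq.heappush(heap, -(largest - second))
    return -heap[0]

print(karmarkar_karp_residue([8, 7, 6, 5, 4]))  # -> 2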
Returning to the ADK ensemble: from Eqs. (<ref>)-(<ref>), one sees that if E_1,…,E_n are each of order B, then δ_1,…,δ_n will be of order B, but δ_0 will generically be of order √(n)B. It is easy to see that this makes only a small difference to the performance of the Karmarkar-Karp algorithm. The first ∼√(n) differencing steps will all be used to difference against δ_0. After that, one is left with a standard instance of integer partitioning in which all the numbers are of similar magnitude, and the Karmarkar-Karp algorithm performs as it would on the standard ensemble. Thus, whereas for the standard ensemble one would have required a minimum of n_min^std≃exp[√(logΛ^-1/c)], the minimum number of fields in the ADK case may be slightly larger: n_min^ADK≃ n_min^std + √(n_min^std).

In section <ref> we give the results of some computer experiments on the performance of the Karmarkar-Karp algorithm confirming the predictions of the statistical analyses referenced above, and giving a quantitative sense of the practical performance of the algorithm. For simplicity, and to facilitate comparison with the existing literature, the experiments in section <ref> are performed using a standard ensemble of instances of number partitioning.

§.§ Dynamic programming

The computational difficulty of the number partitioning problem depends on the number of numbers n and their magnitudes. In the regime where B = max_j δ_j is only polynomially large, i.e. the number of bits needed to represent the numbers scales only as some power of log n, the number partitioning problem can be solved in polynomial time on classical computers using a standard technique called dynamic programming. Specifically, as is described nicely in section 4.2 of <cit.>, dynamic programming solves the number partitioning problem in time O(nD) where D = ∑_j=1^n δ_j. Problems such as number partitioning that can be solved in polynomial time when all the input numbers are restricted to polynomial magnitude (rather than being allowed to be polynomially many bits long) are said to be solvable in pseudo-polynomial time <cit.>.

§.§ Adapting algorithms for subset sum

Number partitioning, subset sum, and knapsack problems are all variants of essentially the same problem. Algorithms for one are often applicable, with minor modification, to the others. For example a straightforward meet-in-the-middle tree search <cit.> applies to all these problems and succeeds in finding the optimal residue in time ≈ 2^0.5n. At present, the asymptotically best upper bound on the classical complexity of finding the optimal solution to number partitioning problems is given by the algorithm of <cit.>, which is guaranteed to succeed in time O(2^0.291n). The asymptotically best upper bound on the quantum complexity of this problem is given by the quantum algorithm of <cit.>, which is guaranteed to find the optimum using a number of elementary steps (quantum gates) at most O(2^0.241n). (This quantum algorithm is based on quantum walks. An adiabatic quantum algorithm for this problem has also been analyzed, but its runtime is not known. Numerical calculations in <cit.> suggest a runtime scaling as 2^0.8n. The adiabatic algorithm may also be limited in its capacity to accommodate large B.)

As discussed in section <ref>, the minimum value of n such that the number partitioning problem is likely to have a solution of order Λ is asymptotically log_2 Λ^-1 + 1/2 log_2 log_2 Λ^-1.
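Plugging in Λ∼ 10^-122 makes these scalings concrete; a minimal sketch of the arithmetic (the asymptotic formula slightly overshoots the n ≳ 407 quoted in the introduction):

import math

log2_inv_Lambda = 122 * math.log2(10)          # log2(1/Lambda) ~ 405.3

# Minimum viable n from sqrt(n) 2^-n <~ Lambda:
n_min = math.ceil(log2_inv_Lambda + 0.5 * math.log2(log2_inv_Lambda))
print(n_min)                                    # ~410 fields

# Work exponent of the O(2^(0.291 n)) algorithm at this n:
print(0.291 * n_min)                            # ~119, i.e. ~2^119 steps,
# far below the ~2^405 ~ 1/Lambda budget, yet far beyond any supercomputer.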
For Λ∼ 10^-122 this gives n of order a few hundred, and the algorithm of <cit.> could solve a problem of this size with runtime of order (Λ^-1)^0.291(log_2 Λ^-1)^0.146.

§.§ Adapting lattice sieves

Here we explore a very simple sieve mechanism for solving the number partitioning problem, inspired by “lattice sieves” <cit.>. The Karmarkar-Karp algorithm can be viewed as a form of the Gauss sieve <cit.> for a 1-dimensional lattice. Curiously, while more sophisticated lattice sieves easily outperform the Gauss sieve on high dimensional lattices <cit.>, here we find this is seemingly not the case for the number partitioning problem. The simple sieve we present here is similar in spirit to the “tuple sieve” of <cit.>, but cannot match the performance of the Karmarkar-Karp algorithm, as we will show. Nonetheless, the key advantage of this style of sieve is that it is not restricted to the number partitioning problem and so could be easily adapted to other models of the landscape.

In general, a sieve consists of several stages. For us, the input to a stage is a collection of numbers; these are partitioned into small blocks of size b, and on each of these blocks the number partition problem is solved for the optimal residue. This collection of residues is the output of the sieve stage, which then becomes the input for the next stage. There are a number of algorithms to solve for the optimal residue, some of which are illustrated in the previous sections. All of these take work 2^αb + o(b). As long as the distribution of the input data is sufficiently well behaved, the optimal residues will be exponentially distributed with expected size 2^-b + o(b), asymptotically O(√(b)2^-b) <cit.>. In figure <ref>, we validate this scaling for small b but recover a smaller power in the polynomial factor in this formula. In figure <ref>, we also validate that the distribution of the residues is well modeled as exponential with the parameter λ estimated from the data.

If our input is n fields producing mean energy differences δ_j ≈ 1, the first sieve stage involves solving n/b_1 number partition problems, each of size b_1. The work for this stage is ≈ (n/b_1)2^αb_1 and the output is n/b_1 residues exponentially distributed with mean size ≈ 2^-b_1. The second sieve stage partitions these into blocks of size b_2 and solves the number partition problem on each to produce n/(b_1 b_2) residues of size ≈ 2^-(b_1+b_2). And so on.

The goal is that after k sieve stages we produce a single residue of expected length 2^-t≈ 2^-(b_1 + ⋯ + b_k). The optimal work is obtained when we follow an “equipartition principle” and balance the amount of work done on each sieve stage. For example, the first sieve stage involves solving many more number partition problems than the second stage, and so we should choose b_2 > b_1 so as to balance the amount of work done during the first stage with that done in the second. Specifically, in stage j ≤ k of the sieve, we solve n/(b_1⋯ b_j) number partition problems with an overall work of [n/(b_1⋯ b_j)]2^αb_j, which we balance with the work in stage j-1: [n/(b_1⋯ b_j)]2^αb_j ≈ [n/(b_1⋯ b_j-1)]2^αb_j-1. Therefore we select b_j implicitly by solving b_j - (1/α)log_2(b_j) ≈ b_j-1. The overall work of the sieve is then ∼ k(n/b_1)2^αb_1. Examples of sieves for k=2,…,8 stages, α = 0.5, all targeting residues of size ≈ 2^-400, are given in table <ref>. This table indicates that only the sieves with k=2,3,4 can outperform Karmarkar-Karp in terms of the number of fields, which requires n ≈ 8×10^8 to produce residues of size ≈ 2^-400.
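The block-size profiles behind table <ref> follow from iterating this implicit relation; a minimal sketch (the fixed-point iteration converges quickly, and for α=0.5, b_1=20 it approximately reproduces the four-stage profile (20,30,40,50) used in the experiment below):

import math

def next_block(b_prev, alpha=0.5):
    """Solve b - log2(b)/alpha = b_prev by fixed-point iteration
    (the work-equipartition condition between consecutive stages)."""
    b = b_prev
    for _ in range(60):
        b = b_prev + math.log2(b) / alpha
    return b

def plan_sieve(b1, target, alpha=0.5):
    """Block sizes b_1, b_2, ... until sum(b_j) >= target bits of
    cancellation (final residue ~ 2^-target)."""
    blocks, total = [float(b1)], float(b1)
    while total < target:
        blocks.append(next_block(blocks[-1], alpha))
        total += blocks[-1]
    return blocks

print([round(b) for b in plan_sieve(20, 140)])  # -> [20, 30, 40, 52]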
At this size (n ≈ 8×10^8), Karmarkar-Karp takes work roughly 2^35, well below that of any of these sieves. To outperform Karmarkar-Karp with this style of sieve, the algorithm that solves number partitioning on the blocks would need to have α≲ 0.22, and even then lower-order terms not counted in the asymptotic expression would likely dominate the work.

§ COMPUTER EXPERIMENTS

In this section, we apply fast algorithms to the problem of finding a small cosmological constant in an ADK landscape. We show that they allow the Computational Censorship Hypothesis to be satisfied.

§.§ Karmarkar-Karp

To empirically test the Karmarkar-Karp algorithm in a regime relevant to the cosmological constant problem, we generated random instances of the number partitioning problem at various values of n, in which each of the n numbers is independently sampled uniformly from {0,1,2,…,2^430-1}. In figure <ref>, we plot the fraction of instances on which the Karmarkar-Karp algorithm was successful with n numbers, where we defined success as achieving residue less than 2^30. In the context of finding a small cosmological constant within the ADK model, one starts with real numbers of order 1, and seeks to find a residue of order 10^-122. Here we have scaled up the numbers by a factor of 2^430 and represented them as integers. This use of fixed-point arithmetic is strictly for computational convenience. Our definition of success corresponds to achieving a residue which is smaller than the magnitude of the initial numbers by a factor of 2^400≃ 10^120 and thus corresponds to finding a cosmological constant close to that observed for our universe[A more precise match to our universe would be to seek a factor of 2^406, but this was not convenient to work with because it put the memory requirements of the algorithm just slightly beyond the available 128G of RAM on most of our computers. Achieving a factor 2^406 requires n ≃ 8.7×10^8 and correspondingly an increase in time and memory cost of less than 20%.]. The extra 30 bits of precision are to ensure that “numerical noise” should be small.

By the analysis of <cit.>, if the Karmarkar-Karp algorithm is applied to real numbers uniformly distributed on [0,1], the size of the final residue should be exponentially distributed. That is, the probability that the residue lies between y and y+dy should be λe^-λy dy, where λ = e^c log^2 n, with c asymptotically equal to 1/√(2) as n →∞. Empirical studies at finite n consistently observe values of c smaller than 1/√(2) <cit.>. By defining success to be a reduction factor of ϵ = 2^-400, we should obtain success probability P = ∫_0^ϵ λe^-λy dy = 1-exp[-ϵ e^c log^2 n]. As one can see from figure <ref>, the observed success fraction from our trials of the Karmarkar-Karp algorithm on random instances agrees well with this prediction if we take c = 0.6615.

§.§ Sieves

The predicted work of a sieve to produce a residue of length 2^-400 is not so large that the universe would be unable to compute it, but it is large enough to require significant effort with current hardware. As a simple proof of concept, we will tackle a scaled-down version with four sieve stages of block sizes (b_1,b_2,b_3,b_4) = (20, 30, 40, 50), and use a simple meet-in-the-middle algorithm (α = 0.5) to solve the number partitioning problem <cit.>.
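A minimal sketch of such a meet-in-the-middle block solver (Python, standard library only): enumerate all signed sums of one half of the block, sort the signed sums of the other half, and binary-search for the best cancellation, giving time and memory 2^(b/2)+o(b) per block of b numbers.

from bisect import bisect_left
from itertools import product

def signed_sums(nums):
    return [sum(s * x for s, x in zip(signs, nums))
            for signs in product((+1, -1), repeat=len(nums))]

def optimal_residue(nums):
    """Meet-in-the-middle: min over sign choices of |sum_j s_j x_j|."""
    half = len(nums) // 2
    left = signed_sums(nums[:half])
    right = sorted(signed_sums(nums[half:]))
    best = float('inf')
    for l in left:
        i = bisect_left(right, -l)          # right sum closest to -l
        for j in (i - 1, i):
            if 0 <= j < len(right):
                best = min(best, abs(l + right[j]))
    return best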
The profile of this experiment is as follows, which predicts an expected size of the final residue output at sieve stage four of 𝔼[s] = 2^-121.3.

Stage   b    Inputs    Distribution   #NPPs   Work     𝔼[s]
One     20   1200000   Uniform        60000   2^25.9   2^-16.1
Two     30   60000     Exponential    2000    2^26.0   2^-41.3
Three   40   2000      Exponential    50      2^25.6   2^-76.4
Four    50   50        Exponential    1       2^25.0   2^-121.3

The result of the experiment is captured in figure <ref>.

§ ACKNOWLEDGEMENTS

We would like to thank Scott Aaronson, Adam Bouland, and Liam McAllister for discussions. N.B. is supported in part by the DuBridge Fellowship of the Walter Burke Institute for Theoretical Physics. R.B. is supported in part by the Berkeley Center for Theoretical Physics, by the National Science Foundation (award numbers PHY-1521446, PHY-1316783), by FQXi, and by the US Department of Energy under contract DE-AC02-05CH11231. S.J. and B.L. thank U. Maryland for use of the Deepthought2 high performance computing cluster. Parts of this manuscript are a contribution of NIST, an agency of the US government, and are not subject to US copyright.
http://arxiv.org/abs/1706.08503v1
{ "authors": [ "Ning Bao", "Raphael Bousso", "Stephen Jordan", "Brad Lackey" ], "categories": [ "hep-th", "gr-qc", "quant-ph" ], "primary_category": "hep-th", "published": "20170626175110", "title": "Fast optimization algorithms and the cosmological constant" }
^[1][2] Université de Lyon, UMR 5822, CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, 4 rue Enrico Fermi, 69622 Villeurbanne Cedex, FRANCE ^[2] Fields, Gravity & Strings, CTPU, Institute for Basic Sciences, Daejeon 34047 KOREA

We consider BPS-counting functions 𝒵_N,M of N parallel M5-branes probing a transverse ℤ_M orbifold geometry. These brane web configurations can be dualised into a class of toric non-compact Calabi-Yau threefolds which have the structure of an elliptic fibration over (affine) A_N-1. We make this symmetry of 𝒵_N,M manifest in particular regions of the parameter space of the setup: we argue that for specific choices of the deformation parameters, the supercharges of the system acquire particular holonomy charges which lead to infinitely many cancellations among states contributing to the partition function. The resulting (simplified) 𝒵_N,M can be written as a sum over weights forming a single irreducible representation of the Lie algebra 𝔞_N-1 (or its affine counterpart). We show this behaviour explicitly for an extensive list of examples for specific values of (N,M) and generalise the arising pattern for generic configurations. Finally, for a particular compact M5-brane setup we use this form of the partition function to make the duality N↔ M manifest.

§ INTRODUCTION

Six dimensional superconformal field theories (SCFTs) along with their compactifications to lower dimensions have attracted a lot of attention in recent years: on the one hand, the dynamics of these theories display very rich structures which are interesting to explore in their own right. On the other hand, these SCFTs have seen numerous applications in string and field theories. Indeed, the fact that many of them can be engineered from string or M-theory through various brane constructions (see for example <cit.> for recent work on theories constructed from parallel M5-branes, with M2-branes stretched between them) has allowed the identification of interesting structures in the latter and has provided an invaluable window into their inner workings <cit.>. Similarly, from the point of view of field theory, the recent years have brought to light interesting new dualities: for example, different types of compactifications of six-dimensional SCFTs lead to various lower-dimensional theories. The connection to a common higher-dimensional parent theory gives rise to relations between certain quantities computed in these theories. The first example of this phenomenon was discussed in <cit.>, relating the partition functions of four-dimensional gauge theories to conformal blocks in Liouville theory. Since then, multiple other examples of this type have been found. Describing these SCFTs using traditional tools of field theory, however, is typically rather difficult, since in general no Lagrangian description is known.
Therefore, different methods – many of them inspired by their relation to string theory – have been developed. In particular, considering compactifications of F-theory <cit.> on elliptically fibered Calabi-Yau threefolds, a classification <cit.> (see also <cit.> for recent work in this direction) of six-dimensional SCFTs has been proposed. Those theories with 𝒩=(2,0) supersymmetry allow an ADE classification and can be realised within type II string theory compactified on a ℝ^4/Γ singularity, with Γ a discrete ADE subgroup of SU(2). In the case of an A-type orbifold (i.e. Γ=ℤ_N) these theories have a dual description in terms of N parallel M5-branes probing a transverse ℝ^4_⊥ space. In this paper we study the A_N-1 symmetry in a series of mass-deformed theories that are described by N parallel M5-branes (separated along ℝ or 𝕊^1) that probe a transverse ℝ^4_⊥/ℤ_M singularity. The BPS partition functions 𝒵_N,M of this system have been computed explicitly in <cit.> for M=1 and in <cit.> for generic M∈ℤ. There are various techniques to obtain 𝒵_N,M, which exploit different dual descriptions of the M-brane setup:* For general N,M one can associate a toric Calabi-Yau threefold[In the case that the M5-branes are separated along ℝ (called the non-compact setup in this work), the Calabi-Yau is an elliptic fibration over A_N-1, while in the case that ℝ is compactified to 𝕊^1 (called the compact brane setup in this work), the latter is replaced by affine A_N-1.] X_N,M to the M-brane setup whose topological string partition function captures 𝒵_N,M.* The M-brane setup is dual to a (p,q) 5-brane web in type II string theory <cit.>. The Nekrasov partition function on the world-volume of the D5-branes corresponds to 𝒵_N,M.* Considering BPS M2-branes stretched between the M5-branes, the intersection of the two has been dubbed the M-string in <cit.>. The partition function of the latter is computed by an 𝒩=(0,2) sigma model, whose elliptic genus was shown in <cit.> to capture 𝒵_N,M.

Besides the mass parameter m, the partition function 𝒵_N,M needs to be regularised by the introduction of two deformation parameters ϵ_1,2, which (from the perspective of the dual gauge theory) correspond to the introduction of the Ω-background <cit.>. For generic values of m,ϵ_1,2, the M-string world-sheet theory is described by a sigma model with 𝒩=(2,0) supersymmetry. However, it was remarked in <cit.> that for m=±(ϵ_1-ϵ_2)/2 the supersymmetry is enhanced to 𝒩=(2,2), leading to 𝒵_N,M(m=±(ϵ_1-ϵ_2)/2)=1 (after a suitable normalisation).

In this paper we generalise this observation to make the A_N-1 (or affine A_N-1) symmetry of the partition function 𝒵_N,M manifest and organise it according to irreducible (integrable) representations of the associated Lie algebra 𝔞_N-1 (or affine 𝔞_N-1) for certain choices of the deformation parameters: for simplicity, we consider the unrefined partition functions (i.e. we choose ϵ_1=-ϵ_2=ϵ) and consider the case m=nϵ with n∈ℕ. While the former enhances the supersymmetry to 𝒩=(4,0), the latter choice does not change the superconformal algebra on the M-string world-sheet. Nevertheless, the partition function 𝒵_N,M(m=nϵ,ϵ) simplifies dramatically because the corresponding supercharges acquire a non-trivial holonomy structure, which allows for infinitely many cancellations among the BPS states contributing to the partition function: indeed, by studying a series of examples, we show that in the case of a non-compact brane configuration (i.e.
in the cases where the M5-branes are separated along the non-compact ℝ), the partition function becomes a polynomial of order Mn^2 in Q_f_a=e^{-t_f_a}, where t_f_a is the distance between the M5-branes (in suitable units). Similarly, the partition functions of the compact brane configurations simplify as well (although their dependence on the Q_f_a remains non-polynomial).

Moreover, since the choice m=nϵ is fully compatible with all symmetries of the elliptic fibration X_N,M, notably A_N-1 (or affine A_N-1 in the case of a compact brane configuration), these symmetries remain manifestly visible in 𝒵_N,M(m=nϵ,ϵ). Indeed, from the perspective of the Calabi-Yau manifold X_N,M, the t_f_a can be written as integrals of the Kähler form over a set of ℙ^1's that can be identified with the simple positive roots of the Lie algebra 𝔞_N-1 (or affine 𝔞_N-1) (see e.g. <cit.> for recent applications). Using this identification, we show in a large series of examples that for the choice m=nϵ the partition function 𝒵_N,M(m=nϵ,ϵ) can be written as a sum over weights that form a single irreducible (or, in the affine case, integrable) representation of the Lie algebra 𝔞_N-1 (affine 𝔞_N-1). In the basis of the fundamental weights, the highest weight of these representations is given by [Mn^2,…,Mn^2]. Furthermore, each summand in the sum over weights is a quotient of Jacobi theta functions transforming with a well-defined index under an SL(2,ℤ) symmetry corresponding to the elliptic fibration of X_N,M. Based on an extensive list of examples of different brane configurations (and choices for n∈ℕ) we find a pattern for all these symmetries that allows us to formulate precise conjectures for generic values of (N,M) and n.

Finally, the compact M-brane configurations (i.e. those in which the M5-branes are separated along 𝕊^1 rather than ℝ) enjoy a duality upon exchanging M↔N, as can be seen directly from the web diagram of X_N,M. For the simplest[We expect that similar results hold true for generic values (N,M).] such configuration (i.e. N=2=M) we show explicitly that the partition function can be written as a double sum over integrable representations of affine 𝔞_N-1 and affine 𝔞_M-1 respectively. This form makes not only the algebraic structures but also the duality manifest. Since compact brane setups of the type (N,M) capture <cit.> a class of little string theories (see <cit.> for various different approaches as well as <cit.> for reviews) with 𝒩=(1,0) supersymmetry, we expect that these findings will prove useful for the further study of little string theories in general, in particular their symmetries and dualities (see e.g. <cit.> for a recent application).

The outline of this paper is as follows. In section <ref> we describe the M-theory brane setup probing a transverse orbifold geometry. We introduce all necessary parameters to describe the configurations and discuss different approaches in the literature to compute the BPS counting functions 𝒵_N,M. Finally, we also discuss the supersymmetry preserved by these configurations (from the point of view of the M-string world-sheet theory), focusing specifically on the holonomy charges of the supercharges as a function of the deformation parameters (m,ϵ_1=-ϵ_2=ϵ). In section <ref> the expression for the topological string partition function is introduced. We furthermore motivate the choice m=nϵ of deformation parameters by exhibiting explicit cancellations in 𝒵_N,M.
In section <ref> we present specific examples of non-compact brane setups and rewrite the corresponding partition functions as sums over Weyl orbits of weights forming specific irreducible representations of 𝔞_N-1≅𝔰𝔩(N,ℂ). In section <ref> we repeat a similar analysis for certain compact brane configurations and rewrite them in a similar manner as sums over Weyl orbits of weights forming integrable representations of the affine Lie algebra 𝔞_N-1 (the affine counterpart of 𝔰𝔩(N,ℂ)). Based on the examples of the previous two sections, in section <ref> we give a general expression for the compact partition functions 𝒵_N,M (for generic (N,M)) as a sum over integrable representations of affine 𝔞_N-1. The non-compact partition functions in turn are obtained by an appropriate decompactification limit. Finally, section <ref> contains our conclusions. Several supplementary computations as well as additional information on simple and affine Lie algebras and their representations are relegated to five appendices.

§ M-BRANE CONFIGURATIONS AND CALABI-YAU MANIFOLDS

In this paper we consider theories which can be described through particular BPS configurations of M-branes. In the following subsection we provide a review of these M-brane webs, and we relate them to a class of toric Calabi-Yau threefolds in section <ref>.

§.§ M-Brane Webs

In the following we describe configurations of parallel M5-branes with M2-branes stretched between them. Depending on whether the M5-branes are separated along 𝕊^1 or ℝ, we call these configurations either compact or non-compact.

§.§.§ Non-Compact Brane Webs

We first discuss non-compact brane webs in M-theory compactified on 𝕋^2×ℝ^4_||×ℝ×ℝ^4_⊥ (with coordinates x^0,…,x^10) and consider a configuration of N M5- and K M2-branes as shown in table <ref>. Here the M5-branes are spread out along the x^6 direction and we denote their positions by a_a with a=1,…,N (such that a_a<a_b for a<b). For explicit computations we introduce the N-1 distances between adjacent M5-branes as

t_f_a = a_{a+1} - a_a , ∀ a=1,…,N-1 ,

which typically appear in the form of

Q_f_a = e^{-t_f_a/R_0} , ∀ a=1,…,N-1 .

Furthermore, we also denote the t_f_a collectively as 𝐭=(t_f_1,…,t_f_N-1). The M2-branes are stretched between adjacent M5-branes, and their two-dimensional intersections have been termed M-strings in <cit.>. Furthermore, denoting the radii of 𝕊_0^1 and 𝕊_1^1 by R_0 and R_1 respectively (i.e. x^0∼x^0+2πR_0 and x^1∼x^1+2πR_1), we introduce the parameter

τ := i R_0/R_1 and Q_τ = e^{2πiτ} .

§.§.§ Compact Brane Webs

By arranging the M5-branes on a circle rather than on ℝ, we obtain compact M-brane configurations. Specifically, we replace the ℝ along direction x^6 by 𝕊^1_6 with radius R_6 (i.e. x^6∼x^6+2πR_6), as shown in table <ref>. As before, we denote the N positions of the M5-branes on 𝕊_6^1 by a_a (with a=1,…,N), which satisfy the relation

0 ≤ a_1 ≤ a_2 ≤ … ≤ a_N ≤ 2πR_6 ,

and introduce the N distances between adjacent branes as

t_f_a = a_{a+1} - a_a for a=1,…,N-1 , and t_f_N = 2πR_6 - (a_N - a_1) .

As in the non-compact case, we also introduce

Q_f_a = e^{-t_f_a/R_0} , ∀ a=1,…,N ,

along with the parameter[We use the definition (<ref>) also in the compact case.]

ρ := i R_6/R_0 and Q_ρ = e^{2πiρ} .

Notice the following relation

ρ = (i/2π) ∑_{a=1}^{N} t_f_a/R_0 and Q_ρ = Q_f_1 Q_f_2 … Q_f_N .

With this notation, the non-compact brane configurations are obtained in the limit ρ→i∞ (i.e. Q_ρ→0).

§.§.§ Deformation Parameters

The partition functions of the brane configurations introduced above are typically divergent. To circumvent this problem, one can introduce various deformation parameters <cit.>.
Indeed, the underlying geometries allow for two different types of U(1) twists. Upon introducing the complex coordinates for ℝ^4_|| and ℝ^4_⊥

z_1 = x^2+ix^3 , z_2 = x^4+ix^5 , w_1 = x^7+ix^8 , w_2 = x^9+ix^10 ,

we can define

* ϵ-deformation: As we go around the compact x^0-direction (i.e. the circle 𝕊_0^1) we can twist by

U(1)_ϵ_1×U(1)_ϵ_2 : (z_1,z_2) ⟶ (e^{2πiϵ_1} z_1, e^{2πiϵ_2} z_2) , (w_1,w_2) ⟶ (e^{-2πi(ϵ_1+ϵ_2)/2} w_1, e^{-2πi(ϵ_1+ϵ_2)/2} w_2) .

From the point of view of supersymmetric gauge theories which can be associated with the brane configurations described above (see <cit.>), this deformation introduces the Ω-background <cit.>, allowing to compute the partition functions in an efficient manner.

* mass deformation: As we go around the compact x^1-direction (i.e. the circle 𝕊^1_1) we can twist by

U(1)_m : (w_1,w_2) ⟶ (e^{2πim} w_1, e^{-2πim} w_2) .

As we shall briefly discuss further below, from the perspective of the gauge theories (that are engineered from a dual type II setup), this deformation parameter corresponds to a mass for certain hypermultiplet fields.

The action of the deformation parameters ϵ_1,2 and m can be schematically represented in table <ref>. The former regularise divergences in the partition function coming from contributions of the non-compact dimensions, while at the same time breaking part of the supersymmetries, as we shall discuss in sections <ref> and <ref> respectively. Finally, we remark that in the later sections of this paper, the parameters ϵ_1,2 and m appear through

Q_m = e^{2πim} , q = e^{2πiϵ_1} , t = e^{-2πiϵ_2} .

§.§ Orbifolds of M-brane webs

§.§.§ Orbifold Action and Brane Web Parameters

A generalisation of the above M-brane configurations has been discussed in <cit.> (see also <cit.>). Indeed, upon considering M5-branes probing an orbifold geometry (rather than ℝ^4_⊥), the positions of the M2-branes can be separated in the transverse direction. Specifically, we generalise ℝ^4_⊥ to an Asymptotically Locally Euclidean space of type A_M-1 (which we denote by ALE_A_M-1) for M∈ℕ, which can be obtained as the following orbifold

ALE_A_M-1 = ℝ^4_⊥/ℤ_M , with ℤ_M : (w_1,w_2) ⟼ (e^{2πin/M} w_1, e^{-2πin/M} w_2) for n=0,…,M-1 .

As explained in <cit.>, the twists (<ref>) and (<ref>), which introduce the deformation parameters ϵ_1,2 and m, are compatible with the ALE_A_M-1 geometry. Indeed, when viewed as an 𝕊^1 fibration over ℝ^3, the latter possesses two distinct U(1) isometries related to the fiber and the base respectively. Therefore, the generalised M-brane configuration (including the deformation parameters ϵ_1,2 and m) can be represented by table <ref>, where we again allowed for the possibility of arranging the M5-branes along the x^6-direction either on ℝ or on 𝕊_6^1. As in the case M=1, the distances between the M5-branes along the direction x^6 give rise to N parameters t_f_a for a=1,…,N (see eq. (<ref>)). The case |t_f_N|<∞ corresponds to a compact brane configuration (i.e. the direction x^6 is compactified on 𝕊_6^1 with finite radius R_6), while the limit |t_f_N|→∞ corresponds to a non-compact brane configuration (i.e. the direction x^6 is non-compact). As explained in <cit.>, besides the (t_f_a,m,ϵ_1,2), the orbifolded configuration allows for another set of parameters, corresponding to the expectation values T_i (for i=1,…,M) of the M-theory three-form along 𝕊_1^1×𝒞_i, where 𝒞_i is a basis of the 2-cycles of ALE_A_M-1.
In later computations, these parameters typically appear in the form

Q̅_i = e^{-T_i} , ∀ i=1,…,M .

Furthermore, the parameters τ and ρ (see (<ref>) and (<ref>) respectively) are in this duality frame given by

τ = (i/2π) ∑_{i=1}^{M} T_i and ρ = (i/2π) ∑_{a=1}^{N} t_f_a/R_0 ,

which is equivalent to

Q_τ = Q̅_1 … Q̅_M and Q_ρ = Q_f_1 … Q_f_N .

The full orbifolded M-brane configuration is finally parametrised by (t_f_1,…,t_f_N,T_1,…,T_M,m,ϵ_1,2), which we denote more compactly by (𝐭,𝐓,m,ϵ_1,2).

§.§.§ Type II Description

The parameters introduced in the above M-brane configurations can be given a more geometric interpretation when dualising to the corresponding type II picture. Indeed, upon reducing the orbifold M-theory configuration along 𝕊_1^1, it can be dualised into a web of intersecting D5- and NS5-branes as shown in table <ref>, where we represented the ALE_A_M-1 space as a (particular limit of a) fibration of 𝕊^1_7 over ℝ^3_⊥ (see <cit.> for more details). While the parameters ϵ_1,2 can be introduced in the same fashion as in the M-theory case, the parameter m can no longer be interpreted as a U(1) deformation (since the corresponding circle 𝕊_1^1 is no longer present). Instead, it is introduced by giving mass m to the bifundamental hypermultiplets corresponding to strings stretched between the D5- and NS5-branes. At the level of the brane web, it corresponds to a deformation with (1,1) branes in the (x^6,x^7)-plane, as shown in figure <ref>. This figure also shows the remaining parameters (𝐭,𝐓) as the distances of the D5- and NS5-branes in the x^6 and x^7 direction respectively.[For later convenience, we adopt the convention that the T_i are counted in units of R_0.] As discussed in <cit.>, choosing the deformation parameter m to be the same for all intersections of D5- and NS5-branes is not the most general case, since a generic such brane web has NM+2 independent parameters. In the following, however, we focus on this simpler case, where all mass deformations are the same (as indicated in Figure <ref>).

§.§.§ Toric Calabi-Yau Manifolds

There is a further description of the theories introduced above. Indeed, as explained in <cit.>, one can associate a toric non-compact Calabi-Yau 3-fold (CY3fold) X_N,M with the 5-brane web. More precisely, the web diagram shown in Figure <ref> can be interpreted as the dual of the Newton polygon which encodes how X_N,M is constructed from ℂ^3 patches. A generic X_N,M can be described as a ℤ_N×ℤ_M orbifold of X_1,1. The latter is a Calabi-Yau threefold that resembles the geometry of the resolved conifold at certain boundary regions of its moduli space (i.e. upon sending τ,ρ→i∞).[Orbifolds of the latter have for example been studied in <cit.>.] More importantly, X_N,M has the structure of a double elliptic fibration: it can be understood as an elliptic fibration over the affine A_N-1 space, which (as already mentioned) itself is an elliptic fibration. The two elliptic parameters are ρ and τ, which were introduced in (<ref>). The remaining parameters (t_f_1,…,t_f_N-1), (T_1,…,T_M-1) as well as m correspond to further Kähler parameters of X_N,M. We shall further elaborate on the interpretation of the parameters ϵ_1,2 from the point of view of the Calabi-Yau manifold once we discuss the topological string partition function on X_N,M in section <ref>. The double elliptic fibration structure of X_N,M corresponds to the presence of two SL(2,ℤ) symmetries which act separately on the modular parameters τ and ρ.
Particularly for the case M=1 we have the following action on the various parameters <cit.>

SL(2,ℤ)_τ : (τ,ρ,m,t_f_1,…,t_f_N-1,ϵ_1,ϵ_2) ⟶ ((aτ+b)/(cτ+d), ρ, m/(cτ+d), t_f_1,…,t_f_N-1, ϵ_1/(cτ+d), ϵ_2/(cτ+d)) ,

SL(2,ℤ)_ρ : (τ,ρ,m,t_f_1,…,t_f_N-1,ϵ_1,ϵ_2) ⟶ (τ, (aρ+b)/(cρ+d), m/(cρ+d), t_f_1/(cρ+d),…,t_f_N-1/(cρ+d), ϵ_1/(cρ+d), ϵ_2/(cρ+d)) ,

where ( a b ; c d ) ∈ SL(2,ℤ), i.e. a,b,c,d∈ℤ and ad-bc=1.

§.§ Supersymmetry

In order to discuss the amount of supersymmetry preserved by the M-brane configurations described above, we adopt the point of view of the M-string <cit.>: for a configuration of parallel M5-branes probing a flat ℝ^4_⊥ with M2-branes stretched between them (i.e. configurations with M=1), the M-string preserves 𝒩=(4,4) supersymmetry with R-symmetry group Spin_R(4). The latter acts on the space ℝ^4_⊥ transverse to the M5-branes. The supercharges <cit.> transform as the representations

(2,1,2,1)_+ ⊕ (1,2,1,2)_- ,

under Spin(4)×Spin_R(4)×Spin(1,1), where Spin(4)×Spin(1,1) is the Lorentz group on the M5-brane world-volume (with Spin(1,1) the Lorentz group on the world-volume of the M-string) and the ± subscript denotes the chirality with respect to Spin(1,1). As was explained in <cit.>, upon introducing the simple roots of Spin(8) ⊃ Spin_R(4)×Spin(4)

u_1 = e_1-e_2 , u_2 = e_2-e_3 , u_3 = e_3-e_4 , u_4 = e_3+e_4 ,

the weight vectors of the preserved supercharges are

(2,1,2,1)_+ : {(e_1+e_2+e_3+e_4)/2 , (e_1+e_2-e_3-e_4)/2 , (-e_1+e_2-e_3-e_4)/2 , (-e_1+e_2+e_3+e_4)/2} ,

(1,2,1,2)_- : {(e_1-e_2+e_3-e_4)/2 , (e_1-e_2-e_3+e_4)/2 , (-e_1-e_2+e_3-e_4)/2 , (-e_1-e_2-e_3+e_4)/2} .

Furthermore, as discussed in <cit.>, the orbifold action (<ref>) is not compatible with all 8 supercharges: indeed, only (2,1,2,1)_+ (i.e. the supercharges with positive chirality) is invariant. Therefore, for configurations with M>1, supersymmetry is broken to 𝒩=(4,0). The latter is in general further reduced by the deformations (<ref>): while the mass deformation (<ref>) (which acts on ℝ^4_⊥ in a similar manner as the ℤ_M orbifold (<ref>)) breaks the same supercharges as the orbifold action (and leaves invariant all of (2,1,2,1)_+), the ϵ-deformation in general[In the unrefined case (i.e. for ϵ_1=-ϵ_2), in fact all supercharges (2,1,2,1)_+ remain invariant, such that the supersymmetry remains 𝒩=(4,0).] only leaves the supercharges corresponding to

(e_1+e_2+e_3+e_4)/2 and (-e_1+e_2+e_3+e_4)/2

invariant. It therefore reduces the supersymmetry to 𝒩=(2,0).

§ PARTITION FUNCTIONS

§.§ Compact and Non-Compact M-brane Configurations

An important quantity to describe the different M-brane configurations introduced above is the partition function 𝒵_N,M that counts BPS states. The latter can be weighted by fugacities related to the various symmetries described above. Concretely, the partition functions can be computed in various different manners, as explained in <cit.>:

* Topological string partition function: The partition function 𝒵_N,M is captured by the (refined) topological string partition function on the toric Calabi-Yau threefold X_N,M. The latter can efficiently be computed using the (refined) topological vertex <cit.>.

* M-string partition function: 𝒵_N,M can also be computed as the M-string partition function. For configurations (N,1) it was shown in <cit.> that the latter can be obtained as the elliptic genus of a sigma model with 𝒩=(2,0) supersymmetry whose target space is a product of copies of Hilb[ℂ^2], the Hilbert scheme of points in ℝ^4.
This result was generalised in <cit.> to the case M>1, where it was shown that 𝒵_N,M can be computed as the elliptic genus of a sigma model with 𝒩=(2,0) supersymmetry whose target space is given by ℳ(r,k), the moduli space of U(r) instantons of charge k.

* Nekrasov instanton calculus: The partition function can also be obtained from the 5-dimensional gauge theory that lives on the world-volume of the D5-branes in the type II brane-web description (see section <ref>). The non-perturbative partition function of the latter can be computed using Nekrasov's instanton calculus on the Ω-background <cit.>.

* BPS scattering amplitudes in type II string theory: As discussed in <cit.>, certain of the partition functions 𝒵_N,M can also be obtained from a specific class of higher-derivative scattering amplitudes in type II string theory.

Using either of these approaches, the partition function for a compact (i.e. Q_ρ≠0) brane configuration (N,M) can be written in the following manner <cit.>

𝒵_N,M(𝐓,𝐭,m,ϵ_1,ϵ_2) = W_M(∅)^N ∑_{α_i^(a)} Q_ρ^{∑_{i=1}^{M}|α_i^(N)|} (∏_{a=1}^{N} Q_f_a^{∑_{i=1}^{M}(|α_i^(a)|-|α_i^(N)|)}) (∏_{a=1}^{N} ∏_{i=1}^{M} ϑ_{α_i^(a+1)α_i^(a)}(Q_m;τ)/ϑ_{α_i^(a)α_i^(a)}(√(t/q);τ)) × (∏_{1≤i<j≤M} ∏_{a=1}^{N} ϑ_{α_i^(a)α_j^(a+1)}(Q_{ij}Q_m^{-1};τ) ϑ_{α_i^(a+1)α_j^(a)}(Q_{ij}Q_m;τ)/(ϑ_{α_i^(a)α_j^(a)}(Q_{ij}√(t/q);τ) ϑ_{α_i^(a)α_j^(a)}(Q_{ij}√(q/t);τ)))|_{α_i^(1)=α_i^(N+1)} ,

where the α_i^(a) are NM integer partitions (with sizes |α_i^(a)|), α_i^(N+1)=α_i^(1), and

Q_{ij} = Q̅_i Q̅_{i+1} … Q̅_{j-1} , for 1≤i<j≤M .

Furthermore, for two integer partitions μ=(μ_1,…,μ_ℓ_1) and ν=(ν_1,…,ν_ℓ_2) of lengths ℓ_1,2 respectively, we have

ϑ_{μν}(x;τ) = ∏_{(i,j)∈μ} ϑ(x^{-1} q^{-μ_i+j-1/2} t^{-ν^t_j+i-1/2};τ) ∏_{(i,j)∈ν} ϑ(x^{-1} q^{ν_i-j+1/2} t^{μ^t_j-i+1/2};τ) .

Here (i,j) denotes the position of a given box in the Young diagram of the partitions μ and ν respectively, μ^t denotes the transposed partition of μ, and

ϑ(x;τ) = iθ_1(τ;x)/(Q_τ^{1/8} ∏_{k=1}^{∞}(1-Q_τ^k)) ,

where θ_1(τ;x) (for x=e^{2πiz}) is the Jacobi theta-function

θ_1(τ;z) = 2 Q_τ^{1/8} sin(πz) ∏_{n=1}^{∞} (1-Q_τ^n)(1-xQ_τ^n)(1-x^{-1}Q_τ^n) .

Finally, the factor W_M(∅) in (<ref>) is defined as

W_M(∅;𝐓,m,ϵ_1,ϵ_2) = lim_{ρ→i∞} 𝒵_1,M(𝐓,ρ,m,ϵ_1,ϵ_2) ,

and we also introduce the normalised partition function

𝒵̂_N,M(𝐓,𝐭,m,ϵ_1,ϵ_2) = 𝒵_N,M(𝐓,𝐭,m,ϵ_1,ϵ_2)/W_M(∅)^N .

The latter was related in <cit.> to a U(N)^M gauge theory (which is dual to a U(M)^N gauge theory), as well as to (five-dimensional) little string theory.
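Since every building block of 𝒵_N,M reduces to the Jacobi theta-function θ_1(τ;z), it is convenient to have a numerical implementation of the product form above at hand for the checks performed in later sections. The following Python sketch is our own illustration (not part of the original computation); the truncation order K of the infinite product is an arbitrary choice:

```python
import cmath

def theta1(tau, z, K=60):
    """Jacobi theta-function theta_1(tau; z) in the product form of the text:
    2 Q^(1/8) sin(pi z) prod_{n>=1} (1 - Q^n)(1 - x Q^n)(1 - Q^n / x),
    with Q = e^{2 pi i tau} and x = e^{2 pi i z}; the product is truncated at K."""
    Q = cmath.exp(2j * cmath.pi * tau)
    x = cmath.exp(2j * cmath.pi * z)
    prod = 1.0
    for k in range(1, K + 1):
        prod *= (1 - Q**k) * (1 - x * Q**k) * (1 - Q**k / x)
    return 2 * Q**0.125 * cmath.sin(cmath.pi * z) * prod

tau, eps = 0.3 + 1.1j, 0.17                       # generic sample values (Im tau > 0)
print(abs(theta1(tau, 0.0)))                      # 0: theta_1(tau; 0) = 0
print(abs(theta1(tau, -eps) + theta1(tau, eps)))  # 0: theta_1 is odd in z
```

The two printed numbers vanish to machine precision, reflecting θ_1(τ;0)=0 and the oddness of θ_1 in its elliptic argument, the two properties that drive all cancellations exploited below.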
For the explicit computations in the remainder of this work it is more convenient to rewrite the partition function in the following form:

𝒵_N,M(𝐓,𝐭,m,ϵ_1,ϵ_2) = W_M(∅)^N ∑_{α_i^(a)} (∏_{a=1}^{N} (-Q_f_a)^{∑_{i=1}^{M}|α_i^(a)|}) (∏_{a=1}^{N} ∏_{k=1}^{M} ∏_{(i,j)∈α_k^(a)} θ_1(τ;z_{k,ij}^{(a+1)}) θ_1(τ;v_{k,ij}^{(a-1)})/(θ_1(τ;u_{k,ij}^{(a)}) θ_1(τ;w_{k,ij}^{(a)}))) × ∏_{a=1}^{N} ∏_{1≤k<l≤M} (∏_{(i,j)∈α_k^(a)} θ_1(τ;z_{l,ij}^{(a+1)}+T̃_{kl}) θ_1(τ;v_{l,ij}^{(a-1)}-T̃_{kl})/(θ_1(τ;u_{l,ij}^{(a)}+T̃_{kl}) θ_1(τ;w_{l,ij}^{(a)}+T̃_{kl}))) × (∏_{(i,j)∈α_l^(a)} θ_1(τ;z_{k,ij}^{(a+1)}-T̃_{kl}) θ_1(τ;v_{k,ij}^{(a-1)}+T̃_{kl})/(θ_1(τ;u_{k,ij}^{(a)}-T̃_{kl}) θ_1(τ;w_{k,ij}^{(a)}-T̃_{kl}))) .

Here we introduced

T̃_i = (i/2π) T_i and T̃_{kl} = T̃_k + T̃_{k+1} + … + T̃_{l-1} , for k,l=1,…,M with k≤l ,

and the arguments of the Jacobi theta-functions in (<ref>) are given by

z_{k,ij}^{(a)} = -m + ϵ_1(α_{k,i}^{(a)}-j+1/2) - ϵ_2((α^{(a+1)}_{k,j})^t-i+1/2) ,
v^{(a)}_{k,ij} = -m - ϵ_1(α_{k,i}^{(a)}-j+1/2) + ϵ_2((α^{(a-1)}_{k,j})^t-i+1/2) ,
w^{(a)}_{k,ij} = ϵ_1(α_{k,i}^{(a)}-j+1) - ϵ_2((α^{(a)}_{k,j})^t-i) ,
u^{(a)}_{k,ij} = ϵ_1(α_{k,i}^{(a)}-j) - ϵ_2((α^{(a)}_{k,j})^t-i+1) .

Specifically, for M=1 we have the following expression

𝒵_N,1(τ,t_f_1,…,t_f_N,m,ϵ_1,ϵ_2) = ∑_{ν_1,…,ν_N} (∏_{a=1}^{N}(-Q_f_a)^{|ν_a|}) ∏_{a=1}^{N} ∏_{(i,j)∈ν_a} θ_1(τ;z^{(a)}_{ij}) θ_1(τ;v^{(a)}_{ij})/(θ_1(τ;w^{(a)}_{ij}) θ_1(τ;u^{(a)}_{ij})) ,

where we introduced the following shorthand notation for the arguments of the Jacobi theta-functions

z_{ij}^{(a)} = -m + ϵ_1(ν_{a,i}-j+1/2) - ϵ_2(ν^t_{a+1,j}-i+1/2) ,
v^{(a)}_{ij} = -m - ϵ_1(ν_{a,i}-j+1/2) + ϵ_2(ν^t_{a-1,j}-i+1/2) ,
w^{(a)}_{ij} = ϵ_1(ν_{a,i}-j+1) - ϵ_2(ν^t_{a,j}-i) ,
u^{(a)}_{ij} = ϵ_1(ν_{a,i}-j) - ϵ_2(ν^t_{a,j}-i+1) ,

for a=1,…,N.

The partition function for non-compact brane webs (which we denote 𝒵^line_N,M) can be obtained from (<ref>) through the limit Q_ρ→0 (i.e. ρ→i∞):

𝒵^line_N,M(𝐓,𝐭,m,ϵ_1,ϵ_2) = W_M(∅)^N ∑_{α_i^(a)} Q_ρ^{∑_{i=1}^{M}|α_i^(N)|} (∏_{a=1}^{N} Q_f_a^{∑_{i=1}^{M}(|α_i^(a)|-|α_i^(N)|)}) (∏_{a=1}^{N} ∏_{i=1}^{M} ϑ_{α_i^(a+1)α_i^(a)}(Q_m;τ)/ϑ_{α_i^(a)α_i^(a)}(√(t/q);τ)) × (∏_{1≤i<j≤M} ∏_{a=1}^{N} ϑ_{α_i^(a)α_j^(a+1)}(Q_{ij}Q_m^{-1};τ) ϑ_{α_i^(a+1)α_j^(a)}(Q_{ij}Q_m;τ)/(ϑ_{α_i^(a)α_j^(a)}(Q_{ij}√(t/q);τ) ϑ_{α_i^(a)α_j^(a)}(Q_{ij}√(q/t);τ)))|_{α_i^(0)=α_i^(N)=∅} ,

where 𝐭={t_f_1,…,t_f_N-1} and 𝐓={T_1,…,T_M}. Specifically for M=1 we have

𝒵^line_N,1(τ,t_f_1,…,t_f_N-1,m,ϵ_1,ϵ_2) = ∑_{ν_1,…,ν_N-1; ν_0=ν_N=∅} (∏_{a=1}^{N-1}(-Q_f_a)^{|ν_a|}) ∏_{a=1}^{N-1} ∏_{(i,j)∈ν_a} θ_1(τ;z^{(a)}_{ij}) θ_1(τ;v^{(a)}_{ij})/(θ_1(τ;w^{(a)}_{ij}) θ_1(τ;u^{(a)}_{ij})) ,

where the arguments (z_{ij}^{(a)},v_{ij}^{(a)},w_{ij}^{(a)},u_{ij}^{(a)}) for a=1,…,N-1 are the same as in (<ref>).

§.§ Particular Values of the Deformation Parameters

Viewed as BPS counting functions, (<ref>) (and its non-compact counterpart (<ref>)) depend on the fugacities (𝐓,𝐭,m,ϵ_1,ϵ_2) that refine the various symmetries associated with the (N,M) brane web. We can summarise the latter in the following table:

parameter | symmetry (compact case) | symmetry (non-compact case)
𝐓={τ,T_1,…,T_M-1} | SL(2,ℤ)_τ | SL(2,ℤ)_τ
𝐭 | affine A_N-1 | A_N-1
m | U(1)_m | U(1)_m
ϵ_1, ϵ_2 | U(1)_ϵ_1×U(1)_ϵ_2 | U(1)_ϵ_1×U(1)_ϵ_2

Here SL(2,ℤ)_τ is a generalisation of (<ref>) to the case M>1:

(τ,T_1,…,T_M-1,ρ,m,t_f_1,…,t_f_N-1,ϵ_1,ϵ_2) ⟶ ((aτ+b)/(cτ+d), T_1/(cτ+d),…,T_M-1/(cτ+d), ρ, m/(cτ+d), t_f_1,…,t_f_N-1, ϵ_1/(cτ+d), ϵ_2/(cτ+d)) .

From the point of view of the Calabi-Yau manifold X_N,M (described in section <ref>), the 𝐭 are Kähler parameters associated with the structure of an elliptic fibration over (affine) A_N-1. From the point of view of the M-brane web, the connection of the 𝐭 to (affine) A_N-1 seems less clear, since the former correspond to the distances of the M5-branes along the (non-)compact x^6 direction. However, as remarked in e.g.
<cit.>, the structure of the M5-branes along this direction can be interpreted as the Dynkin diagram of 𝔞_N-1 (or its affine extension, affine 𝔞_N-1) and the Q_f_a can be linked to the roots of these algebras respectively. Indeed, we will explain this connection in more detail in the following sections, when considering explicit examples of the partition functions 𝒵_N,M. Finally, we notice that in the compact case, the roles of 𝐓 and 𝐭 can be exchanged upon replacing (N,M)⟶(M,N). In the above table the parameters 𝐭 have been singled out since we have decided to write 𝒵_N,M in (<ref>) as a power series expansion in the Q_f_a (rather than the Q̅_i).[From the point of view of the (refined) topological vertex (which was used to compute the topological string partition function 𝒵_N,M), this corresponds to a particular choice of the preferred direction of the vertex. In the current case, the latter has been chosen horizontally with respect to Figure <ref>.]

Written as a function of all parameters mentioned above, 𝒵_N,M is rather complicated and very difficult to analyse. In this paper we therefore consider particular values for some of the parameters, such that 𝒵_N,M simplifies and the various symmetries can be made more manifest. First, for simplicity, we choose to work in the unrefined case, i.e. we set

ϵ_1 = -ϵ_2 = ϵ ,

which (as mentioned in section <ref>) leads to an enhancement of supersymmetry to 𝒩=(4,0). Furthermore, (<ref>) is fully compatible with the symmetries SL(2,ℤ)_τ as well as A_N-1 (or affine A_N-1).

In order to further define regions in the parameter space in which the partition function simplifies, we first consider the case M=1. In this case, the Spin(8) holonomy charges corresponding to the deformations (<ref>) and (<ref>) read

(ϵ,-ϵ,m,-m) ,

where we recall that the first two entries (depending on ϵ) correspond to a holonomy with respect to 𝕊^1_0 and the last two (depending on m) with respect to 𝕊^1_1. For generic values of ϵ and m (in particular for m/ϵ∈ℝ∖ℕ) there is no cancellation between the corresponding holonomy phases. Phrased differently, there is no mixing between states with distinct charges under U(1)_m and U(1)_ϵ_1×U(1)_ϵ_2 in the partition function. However, if we choose

m = nϵ , with n∈ℕ ,

the holonomy charges are no longer linearly independent over ℤ and thus holonomy phases may cancel when we go multiple times around the circle 𝕊^1_0. In this way, there may be non-trivial cancellations between the contributions of states with distinct charges under U(1)_m and U(1)_ϵ_1×U(1)_ϵ_2 in the partition function, leading to possible simplifications of 𝒵_N,1.[Notice that for n=1, in addition to (2,1,2,1)_+, the (anti-chiral) supercharges with the weight vectors (e_1-e_2-e_3+e_4)/2 and (-e_1-e_2-e_3+e_4)/2 remain unbroken, thus leading to an enhancement of supersymmetry. This fact was already remarked in <cit.> for the more generic case m=(ϵ_1-ϵ_2)/2.] For M>1, the same effect appears (at least) in the untwisted sector of the orbifold, such that we expect similar simplifications. Finally, we also remark that the choice (<ref>) is still compatible with SL(2,ℤ)_τ as well as A_N-1 (or affine A_N-1). Therefore, we can analyse the simplified partition functions 𝒵_N,M(𝐓,𝐭,m=nϵ,ϵ,-ϵ) with respect to these symmetries and write them in a fashion that makes them manifest.
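The holonomy argument can be made concrete with a few lines of code. The sketch below is our own illustration (the weight components are taken from the list in section <ref>, and the assignment of the phase exp(2πim(s_3-s_4)) is our reading of the U(1)_m twist (w_1,w_2)→(e^{2πim}w_1, e^{-2πim}w_2)): the four supercharges of (2,1,2,1)_+ are neutral under the mass twist, while those of (1,2,1,2)_- carry charge ±1, in line with the statement that the mass deformation (and likewise the ℤ_M orbifold, which acts with charge n/M instead of m) preserves precisely (2,1,2,1)_+:

```python
from fractions import Fraction

h = Fraction(1, 2)

# Spin(8) weights (s1, s2, s3, s4) of the supercharges:
chiral = [(h, h, h, h), (h, h, -h, -h), (-h, h, -h, -h), (-h, h, h, h)]          # (2,1,2,1)_+
anti_chiral = [(h, -h, h, -h), (h, -h, -h, h), (-h, -h, h, -h), (-h, -h, -h, h)] # (1,2,1,2)_-

def mass_charge(w):
    """U(1)_m acts on (w_1, w_2) with charges (+m, -m), so a spinor of
    weight (s1, s2, s3, s4) acquires the phase exp(2*pi*i*m*(s3 - s4))."""
    return w[2] - w[3]

print([mass_charge(w) for w in chiral])       # [0, 0, 0, 0]   -> invariant
print([mass_charge(w) for w in anti_chiral])  # [1, -1, 1, -1] -> broken
```

For m=nϵ the analogous holonomy phases of the ϵ-twist and the mass twist become commensurate, which is the origin of the cancellations analysed next.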
Explicitly, at the level of the partition function, the reason for the above-mentioned simplifications is the following: when choosing the parameters

ϵ_1 = -ϵ_2 = ϵ and m = nϵ , for n∈ℕ ,

the arguments (<ref>) of the theta-functions in (<ref>) take the form

z_{k,ij}^{(a)} = ϵ(α_{k,i}^{(a)} + (α^{(a+1)}_{k,j})^t - i - j + 1 - n) ,
v^{(a)}_{k,ij} = -ϵ(α_{k,i}^{(a)} + (α^{(a-1)}_{k,j})^t - i - j + 1 + n) ,
w^{(a)}_{k,ij} = u^{(a)}_{k,ij} = ϵ(α_{k,i}^{(a)} + (α^{(a)}_{k,j})^t - i - j + 1) .

For specific partitions α^{(a)}_k these combinations may become zero even for generic ϵ, thereby (with θ_1(τ;0)=0) leading to a vanishing contribution to the partition function. We also notice that for (<ref>) in general w_{k,ij}^{(a)} ≠ 0 ≠ u_{k,ij}^{(a)}: indeed, the coordinates (i,j) of the boxes in a given Young diagram are bounded from above by (α_{k,j}^{(a)})^t and α_{k,i}^{(a)} respectively, so w_{k,ij}^{(a)}/ϵ, u_{k,ij}^{(a)}/ϵ ≥ 1, as can be seen from (<ref>). Therefore, there are no divergences coming from the denominator of (<ref>), and 𝒵_N,M(𝐓,𝐭,m=nϵ,ϵ,-ϵ) is well defined for n∈ℕ. In the following we discuss specific examples of partition functions with the choice of parameters (<ref>) and analyse their symmetries.

§ EXAMPLES: NON-COMPACT BRANE CONFIGURATIONS

§.§ Configuration (N,M)=(2,1)

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=ϵ

We start with the non-compact configuration (N,M)=(2,1), for which the partition function (<ref>) is a sum over a single partition ν_1. For the choice of deformation parameters ϵ_1=-ϵ_2=m=ϵ we can show that the only integer partitions ν_1 contributing to the partition function 𝒵_2,1^line(τ,t_f_1,m=ϵ,ϵ,-ϵ) are in fact ν_1=∅ and ν_1=(1). To see this, we recall that θ_1(τ;0)=0, such that only those partitions ν_1 contribute for which (see (<ref>) for the definitions of z_{ij}^{(a)} and v_{ij}^{(a)})

z_{ij}^{(1)} ≠ 0 and v_{ij}^{(1)} ≠ 0 , ∀ (i,j)∈ν_1 .

Starting from a generic partition ν_1=(ν_{1,1},ν_{1,2},…,ν_{1,ℓ}) of length ℓ, the condition (<ref>) can be checked explicitly. In particular, we can consider the following two particular boxes:

* the last box in the second row (i.e. (i,j)=(2,ν_{1,2})): [Young diagram of shape (7,5,4,4,3,1) with the last box of the second row highlighted.] For this particular box we have v_{2,ν_{1,2}}^{(1)} = ϵ(ν_2-2-ν_2+2) = 0, such that all partitions with ℓ≥2 violate (<ref>) and therefore do not contribute to the partition function (<ref>).

* the (ν_{1,1}-1)-th box in the first row (i.e. (i,j)=(1,ν_{1,1}-1)): Due to the previous constraint the only remaining partitions correspond to Young diagrams with a single row: [Single-row Young diagram of shape (7) with the next-to-last box highlighted.] For this particular box we have z_{1,ν_{1,1}-1}^{(1)} = ϵ(ν_{1,1}-1-(ν_{1,1}-1)) = 0, such that all partitions with ν_{1,1}≥2 violate (<ref>) and do not contribute to the partition function (<ref>).

Combining these two constraints, we find that the only possible choices are ν_1=∅ or ν_1=(1), and the partition function therefore is

𝒵^line_2,1(τ,t_f_1,m=ϵ,ϵ,-ϵ) = ∑_{ν∈{∅,(1)}} (-Q_f_1)^{|ν|} ∏_{(i,j)∈ν} θ_1(τ;z^{(1)}_{ij}) θ_1(τ;v^{(1)}_{ij})/(θ_1(τ;w^{(1)}_{ij}) θ_1(τ;u^{(1)}_{ij})) = 1 - Q_f_1 θ_1(τ;-ϵ)θ_1(τ;-ϵ)/θ_1(τ;ϵ)^2 = 1 - Q_f_1 .

Notice that the right-hand side is independent of τ and ϵ and only depends linearly on Q_f_1.

The partition function (<ref>) can be rewritten in a fashion that makes an 𝔞_1 symmetry manifest. Indeed, upon identifying

Q_f_1 = e^{-α_1} ,

where α_1 is the simple root of 𝔞_1, we can write

𝒵_2,1^line(τ,t_f_1,m=ϵ,ϵ,-ϵ) = ∏_{α∈Δ_+(𝔞_1)} (1-e^{-α})^{mult(α)} ,

with mult(α_1)=1 and Δ_+(𝔞_1) the space of positive roots of 𝔞_1. Using the Weyl character formula, we can rewrite the product (<ref>) as a sum over the Weyl group 𝒲(𝔞_1)≅ℤ_2 of 𝔞_1:

𝒵_2,1^line(τ,t_f_1,m=ϵ,ϵ,-ϵ) = ∑_{w∈𝒲(𝔞_1)} (-1)^{ℓ(w)} e^{w(ξ)-ξ} ,

where ξ is the Weyl vector of 𝔞_1 and ℓ(w) is the length of w∈𝒲(𝔞_1)≅ℤ_2, i.e.
the number of Weyl reflections into which w decomposes. While the re-writings (<ref>) and (<ref>) seem trivial (due to the fact that the root space of 𝔞_1 is one-dimensional, i.e. Δ_+(𝔞_1)={α_1}), we shall see that both equations can be directly generalised to other choices m=nϵ (with n>1) and also to N>2 (as we shall discuss in section <ref>).

§.§.§ Choice ϵ_1=-ϵ_2 and m=nϵ for n>1

For the cases n>1 we can repeat the above analysis to find all partitions that yield a non-vanishing contribution to the partition function (<ref>). In doing so, we find a generic pattern, which can be summarised as follows:[This pattern has explicitly been checked up to n=11 and we conjecture it to hold for generic n∈ℕ.] only those partitions ν_1=(ν_{1,1},ν_{1,2},…,ν_{1,ℓ}) with

ℓ ≤ n and ν_{1,a} ≤ n , ∀ a=1,…,ℓ ,

satisfy (<ref>). As a consequence, we propose that the partition function is a polynomial in Q_f_1 and can be written as the finite sum

𝒵_2,1^line(τ,t_f_1,m=nϵ,ϵ,-ϵ) = ∑_{k=0}^{n^2} (-1)^k c^{(n)}_k(τ,ϵ) Q_f_1^k .

For n>1 the coefficients c_k^{(n)} depend explicitly on τ and ϵ and have the property

c_k^{(n)}(τ,ϵ) = (-1)^n c_{n^2-k}^{(n)}(τ,ϵ) .

Explicit expressions for the first few c_k^{(n)} with the condition[If k does not satisfy (<ref>), the corresponding coefficient is determined by (<ref>).]

k ≤ n^2/2+1 for n even , k ≤ (n^2+1)/2 for n odd ,

are given by (we recall that relations (<ref>) – (<ref>) have in fact been checked explicitly up to n=11):

c_0^{(n)} = 1 ,
c_1^{(n)} = θ(n)^2/θ(1)^2 ,
c_2^{(n)} = 2 θ(n-1)θ(n)^2θ(n+1)/(θ(1)^2θ(2)^2) ,
c_3^{(n)} = θ(n-1)^2θ(n)^2θ(n+1)^2/(θ(1)^4θ(3)^2) + 2 θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)/(θ(1)^2θ(2)^2θ(3)^2) ,
c_4^{(n)} = θ(n-1)^2θ(n)^4θ(n+1)^2/(θ(1)^2θ(2)^4θ(3)^2) + 2 θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)/(θ(1)^4θ(2)^2θ(4)^2) + 2 θ(n-3)θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)θ(n+3)/(θ(1)^2θ(2)^2θ(3)^2θ(4)^2) ,
c_5^{(n)} = θ(n-2)^2θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)^2/(θ(1)^4θ(2)^4θ(5)^2) + 2 θ(n-2)θ(n-1)^2θ(n)^4θ(n+1)^2θ(n+2)/(θ(1)^4θ(2)^2θ(3)^2θ(4)^2) + 2 θ(n-3)θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)θ(n+3)/(θ(1)^4θ(2)^2θ(3)^2θ(5)^2) + 2 θ(n-4)θ(n-3)θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)θ(n+3)θ(n+4)/(θ(1)^2θ(2)^2θ(3)^2θ(4)^2θ(5)^2) ,

where for simplicity we have introduced the shorthand notation

θ(ℓ) := θ_1(τ;ℓϵ) , ∀ ℓ∈ℕ .

While not constant (as in the case of n=1), the coefficients c_k^{(n)} display a clear pattern, which we propose to hold for generic (k,n) satisfying (<ref>): every coefficient can be written in the form

c_k^{(n)}(τ,ϵ) = ∑_{μ(k,n)=(μ_1,…,μ_ℓ), |μ|=k+1} c(μ) θ(n)^{μ_1} ∏_{a=2}^{ℓ} (θ(n-a+1)θ(n+a-1))^{μ_a}/f(μ;ϵ,τ) ,

where the sum is over partitions μ(k,n)=(μ_1(k,n),μ_2(k,n),…,μ_ℓ(k,n)) of length ℓ (with 0≤ℓ≤k+1), and f(μ) is a product of theta functions

f(μ;ϵ,τ) = ∏_{i=1}^{k} θ(i)^{r_i} ,

with r_i∈ℕ_even and r_i≤4, which satisfy

∑_{i=1}^{k} r_i = 2k and μ_1 n^2 + 2 ∑_{a=2}^{ℓ} [n^2+(a-1)^2] μ_a - ∑_{j=1}^{k} r_j j^2 = 2k(n^2-k) .

Here the first condition states that the number of θ_1-functions in the numerator and denominator of (<ref>) is the same, while the second condition ensures that each coefficient c_k^{(n)}(τ,ϵ) transforms in an appropriate manner under SL(2,ℤ)_τ transformations (see (<ref>)). Specifically, we have

c^{(n)}_k(-1/τ, ϵ/τ) = e^{2πi (ϵ^2/τ) k(n^2-k)} c^{(n)}_k(τ,ϵ) .

Thus, we can assign an index under SL(2,ℤ)_τ to each of the c^{(n)}_k:

ℐ_τ(c^{(n)}_k) = k(n^2-k) .

Finally, the c(μ) in (<ref>) are numerical coefficients which take values c(μ)∈{0,1,2}. While the expressions for c_k^{(n)}(τ,ϵ) in (<ref>) are rather complicated, they are essentially determined by specifying all partitions μ(k,n) for which c(μ)≠0. These can be obtained from the partitions μ(k-1,n) in an algorithmic fashion by increasing one of the μ_a(k-1,n) by either 1 or 2.
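Both the selection rule and the coefficients c^{(n)}_k can be checked by brute force: summing the theta-quotient weight of every partition of k (using the arguments z, v, w, u of the M=1 expression with ν_0=ν_2=∅) reproduces c^{(n)}_k, and the reflection property (<ref>) can then be tested numerically. The sketch below is our own check, reusing the theta1 helper introduced earlier; partitions violating ℓ≤n or ν_{1,a}≤n contribute zero automatically through θ_1(τ;0)=0:

```python
def partitions(k):
    """All integer partitions of k as weakly decreasing tuples."""
    def gen(rest, biggest):
        if rest == 0:
            yield ()
        for first in range(min(rest, biggest), 0, -1):
            for tail in gen(rest - first, first):
                yield (first,) + tail
    return gen(k, k)

def transposed(nu, j):
    """nu^t_j: the number of rows of nu with at least j boxes."""
    return sum(1 for row in nu if row >= j)

def c_coeff(tau, eps, n, k):
    """c^{(n)}_k as a sum over partitions nu of k of the product of
    theta_1-quotients, with epsilon_1 = -epsilon_2 = eps and m = n*eps."""
    total = 0.0
    for nu in partitions(k):
        term = 1.0
        for i, row in enumerate(nu, start=1):
            for j in range(1, row + 1):
                z = eps * (row - j - i + 1 - n)    # nu_2 = empty
                v = -eps * (row - j - i + 1 + n)   # nu_0 = empty
                w = eps * (row - j + transposed(nu, j) - i + 1)
                term *= theta1(tau, z) * theta1(tau, v) / theta1(tau, w)**2
        total += term
    return total

tau, eps, n = 0.3 + 1.1j, 0.17, 2
for k in range(n**2 + 1):
    diff = c_coeff(tau, eps, n, k) - (-1)**n * c_coeff(tau, eps, n, n**2 - k)
    assert abs(diff) < 1e-8   # c^(n)_k = (-1)^n c^(n)_{n^2-k}
```

Here we used that in the unrefined M=1 case w^{(1)}_{ij}=u^{(1)}_{ij}, so the denominator collapses to θ_1(τ;w)^2; one also verifies, for instance, c^{(2)}_1 = θ(2)^2/θ(1)^2 in agreement with the list above.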
The precise relation (along with explicit examples up to k=5) is explained in appendix <ref> and can be summarised by the fact that there is an operator R_+ such that

c_k^{(n)}(τ,ϵ) = R_+ c_{k-1}^{(n)}(τ,ϵ) .

Schematically, the action of R_+ can be represented graphically in the following manner:

[Figure: for n even, the coefficients c^{(n)}_0, …, c^{(n)}_{n^2/2-1}, c^{(n)}_{n^2/2}, c^{(n)}_{n^2/2-1}, …, c^{(n)}_0 are arranged on a line, symmetric about the middle entry c^{(n)}_{n^2/2}, with R_+ mapping each entry to its right neighbour.]

[Figure: for n odd, the coefficients c^{(n)}_0, …, c^{(n)}_{(n^2-3)/2}, c^{(n)}_{(n^2-1)/2}, -c^{(n)}_{(n^2-1)/2}, -c^{(n)}_{(n^2-3)/2}, …, -c^{(n)}_0 are arranged on a line, antisymmetric about its midpoint, with R_+ again mapping each entry to its right neighbour.]

This also reflects the symmetry (<ref>). These graphical representations are reminiscent of the highest-weight representation Γ_{n^2} of 𝔰𝔩(2,ℂ), where one can move between the various points (which represent certain one-dimensional functional spaces of theta-quotients) with the help of raising and lowering operators.
In fact, we can make this connection more precise by writing

𝒵_2,1^line(τ,t_f_1,m=nϵ,ϵ,-ϵ) = e^{-n^2 ξ} ∑_{λ=[c]∈P^+_{n^2}} (-1)^c ϕ^n_{[c]}(τ,ϵ) 𝒪^n_λ(t_f_1) ,

where e^{-n^2 ξ} = Q_f_1^{n^2/2} and ξ = α_1/2 = t_f_1/2 can be identified with the Weyl vector of 𝔞_1, following the identification (<ref>). Furthermore, the sum is over all elements of the fundamental Weyl chamber of the representation Γ_{n^2}, which are labelled by their weights λ=[c], i.e.

P^+_{n^2} = {[2c] | c=0,…,n^2/2} = {[0],[2],[4],…,[n^2]} for n even ,
P^+_{n^2} = {[2c+1] | c=0,…,(n^2-1)/2} = {[1],[3],[5],…,[n^2]} for n odd ,

while for the coefficients we have

ϕ^n_{[k]} = c^{(n)}_{(n^2-k)/2} , for [k]∈P^+_{n^2} .

Finally, the 𝒪_λ^n in (<ref>) can be understood as the (normalised) orbits of λ∈P^+_{n^2} under the Weyl group 𝒲(𝔞_1)≅ℤ_2 of 𝔞_1, i.e.

𝒪^n_{λ=[c]}(t_f_1) = d_λ ∑_{w∈𝒲(𝔞_1)} (-1)^{nℓ(w)} e^{w(λ)} = d_λ (Q_f_1^{(n^2-c)/2} + (-1)^n Q_f_1^{(n^2+c)/2}) ,

where we have used the identification (<ref>) and the normalisation factor is given by

d_{λ=[c]} = |Orb_λ(𝒲(𝔞_1))|/|𝒲(𝔞_1)| = 1/2 for c=0 , and 1 otherwise .

Here |Orb_λ(𝒲(𝔞_1))| is the order of the orbit of λ under the Weyl group of 𝔞_1 and |𝒲(𝔞_1)|=|ℤ_2|=2. To summarise, we propose that 𝒵_2,1^line(τ,t_f_1,m=nϵ,ϵ,-ϵ) can be written as a sum over weights of 𝔰𝔩(2,ℂ), whose representatives fall into the fundamental Weyl chamber of the irreducible representation Γ_{n^2}. As we shall see in the following, this pattern continues to hold for the partition functions of other non-compact M-brane configurations (N,1) for N>2.

§.§ Configuration (N,M)=(3,1)

§.§.§ Case ϵ_1=-ϵ_2=ϵ and m=ϵ

The case (N,M)=(3,1) for the choice m=ϵ is analysed in detail in appendix <ref>. Summarising the results, as above only finitely many partitions contribute to 𝒵^line_N,1 in (<ref>); they are given in the following table:

ν_1 | ν_2 | (∏_{a=1}^{2}(-Q_f_a)^{|ν_a|}) ∏_{a=1}^{2} ∏_{(i,j)∈ν_a} θ_1(τ;z^{(a)}_{ij})θ_1(τ;v^{(a)}_{ij})/(θ_1(τ;w^{(a)}_{ij})θ_1(τ;u^{(a)}_{ij}))
∅ | ∅ | 1
(1) | ∅ | -Q_f_1
∅ | (1) | -Q_f_2
(2) | (1) | Q_f_1^2 Q_f_2
(1) | (1,1) | Q_f_1 Q_f_2^2
(2) | (1,1) | -Q_f_1^2 Q_f_2^2

Combining these expressions, we find for the partition function

𝒵^line_3,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = 1 - Q_f_1 - Q_f_2 + Q_f_1^2 Q_f_2 + Q_f_1 Q_f_2^2 - Q_f_1^2 Q_f_2^2 = (1-Q_f_1)(1-Q_f_2)(1-Q_f_1 Q_f_2) .

Notice that this result is independent of τ and ϵ and depends on Q_f_1,2 only in a polynomial fashion. Moreover, the partition function (<ref>) can be rewritten in a fashion that makes an 𝔞_2 symmetry manifest. Indeed, upon defining

Q_f_1 = e^{-α_1} , Q_f_2 = e^{-α_2} , with α_1,2 ∈ Δ_+(𝔞_2) ,

where Δ_+(𝔞_2) denotes the simple positive roots of 𝔞_2, we can write

𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∏_{α∈Δ_+(𝔞_2)} (1-e^{-α})^{mult(α)} .

Here we have used the fact that mult(α_1)=mult(α_2)=1. Using the Weyl character formula, we can rewrite the product (<ref>) as an orbit of the Weyl group:

𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∑_{w∈𝒲(𝔞_2)} (-1)^{ℓ(w)} e^{w(ϱ)-ϱ} ,

where ϱ=α_1+α_2 is the Weyl vector and ℓ(w) is the length of w∈𝒲(𝔞_2), i.e. the number of Weyl reflections into which w decomposes: the Weyl reflections of 𝔞_2 are defined as s_i: γ ⟶ s_i(γ) = γ - ⟨γ,α_i^∨⟩ α_i for i=1,2, where the α_i^∨ are the co-roots associated with α_1,2, i.e. α_i^∨ = 2α_i/(α_i,α_i). They are subject to the relations s_1^2 = s_2^2 = (s_1 s_2)^3 = 1. With this notation we can check (<ref>) by working out all non-equivalent elements w∈𝒲(𝔞_2):

w | w(ϱ)-ϱ | ℓ(w)
1 | 0 | 0
s_1 | -α_1 | 1
s_2 | -α_2 | 1
s_1 s_2 | -2α_1-α_2 | 2
s_2 s_1 | -α_1-2α_2 | 2
s_1 s_2 s_1 | -2α_1-2α_2 | 3

Therefore, (using (<ref>)), we have

𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∑_{w∈𝒲(𝔞_2)} (-1)^{ℓ(w)} e^{w(ϱ)-ϱ} = 1 - Q_f_1 - Q_f_2 + Q_f_1^2 Q_f_2 + Q_f_1 Q_f_2^2 - Q_f_1^2 Q_f_2^2 ,

which indeed matches (<ref>).
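The six-term Weyl sum in the table above can be verified mechanically. The following sympy sketch (our illustration) realises 𝒲(𝔞_2)≅S_3 as permutations of the exponents of e^{L_1}, e^{L_2}, e^{L_3} (the simple weights introduced just below) and expands ∑_w (-1)^{ℓ(w)} e^{w(ϱ)-ϱ} in the Q_f_a, using that a weight aL_1+bL_2+cL_3 with a+b+c=0 corresponds to the monomial Q_f_1^{-a} Q_f_2^{c}:

```python
import sympy as sp
from itertools import permutations

Qf1, Qf2 = sp.symbols('Qf1 Qf2')

def monomial(a, b, c):
    """e^{a L_1 + b L_2 + c L_3} (with a+b+c = 0) in terms of the Q_f's,
    using Q_f1 = e^{L_2 - L_1} and Q_f2 = e^{L_3 - L_2}."""
    assert a + b + c == 0
    return Qf1**(-a) * Qf2**c

def sign(p):
    """Sign of a permutation p, i.e. (-1)^{l(w)} for w in W(a_2) = S_3."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

rho = (1, 0, -1)   # Weyl vector: rho = alpha_1 + alpha_2 = L_1 - L_3
Z = sum(sign(p) * monomial(*(rho[p.index(r)] for r in range(3)))
        for p in permutations(range(3))) / monomial(*rho)
print(sp.expand(Z))
# 1 - Qf1 - Qf2 + Qf1**2*Qf2 + Qf1*Qf2**2 - Qf1**2*Qf2**2
```

The output reproduces (1-Q_f_1)(1-Q_f_2)(1-Q_f_1 Q_f_2), i.e. the Weyl-character rewriting of 𝒵^line_3,1 at n=1.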
Thus the partition function 𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) can be written in the form of a single Weyl orbit of 𝒲(𝔞_2). In view of generalising (<ref>) to the cases m=nϵ for n>1, we prefer to write the action of the Weyl group 𝒲(𝔞_2)≅S_3 in a slightly different and more intuitive manner. To this end we introduce the simple weights (L_1,L_2,L_3) that span the dual of the Cartan subalgebra 𝔥^*_{𝔞_2} (as explained in appendix <ref>) and identify

t_f_a = L_a - L_{a+1} , ∀ a=1,2 ,

which is compatible with (<ref>). Furthermore, we introduce

x_r := e^{L_r} , ∀ r=1,2,3 ,

such that

Q_f_1 = x_2/x_1 and Q_f_2 = x_3/x_2 .

We note that the x_r (r=1,2,3) are not independent, but satisfy x_1 x_2 x_3 = 1 due to the constraint L_1+L_2+L_3=0 (see (<ref>)). Using the latter condition, we can write (<ref>) in the following fashion

𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = x_2 x_3^2 (x_1^2 x_2 - x_1^2 x_3 - x_1 x_2^2 + x_1 x_3^2 + x_2^2 x_3 - x_2 x_3^2) = e^{-ϱ} ∑_{σ∈S_3} sign(σ) x_{σ(1)}^2 x_{σ(2)}^1 x_{σ(3)}^0 ,

where e^{-ϱ} = e^{L_3-L_1} = Q_f_1 Q_f_2 = x_2 x_3^2 (and e^{ϱ} = e^{2L_1+L_2} = x_1^2 x_2). The action of the Weyl group in (<ref>) can also be illustrated graphically by arranging all terms in a weight diagram:

[Figure: the six monomials arranged as the vertices of a hexagon in the plane spanned by (L_1,L_2,L_3), each carrying the sign sign(σ) in blue: e^{ϱ}=x_1^2x_2 (+1), e^{ϱ}Q_f_1=x_1x_2^2 (-1), e^{ϱ}Q_f_2=x_1^2x_3 (-1), e^{ϱ}Q_f_1^2Q_f_2=x_2^2x_3 (+1), e^{ϱ}Q_f_1Q_f_2^2=x_1x_3^2 (+1), e^{ϱ}Q_f_1^2Q_f_2^2=x_2x_3^2 (-1); the central point (the origin) is marked by a red circle.]

This picture indeed illustrates the S_3≅𝒲(𝔞_2) symmetry inherent in 𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ). Finally, before continuing with further examples with m=nϵ for n>1, there are two comments we would like to make:

* The prefactor e^{-ϱ} in (<ref>) simply serves to arrange the various terms in the expansion of 𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) so that they are concentric with respect to the origin of the weight lattice spanned by (L_1,L_2,L_3).

* We can also add the 'central point' e^{-ϱ} (marked by a red circle in the above figure) to the partition function 𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) in (<ref>), since

e^{-ϱ} ∑_{σ∈S_3} sign(σ) x_{σ(1)}^0 x_{σ(2)}^0 x_{σ(3)}^0 = e^{-ϱ}(1-1-1+1+1-1) = 0 .

Therefore, we can write the partition function in the more suggestive form

𝒵_3,1^line(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = e^{-ϱ} ∑_{λ∈P^+_{1,1}} ∑_{w∈𝒲(𝔞_2)} (-1)^{ℓ(w)} e^{w(λ)} ,

where P^+_{1,1} is the fundamental Weyl chamber of the irreducible representation Γ_{1,1} of 𝔞_2:

P^+_{1,1} = {0, ϱ} = {0, 2L_1+L_2} .

As we shall discuss in the following, the form (<ref>) can be generalised to the cases m=nϵ for n>1.

§.§.§ Case ϵ_1=-ϵ_2 and m=2ϵ

Generalising the discussion of the previous subsection to the case ϵ_1=-ϵ_2=ϵ and m=2ϵ, we again find specific conditions for the partitions ν_1,2 in (<ref>) to yield a non-vanishing contribution to the partition function 𝒵^line_3,1(τ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ).
As a consequence, the latter is again polynomial in Q_f_1 and Q_f_2, with highest powers Q_f_1^8 and Q_f_2^8 respectively. However, the coefficient of each term in this polynomial is no longer a constant (i.e. ±1), but rather a quotient of Jacobi theta functions, i.e. schematically

𝒵_3,1^line(τ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ) = ∑_{i,j=0}^{8} Q_f_1^i Q_f_2^j ∏_r θ(a_r(i,j)ϵ)/θ(b_r(i,j)ϵ) , a_r,b_r∈ℤ ,

where the integers a_r(i,j) and b_r(i,j) implicitly depend on i,j. However, as we shall discuss presently, this expression can still be written in a manner that makes the action of 𝔞_2 manifest. To this end, we group together all terms corresponding to a given quotient of theta functions; however, rather than using the variables Q_f_i, we use the variables x_r as introduced in (<ref>). In terms of the monomials Q_f_1^i Q_f_2^j we have

Q_f_1^i Q_f_2^j = e^{-iL_1+(i-j)L_2+jL_3} = x_1^{-i} x_2^{i-j} x_3^{j} , for 0≤i,j≤2n^2 .

The relation L_1+L_2+L_3=0 then implies x_1 x_2 x_3 = 1, which allows us to write a generic monomial Q_f_1^i Q_f_2^j as a polynomial in x_1,2,3 with only positive powers. Specifically, for n=2 we find:

𝒵_3,1^line(τ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ) = x_2^4 x_3^8 { ϕ^2_[4,4] (x_1^8x_2^4 + x_1^8x_3^4 + x_2^8x_3^4 + x_2^8x_1^4 + x_3^8x_1^4 + x_3^8x_2^4) + ϕ^2_[2,5] (x_1^7x_2^5 + x_1^7x_3^5 + x_2^7x_3^5 + x_2^7x_1^5 + x_3^7x_1^5 + x_3^7x_2^5) + ϕ^2_[5,2] (x_1^7x_2^2 + x_1^7x_3^2 + x_2^7x_3^2 + x_2^7x_1^2 + x_3^7x_1^2 + x_3^7x_2^2) + ϕ^2_[0,6] (x_1^6x_2^6 + x_1^6x_3^6 + x_2^6x_3^6) + ϕ^2_[6,0] (x_1^6 + x_2^6 + x_3^6) + ϕ^2_[3,3] (x_1^6x_2^3 + x_1^6x_3^3 + x_2^6x_3^3 + x_2^6x_1^3 + x_3^6x_1^3 + x_3^6x_2^3) + ϕ^2_[1,4] (x_1^5x_2^4 + x_1^5x_3^4 + x_2^5x_3^4 + x_2^5x_1^4 + x_3^5x_1^4 + x_3^5x_2^4) + ϕ^2_[4,1] (x_1^5x_2 + x_1^5x_3 + x_2^5x_3 + x_2^5x_1 + x_3^5x_1 + x_3^5x_2) + ϕ^2_[2,2] (x_1^4x_2^2 + x_1^4x_3^2 + x_2^4x_3^2 + x_2^4x_1^2 + x_3^4x_1^2 + x_3^4x_2^2) + ϕ^2_[0,3] (x_1^3x_2^3 + x_1^3x_3^3 + x_2^3x_3^3) + ϕ^2_[3,0] (x_1^3 + x_2^3 + x_3^3) + ϕ^2_[1,1] (x_1^2x_2 + x_1^2x_3 + x_2^2x_3 + x_2^2x_1 + x_3^2x_1 + x_3^2x_2) + ϕ^2_[0,0] } ,

where the factors ϕ^2_[c_1,c_2](τ,ϵ) depend on τ and ϵ and are given as follows:

ϕ^2_[4,4](τ,ϵ) = 1 ,
ϕ^2_[2,2](τ,ϵ) = -(θ_1(τ;3ϵ)^3 + θ_1(τ;ϵ)^2θ_1(τ;5ϵ))/θ_1(τ;ϵ)^3 ,
ϕ^2_[5,2](τ,ϵ) = ϕ^2_[2,5](τ,ϵ) = θ_1(τ;2ϵ)^2/θ_1(τ;ϵ)^2 ,
ϕ^2_[3,0](τ,ϵ) = ϕ^2_[0,3](τ,ϵ) = -2 θ_1(τ;2ϵ)θ_1(τ;3ϵ)θ_1(τ;4ϵ)/θ_1(τ;ϵ)^3 ,
ϕ^2_[6,0](τ,ϵ) = ϕ^2_[0,6](τ,ϵ) = 2 θ_1(τ;3ϵ)/θ_1(τ;ϵ) ,
ϕ^2_[1,1](τ,ϵ) = -(θ_1(τ;ϵ)θ_1(τ;4ϵ)^2 + θ_1(τ;2ϵ)^2θ_1(τ;5ϵ))/θ_1(τ;ϵ)^3 ,
ϕ^2_[3,3](τ,ϵ) = θ_1(τ;2ϵ)^2θ_1(τ;3ϵ)/θ_1(τ;ϵ)^3 ,
ϕ^2_[0,0](τ,ϵ) = 6 θ_1(τ;3ϵ)θ_1(τ;5ϵ)/θ_1(τ;ϵ)^2 ,
ϕ^2_[4,1](τ,ϵ) = ϕ^2_[1,4](τ,ϵ) = θ_1(τ;2ϵ)θ_1(τ;4ϵ)/θ_1(τ;ϵ)^2 .

The subscripts[The superscript has been added as a reminder of the fact that we are dealing with the case n=2.] are chosen in such a way as to make an action of the Weyl group 𝒲(𝔞_2)≅S_3 of 𝔰𝔩(3,ℂ) on 𝒵_3,1^line(τ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ) (along the lines of (<ref>) for n=1) visible. They can be identified with the Dynkin labels of the irreducible representation Γ_4,4, as we shall explain in the following: as in the case of n=1 (see eq. (<ref>)), the Weyl group 𝒲(𝔞_2) acts as a permutation of the powers of a given monomial in the x_1,2,3:

s_σ(x_1^i x_2^j x_3^k) = x_{σ(1)}^i x_{σ(2)}^j x_{σ(3)}^k , for σ∈S_3 ,

which allows us to describe all monomials multiplying a given ϕ^2_[c_1,c_2] as the Weyl orbit of a single element (see the sketch below for an explicit construction of these orbits). To describe the latter, we introduce the fundamental weights of 𝔞_2

ω_1 = L_1 and ω_2 = L_1+L_2 ,

which serve as a basis for the weight lattice of 𝔞_2 and span the fundamental Weyl chamber.
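These orbits are easy to generate explicitly. A small sympy sketch (our own illustration, written for even n, where the sign factors (sign σ)^n are trivial) builds the orbit of x_1^{c_1+c_2} x_2^{c_2} by collecting its distinct S_3-images; summing distinct monomials only is precisely what the normalisation d_λ=|Orb_λ|/|𝒲| introduced below achieves:

```python
import sympy as sp
from itertools import permutations

x = sp.symbols('x1 x2 x3')

def orbit(c1, c2):
    """Normalised Weyl orbit of the weight [c1, c2]: the sum over S_3 of
    x_{sigma(1)}^{c1+c2} * x_{sigma(2)}^{c2}, where collecting *distinct*
    monomials implements the factor d_lambda = |Orb_lambda| / |W|."""
    monos = {x[p[0]]**(c1 + c2) * x[p[1]]**c2 for p in permutations(range(3))}
    return sp.Add(*monos)

print(orbit(3, 0))  # x1**3 + x2**3 + x3**3          (orbit of length 3)
print(orbit(0, 0))  # 1                              (orbit of length 1)
print(orbit(1, 1))  # the six monomials x_i**2 * x_j (orbit of length 6)
```

For odd n the relative signs (sign σ)^n have to be kept, which is what makes the orbits with c_1=0 or c_2=0 vanish in that case, as discussed further below.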
Concretely, every weight vector can be written as

λ = c_1 ω_1 + c_2 ω_2 , for c_1,c_2∈ℤ .

For example, the Weyl vector is given by ϱ = ω_1+ω_2. In order to illustrate the structure of the partition function 𝒵_3,1^line(τ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ) graphically, we can represent each term in (<ref>) in the weight lattice of 𝔞_2:

[Figure: the weight diagram of the irreducible representation Γ_4,4 of 𝔞_2, drawn in the plane spanned by the directions (L_1,L_2,L_3); each weight carries the corresponding coefficient ϕ^2_[c_1,c_2], the fundamental Weyl chamber spanned by the fundamental weights ω_1, ω_2 is indicated by dashed red lines, and the pattern of coefficients is symmetric under the Weyl group.]

Here we have also indicated the fundamental Weyl chamber (spanned by the fundamental weights ω_1,2) and attributed the factors ϕ^2_[c_1,c_2] accordingly. Comparing with the irreducible representations of 𝔰𝔩(3,ℂ) (see appendix <ref> for a review), we can write the partition function (<ref>) as a sum over the Weyl orbits of the 13 representatives in the fundamental Weyl chamber of the irreducible representation Γ_4,4. Concretely, we have

𝒵_3,1^line(τ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ) = e^{-4ϱ} ∑_{λ=[c_1,c_2]∈P^+_{4,4}} (-1)^{c_1+c_2} ϕ_{[c_1,c_2]}^2(τ,ϵ) 𝒪^2_λ(t_f_1,t_f_2) ,

where the individual (normalised) Weyl orbits are labelled by the Dynkin labels [c_1,c_2] and are given as

𝒪^2_{λ=[c_1,c_2]}(t_f_1,t_f_2) = d_λ ∑_{w∈𝒲(𝔞_2)} e^{w(λ)} = d_λ ∑_{σ∈S_3} x_{σ(1)}^{c_1+c_2} x_{σ(2)}^{c_2} x_{σ(3)}^{0} ,

and the normalisation factor is given by

d_{λ=[c_1,c_2]} = |Orb_λ(𝒲(𝔞_2))|/|𝒲(𝔞_2)| = 1/6 if c_1=c_2=0 ; 1/2 if (c_1=0 or c_2=0) and [c_1,c_2]≠[0,0] ; 1 otherwise ,

where |Orb_λ(𝒲(𝔞_2))| is the order of the orbit of λ under the Weyl group of 𝔞_2 and |𝒲(𝔞_2)|=|S_3|=6. Finally, the following weights of Γ_4,4 are in the fundamental Weyl chamber:

P^+_4,4 = {[0,0],[1,1],[3,0],[0,3],[2,2],[4,1],[1,4],[3,3],[6,0],[0,6],[5,2],[2,5],[4,4]} .

For example, we have explicitly[In order to make contact with the Q_f_1 and Q_f_2 we recall that upon using (<ref>), a given monomial in (<ref>) can be written in the form Q_f_1^i Q_f_2^j = e^{(j-2i)ω_1+(i-2j)ω_2}.]

e^{-4ϱ} 𝒪^2_[3,0](t_f_1,t_f_2) = (1/2) x_2^4 x_3^8 (2x_1^3 + 2x_2^3 + 2x_3^3) = Q_f_1^4 Q_f_2^4 (Q_f_1^{-2}Q_f_2^{-1} + Q_f_1 Q_f_2^{-1} + Q_f_1 Q_f_2^2) .

Here the factors 2 (which cancel d_[3,0]=1/2) are due to the fact that e.g. x_1^3 x_2^0 x_3^0 = x_1^3 x_3^0 x_2^0, such that x_1^3 is invariant under two elements of S_3.
Notice also |Orb_[3,0](𝒲(𝔞_2))|=3.

Before further generalising this discussion to generic m=nϵ for n∈ℕ, there are a few comments we would like to make:

* Comparing (<ref>) to (<ref>), both are structurally very similar, in the sense that they are sums over Weyl orbits whose representatives lie in the fundamental Weyl chamber of a certain irreducible representation of 𝔰𝔩(3,ℂ). However, in the case of (<ref>), each orbit is still multiplied by a non-trivial function which depends on τ and ϵ. Another difference is the fact that the terms in each orbit in (<ref>) come with the same relative sign, due to the absence of the factor (-1)^ℓ(w) which is present in (<ref>).

* The arguments of the theta functions of the individual ϕ^2_[c_1,c_2] are related to the Dynkin labels [c_1,c_2]. Indeed, recall that the ϕ^2_[c_1,c_2] are quotients of Jacobi theta functions, schematically

ϕ^2_[c_1,c_2](τ,ϵ) = ∏_r θ_1(τ;a_r ϵ)/θ_1(τ;b_r ϵ) , with a_r, b_r ∈ ℕ .

Each such quotient has a well-defined index ℐ_τ under the action of SL(2,ℤ)_τ (which was introduced in (<ref>))[Notice that SL(2,ℤ)_τ remains a symmetry of the partition function even after the identification ϵ_1=-ϵ_2=ϵ and m=2ϵ.]

(τ,ϵ) ⟶ (aτ+b/(cτ+d), ϵ/(cτ+d)) , with ([a b; c d]) ∈ SL(2,ℤ)_τ .

Specifically, ℐ_τ is given as

ℐ_τ(ϕ^2_λ=[c_1,c_2]) = 1/2 ∑_r (a_r^2 - b_r^2) ,

which is related to the weight λ=[c_1,c_2] of ϕ^2_[c_1,c_2] through

ℐ_τ(ϕ^2_λ=[c_1,c_2]) = (4ξ, 4ξ) - (λ, λ) = 16 - 1/3 (c_1^2 + c_1c_2 + c_2^2) .

Here (.,.) stands for the inner product in the basis (ω_1,ω_2).

§.§.§ Case ϵ_1=-ϵ_2 and m=nϵ for generic n∈ℕ

The results of the previous two subsections show an emergent pattern, which can be generalised directly and which we conjecture[We have indeed verified the results further up to n=6.] to hold for generic n∈ℕ: for n a (finite) integer, only a finite number of partitions ν_1,2 can contribute to the partition function (<ref>). Therefore 𝒵_3,1^line(τ,t_f_1,t_f_2,m=nϵ,ϵ,-ϵ) is polynomial in the parameters Q_f_1 and Q_f_2, with highest powers Q_f_1^2n^2 and Q_f_2^2n^2. Each monomial Q_f_1^i Q_f_2^j is multiplied by a quotient of Jacobi theta functions that depends on τ and ϵ. Specifically, we can write, in a similar fashion as in (<ref>),

𝒵_3,1^line(τ,m=nϵ,t_f_1,t_f_2,ϵ,-ϵ) = ∑_i,j=0^2n^2 Q_f_1^i Q_f_2^j ∏_r θ(a_r(i,j)ϵ)/θ(b_r(i,j)ϵ) , a_r, b_r ∈ ℤ .

Using the same notation as in the previous subsection, we propose that we can re-write the partition function in the following manner:

𝒵_3,1^line(τ,t_f_1,t_f_2,m=nϵ,ϵ,-ϵ) = e^-n^2ξ ∑_λ=[c_1,c_2]∈P^+_n^2,n^2 (-1)^c_1+c_2 ϕ^n_[c_1,c_2](τ,ϵ) 𝒪^n_λ(t_f_1,t_f_2) .

Here 𝒪^n_λ(t_f_1,t_f_2) denotes the following normalised orbits of the Weyl group 𝒲(𝔞_2)≅S_3 (with d_λ defined in (<ref>)):

𝒪^n_λ=[c_1,c_2](t_f_1,t_f_2) = d_λ ∑_w∈𝒲(𝔞_2) (-1)^nℓ(w) e^w(λ) = d_λ ∑_σ∈S_3 (sign(σ))^n x_σ(1)^c_1+c_2 x_σ(2)^c_2 x_σ(3)^0 ,

while the representatives λ fit into the irreducible representation Γ_n^2,n^2 of 𝔰𝔩(3,ℂ) (see appendix <ref> for further information and notation) and are chosen from the fundamental Weyl chamber, i.e.

P^+_n^2,n^2 = {[r-s,r+2s] | r=0,…,n^2 and s=0,…,min(r,n^2-r)} ∪ {[r+2s,r-s] | r=1,…,n^2 and s=1,…,min(r,n^2-r)}
= {[0,0], [1,1], [3,0], [0,3], [2,2], …, [n^2-2,n^2+1], [n^2+1,n^2-2], [n^2,n^2]} .

Finally, the ϕ^n_[c_1,c_2](τ,ϵ) are quotients of theta functions, and the first few of them are given explicitly in appendix <ref>. These expressions are compatible with (<ref>): notice in particular the appearance of the numerical overall factors 2 for the weights [k,0] and [0,k] (for k∈ℕ) or 6 for the weight [0,0], e.g.
ϕ^n_[n^2-4,n^2-1]|_n=2 = ϕ^2_[0,3] = -2 θ(1)θ(3)θ(4)/θ(1)^3 , or ϕ^n_[n^2-4,n^2-4]|_n=2 = ϕ^2_[0,0] = 6 θ(1)θ(5)/θ(1)^2 ,

which agree with (<ref>) and compensate the factor d_λ for the cases |Orb_λ(𝒲(𝔞_2))|<6, in order to avoid overcounting. Furthermore, just as in the case n=2 in (<ref>), the functions ϕ^n_[c_1,c_2] can be assigned an index under the SL(2,ℤ)_τ action defined in (<ref>):

ℐ_τ(ϕ^n_λ=[c_1,c_2]) = (n^2ξ, n^2ξ) - (λ,λ) = n^4 - 1/3 (c_1^2 + c_1c_2 + c_2^2) .

The structure of (<ref>) can be made more transparent by arranging all terms on the weight lattice of 𝔞_2, as shown in figure <ref>. Here the red lines indicate the fundamental Weyl chamber and we have attached the coefficients to each weight respectively. In this way the symmetry under the Weyl group is made manifest.

We notice, however, that for n odd, the weights [c_1,c_2] with c_1=0 or c_2=0 do not contribute to the partition function. Indeed, in these cases we have

𝒪^n_[c,0](t_f_1,t_f_2) = 0 = 𝒪^n_[0,c](t_f_1,t_f_2) , for c∈ℕ and n∈ℕ_odd ,

due to the sign factors sign(σ) in the definition (<ref>).[The vanishing is due to the same mechanism which leads to (<ref>) for n=1.]

In order to further elucidate the connection between 𝒵_3,1^line(τ,t_f_1,t_f_2,m=nϵ,ϵ,-ϵ) and the irreducible representation Γ_n^2,n^2 of 𝔞_2, we remark on another property of the ϕ^n_λ(τ,ϵ) in (<ref>). As explained in appendix <ref>, the weight diagram of the representation Γ_n^2,n^2 is made up of concentric hexagons whose weight spaces share the same multiplicity. Thus, one would expect that the quotients of theta functions ϕ^n_λ are elements of a vector space of functions whose dimension corresponds to the latter multiplicity. Concretely, we expect:

multiplicity 1: {[n^2-2r, n^2+r] | r=0,…,⌊n^2/2⌋} ∪ {[n^2+r, n^2-2r] | r=1,…,⌊n^2/2⌋}
multiplicity 2: {[n^2-1-2r, n^2-1+r] | r=0,…,⌊(n^2-1)/2⌋} ∪ {[n^2-1+r, n^2-1-2r] | r=1,…,⌊(n^2-1)/2⌋}
multiplicity 3: {[n^2-2-2r, n^2-2+r] | r=0,…,⌊(n^2-2)/2⌋} ∪ {[n^2-2+r, n^2-2-2r] | r=1,…,⌊(n^2-2)/2⌋}
⋮
multiplicity k: {[n^2-(k-1)-2r, n^2-(k-1)+r] | r=0,…,⌊(n^2-(k-1))/2⌋} ∪ {[n^2-(k-1)+r, n^2-(k-1)-2r] | r=1,…,⌊(n^2-(k-1))/2⌋}

Comparing with the explicit expressions (<ref>), we find that the functions ϕ^n_λ with weights λ=[c_1,c_2] that are expected to be of multiplicity k∈ℕ according to the above table are indeed linear combinations of theta-quotients of the following type:

𝒮_k = { θ(n-p)^a^(1)_p θ(n-p+1)^a^(1)_p-1 … θ(n)^a^(1)_0 θ(n+p-1)^a^(1)_p-1 θ(n+p)^a^(1)_p / [θ(1)^b^(1)_1 θ(2)^b^(1)_2 … θ(p-1)^b^(1)_p-1] ,
θ(n+2-p)^a^(2)_2-p θ(n+4-p)^a^(2)_4-p θ(n+5-p)^a^(2)_5-p … θ(n+p-4)^a^(2)_p-4 θ(n+p-3)^a^(2)_p-3 / [θ(1)^b^(2)_1 θ(2)^b^(2)_2 … θ(p-2)^b^(2)_p-2]
+ θ(n+3-p)^a^(2)_p-3 θ(n+4-p)^a^(2)_p-4 … θ(n+p-5)^a^(2)_5-p θ(n+p-4)^a^(2)_4-p θ(n+p-2)^a^(2)_2-p / [θ(1)^b^(2)_1 θ(2)^b^(2)_2 … θ(p-2)^b^(2)_p-2] ,
θ(n+2-p)^a^(3)_2-p θ(n+5-p)^a^(3)_5-p θ(n+6-p)^a^(3)_6-p … θ(n+p-3)^a^(3)_p-3 θ(n+p-4)^a^(3)_p-4 / [θ(1)^b^(3)_1 θ(2)^b^(3)_2 … θ(p-3)^b^(3)_p-3]
+ θ(n+4-p)^a^(3)_p-4 θ(n+3-p)^a^(3)_p-3 … θ(n+p-6)^a^(3)_6-p θ(n+p-5)^a^(3)_5-p θ(n+p-2)^a^(3)_2-p / [θ(1)^b^(3)_1 θ(2)^b^(3)_2 … θ(p-3)^b^(3)_p-3] ,
… ,
θ(n+2-p)^a^(k)_2-p θ(n+2+k-p)^a^(k)_2+k-p θ(n+3+k-p)^a^(k)_3+k-p … θ(n+p-k-1)^a^(k)_p-k-1 / [θ(1)^b^(k)_1 θ(2)^b^(k)_2 … θ(p-k)^b^(k)_p-k]
+ θ(n+k+1-p)^a^(k)_p-k-1 … θ(n+p-k-3)^a^(k)_3+k-p θ(n+p-k-2)^a^(k)_2+k-p θ(n+p-2)^a^(k)_2-p / [θ(1)^b^(k)_1 θ(2)^b^(k)_2 … θ(p-k)^b^(k)_p-k] } ,

where p=2n^2+1-c_1-c_2 and a^(r)_i, b^(r)_j ∈ ℕ. Thus, according to the grouping in (<ref>), the ϕ^n_λ are indeed elements of a space of functions 𝒮_k whose dimension k matches the expected multiplicity.
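Both the counting of representatives and the shell structure can be checked mechanically; a small sketch (the helper names P_plus and shell, and the closed-form shell number derived from the parametrisation above, are ours):

def P_plus(n):
    """Fundamental-chamber weights of Gamma_{n^2,n^2}, per the parametrisation in (<ref>)."""
    m = n * n
    W = {(r - s, r + 2*s) for r in range(m + 1) for s in range(min(r, m - r) + 1)}
    W |= {(r + 2*s, r - s) for r in range(1, m + 1) for s in range(1, min(r, m - r) + 1)}
    return sorted(W)

def shell(c1, c2, n):
    """Hexagonal shell number k (the expected multiplicity); k=1 is the outermost shell."""
    lo, hi = sorted((c1, c2))
    return n*n + 1 - hi + (hi - lo) // 3   # (hi-lo) is always divisible by 3

assert len(P_plus(2)) == 13                # the 13 representatives listed above
assert shell(4, 4, 2) == 1 and shell(0, 3, 2) == 3 and shell(0, 0, 2) == 5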
This is a further indication that the partition functions 𝒵_3,1^line(τ,t_f_1,t_f_2,m=nϵ,ϵ,-ϵ) can be arranged according to the irreducible representation Γ_n^2,n^2 of 𝔰𝔩(3,ℂ) for n∈ℕ.

Finally, we would like to comment on the relation between 𝒵_3,1^line(τ,m=nϵ,t_f_1,t_f_2,ϵ,-ϵ) and 𝒵_2,1^line(τ,m=nϵ,t_f_1,ϵ,-ϵ) from the point of view of the representation theory of 𝔰𝔩(3,ℂ) and 𝔰𝔩(2,ℂ), respectively. Starting from the highest weight [n^2,n^2] of Γ_n^2,n^2 and acting with only a single root produces a highest weight representation of 𝔰𝔩(2,ℂ): the functions

{ϕ^n_[n^2,n^2], ϕ^n_[n^2+1,n^2-2], ϕ^n_[n^2+2,n^2-4], …, ϕ^n_[n^2+⌊n^2/2⌋, n^2-2⌊n^2/2⌋]}

exactly correspond to the c_r^(n) defined in (<ref>) – (<ref>), which appear in the expansion of 𝒵_2,1^line(τ,t_f_1,m=nϵ,ϵ,-ϵ) in (<ref>) and which, as we already argued in section <ref>, follow the irreducible representations Γ_n^2 of 𝔰𝔩(2,ℂ). From a physical perspective, acting with only a single root on the highest weight [n^2,n^2] amounts to setting Q_f_2→0, and thus reducing the M-brane web configuration (N,M)=(3,1) to (2,1) by decoupling one of the M5-branes.

§.§ Configurations (N,M)=(4,1) and (5,1)

We can repeat the above analysis for (N,M)=(4,1) and (5,1). For simplicity, we restrict ourselves to the case m=ϵ. In the former case, the partition function (<ref>) contains a sum over non-trivial partitions (ν_1,ν_2,ν_3), and the relevant contributions are given by

𝒵^line_4,1(τ,t_f_1,t_f_2,t_f_3,m=ϵ,ϵ,-ϵ) = ∑_ν_1,ν_2,ν_3 f_(4,1)^(ν_1,ν_2,ν_3)(τ,t_f_1,t_f_2,t_f_3,ϵ) = ∑_ν_1,ν_2,ν_3 (∏_a=1^3 (-Q_f_a)^|ν_a|) ∏_(i,j)∈ν_a θ_1(τ;z^a_ij)θ_1(τ;v^a_ij)/[θ_1(τ;w^a_ij)θ_1(τ;u^a_ij)] ,

with the non-vanishing contributions tabulated as follows (partitions written as tuples):

(ν_1,ν_2,ν_3) → f_(4,1)^(ν_1,ν_2,ν_3):
(∅,∅,∅) → 1 ; ((1),∅,∅) → -Q_f_1 ; (∅,(1),∅) → -Q_f_2 ; (∅,∅,(1)) → -Q_f_3 ;
((1),∅,(1)) → Q_f_1Q_f_3 ; ((2),(1),∅) → Q_f_1^2Q_f_2 ; ((1),(2),∅) → Q_f_1Q_f_2^2 ; (∅,(2),(1)) → Q_f_2^2Q_f_3 ; (∅,(1),(1,1)) → Q_f_2Q_f_3^2 ;
(∅,(2),(1,1)) → -Q_f_2^2Q_f_3^2 ; ((2),(1,1),∅) → -Q_f_1^2Q_f_2^2 ; ((2),(1),(1,1)) → -Q_f_1^2Q_f_2Q_f_3^2 ; ((1),(2,1),(1)) → -Q_f_1Q_f_2^3Q_f_3 ;
((1,1),(1,1),(1,1,1)) → -Q_f_1^2Q_f_2^2Q_f_3^3 ; ((3),(2),(1)) → -Q_f_1^3Q_f_2^2Q_f_3 ; ((1),(2,1),(1,1,1)) → Q_f_1Q_f_2^3Q_f_3^3 ; ((2),(1,1),(1,1,1)) → Q_f_1^2Q_f_2^2Q_f_3^3 ; ((3),(2),(1,1)) → Q_f_1^3Q_f_2^2Q_f_3^2 ; ((3),(2,1),(1)) → Q_f_1^3Q_f_2^3Q_f_3 ; ((2),(2,2),(1,1)) → Q_f_1^2Q_f_2^4Q_f_3^2 ;
((2),(2,2),(1,1,1)) → -Q_f_1^2Q_f_2^4Q_f_3^3 ; ((3),(2,1),(1,1,1)) → -Q_f_1^3Q_f_2^3Q_f_3^3 ; ((3),(2,2),(1,1)) → -Q_f_1^3Q_f_2^4Q_f_3^2 ; ((3),(2,2),(1,1,1)) → Q_f_1^3Q_f_2^4Q_f_3^3 .

Combining all these contributions, we find for the partition function

𝒵^line_4,1(τ,t_f_1,t_f_2,t_f_3,m=ϵ,ϵ,-ϵ) = (1-Q_f_1)(1-Q_f_2)(1-Q_f_3)(1-Q_f_1Q_f_2)(1-Q_f_2Q_f_3)(1-Q_f_1Q_f_2Q_f_3) ,

which is polynomial in Q_f_1,2,3 and invariant under the exchange Q_f_1↔Q_f_3. By making the following identifications with the simple roots of 𝔰𝔩(4,ℂ),

Q_f_1 = e^-α_1 , Q_f_2 = e^-α_2 , Q_f_3 = e^-α_3 ,

we can write (<ref>) as

𝒵^line_4,1(τ,m=ϵ,t_f_1,t_f_2,t_f_3,ϵ,-ϵ) = ∏_α∈Δ_+(𝔞_3) (1-e^-α)^mult(α) .

As before, this can be rewritten, using the Weyl denominator formula, as a sum over the Weyl group of the corresponding root lattice:

𝒵^line_4,1(τ,m=ϵ,t_f_1,t_f_2,t_f_3,ϵ,-ϵ) = ∑_w∈𝒲(𝔞_3) (-1)^ℓ(w) e^w(ξ)-ξ ,

where ξ = 3/2 α_1 + 2α_2 + 3/2 α_3 is the Weyl vector of 𝔰𝔩(4,ℂ). In a similar fashion as in the previous section, we can give a graphical representation of the partition function by arranging its various terms on the weight lattice of 𝔞_3 (see figure <ref>).
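The product–sum equality (<ref>) can also be cross-checked symbolically. The following is a minimal sketch (Python with sympy assumed available; the orthogonal-basis realisation of the 𝔞_3 roots and all variable names are our own) verifying the Weyl denominator formula for 𝒲(𝔞_3)≅S_4:

from itertools import combinations, permutations
from sympy import Rational, expand, prod, symbols
from sympy.combinatorics import Permutation

x = symbols('x1:5')                         # e^{-e_i} <-> x_i
xi = [Rational(3, 2), Rational(1, 2), Rational(-1, 2), Rational(-3, 2)]  # Weyl vector of sl(4)

# product over the positive roots e_i - e_j (i < j), with e^{-alpha} = x_i / x_j
lhs = prod(1 - x[i]/x[j] for i, j in combinations(range(4), 2))
# signed sum over W(a_3) = S_4: e^{w(xi) - xi} has x_j-exponent xi_j - xi_{s(j)}
rhs = sum(Permutation(s).signature()
          * prod(x[j]**(xi[j] - xi[s[j]]) for j in range(4))
          for s in permutations(range(4)))
assert expand(lhs - rhs) == 0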
This presentation of the partition function indeed resembles a highest weight representation of 𝔞_3≅𝔰𝔩(4,ℂ). We have also performed checks for n>1: in all cases the partition function still has the structure of irreducible 𝔰𝔩(4,ℂ) representations.

In the case (N,M)=(5,1), the partition function is a sum over four partitions (ν_1,ν_2,ν_3,ν_4). Analysing the individual contributions, we find that the partition function can be written as

𝒵^line_5,1(τ,t_f_1,t_f_2,t_f_3,t_f_4,m=ϵ,ϵ,-ϵ) = (1-Q_f_1)(1-Q_f_2)(1-Q_f_3)(1-Q_f_4)(1-Q_f_1Q_f_2)(1-Q_f_2Q_f_3)(1-Q_f_3Q_f_4)(1-Q_f_1Q_f_2Q_f_3)(1-Q_f_2Q_f_3Q_f_4)(1-Q_f_1Q_f_2Q_f_3Q_f_4) = ∏_α∈Δ_+(𝔞_4) (1-e^-α) ,

where we used

Q_f_i = e^-α_i , i=1,2,3,4 .

As in the previous cases, this can be rewritten as

𝒵^line_5,1(τ,t_f_1,t_f_2,t_f_3,t_f_4,m=ϵ,ϵ,-ϵ) = ∑_w∈𝒲(𝔞_4) (-1)^ℓ(w) e^w(ξ)-ξ ,

where ξ = 2α_1+3α_2+3α_3+2α_4 is the Weyl vector of 𝔞_4.

Comparing (<ref>), (<ref>), (<ref>) and (<ref>), we conjecture the following pattern:

𝒵^line_N,1(τ,t_f_1,…,t_f_N-1,m=ϵ,ϵ,-ϵ) = ∏_I=1^N-1 [∏_a=1^I (1-∏_b=a^N-1-I+a Q_f_b)] = [∏_a=1^N-1 (1-Q_f_a)] [∏_b=1^N-2 (1-Q_f_bQ_f_b+1)] … [∏_c=1^2 (1-Q_f_cQ_f_c+1…Q_f_c+N-3)] (1-Q_f_1…Q_f_N-1) ,

which is independent of τ and ϵ.

§.§ Configuration (N,M)=(2,2)

After discussing examples of partition functions for non-compact configurations (N,M) with M=1, we can generalise the analysis to cases with M>1. We recall that the latter correspond to brane configurations with M5-branes probing a transverse ℤ_M orbifold background (i.e. an ALE_A_M-1 space). The simplest such configuration is (N,M)=(2,2), i.e. two M5-branes probing a transverse ALE_A_1 space.

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=ϵ

We begin with the specific choice ϵ_1=-ϵ_2=m=ϵ for the deformation parameters. Analysing all integer partitions that may contribute to (<ref>) in this case, we find

𝒵^line_2,2(τ,T_1,t_f_1,m=ϵ,ϵ,-ϵ) = Q_f_1 [Q_f_1^-1 ϕ^1,2_[2](τ,T_1,ϵ) - ϕ^1,2_[0](τ,T_1,ϵ) + Q_f_1 ϕ^1,2_[2](τ,T_1,ϵ)] ,

where T̃_1 was defined in (<ref>) and

ϕ^n=1,M=2_[2](τ,T_1,ϵ) = 1 , and ϕ^n=1,M=2_[0](τ,T_1,ϵ) = 2 θ_1(τ;T̃_1-ϵ)θ_1(τ;T̃_1+ϵ)/θ_1(τ;T̃_1)^2 .

Here we have added an additional superscript M in order to distinguish these coefficients from their counterparts with M=1 defined in (<ref>). Moreover, similar to the configuration (N,M)=(2,1), we can write (<ref>) as a sum over Weyl orbits of representatives in the fundamental Weyl chamber of the irreducible representation Γ_2 of 𝔰𝔩(2,ℂ). Indeed, similar to (<ref>), we can write

𝒵_2,2^line(τ,T_1,t_f_1,m=ϵ,ϵ,-ϵ) = e^-2ξ ∑_λ=[c]∈P^+_2 (-1)^1-c/2 ϕ^1,2_λ(τ,T_1,ϵ) 𝒪^1,2_λ(t_f_1) ,

where e^-2ξ = Q_f_1 and P^+_2 corresponds to the 𝔰𝔩(2,ℂ) weights in the fundamental Weyl chamber of Γ_2, i.e. P^+_2 = {[0],[2]}. Furthermore, we have the following definition of the Weyl orbits:

𝒪_λ=[c]^n=1,M=2(t_f_1) = d_λ ∑_w∈𝒲(𝔞_1) e^w(λ) = d_λ (Q_f_1^-c/2 + Q_f_1^c/2) ,

where d_λ was defined in (<ref>). Comparing 𝒪_λ=[c]^n=1,M=2(t_f_1) to its counterpart for M=1 and n=1 defined in (<ref>), we notice that there is no relative sign between the two terms, due to the absence of the factor (-1)^ℓ(w). We can represent 𝒵_2,2^line(τ,t_f_1,m=ϵ,ϵ,-ϵ) schematically in the following weight diagram:

[Weight diagram: the three weights [-2], [0], [2] of Γ_2 on the 𝔞_1 weight line, carrying the coefficients ϕ^1,2_[2], ϕ^1,2_[0] and ϕ^1,2_[2], respectively.]

The coefficient functions ϕ^1,2_λ transform in a particular manner under modular transformations with respect to τ (generalising the action of SL(2,ℤ) to the case M>1 as in (<ref>)).
Specifically, we have

ϕ^1,2_[c](-1/τ, T̃_1/τ, ϵ/τ) = e^2πi ϵ^2 ℐ_τ/τ ϕ^1,2_[c](τ,T̃_1,ϵ) ,

where for λ=[c] we have

ℐ_τ(ϕ^1,2_λ=[c]) = (2ξ, 2ξ) - (λ,λ) = 1 - 1/4 c^2 ,

where (.,.) denotes the inner product in the fundamental weight basis {ω_1} of 𝔞_1. Generalising (<ref>), we call ℐ_τ the index of ϕ^1,2_λ under SL(2,ℤ)_τ. We point out in particular that the phase factor in (<ref>) is independent of T_1 and only depends on ϵ.

Before generalising the above discussion to the cases m=nϵ for n∈ℕ, we would like to make a further remark: in section <ref> we argued that the simplification of the partition function 𝒵_N,M=1 for m=nϵ and ϵ_1=-ϵ_2=ϵ is due to the fact that the Spin(8) holonomy charges are no longer linearly independent over ℤ. Therefore, there are possible cancellations among contributions with different charges with respect to U(1)_m and U(1)_ϵ_1×U(1)_ϵ_2. For M>1, the same simplifications take place in the untwisted sector of the orbifold action (<ref>), leading to similar simplifications of the partition function, as is indeed showcased in (<ref>). However, along the same line of reasoning, identifying T̃_1=kϵ for k∈ℕ should lead to further cancellations among different contributions in the partition function. Indeed, setting T̃_1=ϵ in (<ref>) we get ϕ^1,2_[0](τ,ϵ,ϵ)=0, such that

𝒵^line_2,2(τ,T̃_1=ϵ,t_f_1,ϵ,-ϵ) = 1 + Q_f_1^2 .

This choice of parameters is still compatible with the SL(2,ℤ)_τ transformation (<ref>).

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=nϵ for n>1

Generalising the discussion of the previous subsubsection, for m=nϵ with n>1 the partition function can schematically be written in the following form:[We have checked this expression explicitly up to n=10 and conjecture that it holds in general.]

𝒵^line_2,2(τ,T_1,t_f_1,m=nϵ,ϵ,-ϵ) = ∑_i=0^2n^2 Q_f_1^i ∏_r θ_1(τ;a_1,r ϵ) θ_1(τ;a_2,r ϵ + T̃_1) / [θ_1(τ;b_1,r ϵ) θ_1(τ;b_2,r ϵ + T̃_1)] .

Analogously to the previous cases, we propose that the partition function (<ref>) can be written by summing the Weyl orbits for the weights in the fundamental Weyl chamber P^+_2n^2 of the irreducible representation Γ_2n^2 of 𝔰𝔩(2,ℂ):

𝒵_2,2^line(τ,T_1,t_f_1,m=nϵ,ϵ,-ϵ) = e^-2n^2ξ ∑_λ=[c]∈P^+_2n^2 (-1)^n^2-c/2 ϕ^n,M=2_[c](τ,T_1,ϵ) 𝒪_λ^n,M=2(t_f_1) ,

where e^-2n^2ξ = Q_f_1^n^2, P^+_2n^2 = {[2k] | k=0,…,n^2}, and the Weyl orbits 𝒪_λ^n,M=2(t_f_1) are defined as

𝒪_λ=[c]^n,M=2(t_f_1) = d_λ ∑_w∈𝒲(𝔞_1) (-1)^Mnℓ(w) e^w(λ) = d_λ (Q_f_1^-c/2 + Q_f_1^c/2) ,

which is equivalent to (<ref>), since (-1)^Mnℓ(w)=1 for M=2 and n∈ℤ. Furthermore, the first few coefficient functions ϕ^n,M=2_[c](τ,T_1,ϵ) are given by (for λ=[c] with c≥0)

ϕ^n,2_[2n^2] = 1 , ϕ^n,2_[2(n^2-1)] = 2 θ(n)^2 θ(T̃_1-n) θ(T̃_1+n) / [θ(1)^2 θ(T̃_1)^2] ,
ϕ^n,2_[2(n^2-2)] = θ(n)^4 θ(T̃_1-n)^2 θ(T̃_1+n)^2 / [θ(1)^4 θ(T̃_1-1)^2 θ(T̃_1+1)^2]
+ 2 θ(n-1)θ(n)^2θ(n+1) θ(T̃_1-n-1)θ(T̃_1-n)θ(T̃_1+n-1)θ(T̃_1+n) / [θ(1)^2θ(2)^2 θ(T̃_1)^2θ(T̃_1+1)^2]
+ 2 θ(n-1)θ(n)^2θ(n+1) θ(T̃_1-n)θ(T̃_1-n+1)θ(T̃_1+n)θ(T̃_1+n+1) / [θ(1)^2θ(2)^2 θ(T̃_1)^2θ(T̃_1+1)^2] .

Generalising (<ref>) and using the notation (<ref>), the index of the theta-function ratios is

ℐ_τ(ϕ^n,2_[c]) = 1/2 ∑_r (a_1,r^2 + a_2,r^2 - b_1,r^2 - b_2,r^2) = (2n^2ξ, 2n^2ξ) - (λ,λ) = n^4 - 1/4 c^2 .

Finally, as for the case n=1, there are additional cancellations in the partition function once we set T̃_1=kϵ (with k∈ℕ) to be an integer multiple of ϵ. Notice, however, that for k<n the partition function 𝒵_2,2^line(τ,T_1=kϵ,t_f_1,m=nϵ,ϵ,-ϵ) appears to diverge, due to the fact that theta functions in the denominator vanish. The choice k=n provides the simplest expression, in the sense that certain ϕ^n,M=2_λ vanish.
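A quick numerical illustration of these cancellations (a minimal sketch; the truncated product representation of θ_1 and all variable names are ours): since θ_1(τ;0)=0, the coefficient ϕ^n,2_[2(n^2-1)] from (<ref>) vanishes identically once T̃_1=nϵ.

import cmath, math

def theta1(tau, z, nmax=40):
    """Jacobi theta_1 via its triple-product form, q = e^{2 pi i tau}, truncated at nmax."""
    q = cmath.exp(2j * math.pi * tau)
    y = cmath.exp(2j * math.pi * z)
    val = 2 * q**0.125 * cmath.sin(math.pi * z)
    for k in range(1, nmax + 1):
        val *= (1 - q**k) * (1 - q**k * y) * (1 - q**k / y)
    return val

def phi_n2_second(tau, T1, eps, n):
    """2 theta(n)^2 theta(T1-n) theta(T1+n) / (theta(1)^2 theta(T1)^2), cf. (<ref>)."""
    t = lambda z: theta1(tau, z)
    return 2 * t(n*eps)**2 * t(T1 - n*eps) * t(T1 + n*eps) / (t(eps)**2 * t(T1)**2)

print(abs(phi_n2_second(0.41j, 2*0.13, 0.13, 2)))   # 0.0: theta_1 vanishes at T1 - n*eps = 0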
Schematically, the vanishing coefficient functions can be shown in the following weight diagram of 𝔰𝔩(2,ℂ):

[Weight diagram of 𝔰𝔩(2,ℂ) for T̃_1=nϵ: the outermost weights carry ϕ^n,2_[2n^2]; the coefficients ϕ^n,2_[2(n^2-1)],…,ϕ^n,2_[2(n^2-n+1)] vanish; ϕ^n,2_[2(n^2-n)],…,ϕ^n,2_[2] remain; and the central coefficient ϕ^n,2_[0] vanishes for n odd but is non-zero for n even.]

The vanishing theta-quotients correspond to the following powers of Q_f_1 in the partition function: Q_f_1, Q_f_1^2, …, Q_f_1^n, as well as Q_f_1^2n^2-n, Q_f_1^2n^2-n+1, …, Q_f_1^2n^2-1, while for odd n the power Q_f_1^n^2 vanishes as well.

§.§ Configuration (N,M)=(3,2)

We can analyse the configuration (N,M)=(3,2) in a similar fashion. The latter corresponds to a brane web with 3 M5-branes probing a transverse ALE_A_1 space.

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=ϵ

We again begin with the case m=ϵ. In order to write the partition function, we use the same notation as in section <ref>. In particular, we use the variables x_1,2,3 as defined in (<ref>) to write

𝒵_3,2^line(τ,T_1,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = x_2^2 x_3^4 { ϕ^1,2_[2,2](τ,T_1,ϵ) (x_1^4x_2^2 + x_1^4x_3^2 + x_2^4x_3^2 + x_2^4x_1^2 + x_3^4x_1^2 + x_3^4x_2^2) + ϕ^1,2_[3,0](τ,T_1,ϵ) (x_1^3+x_2^3+x_3^3) + ϕ^1,2_[0,3](τ,T_1,ϵ) (x_1^3x_2^3+x_2^3x_3^3+x_1^3x_3^3) + ϕ^1,2_[1,1](τ,T_1,ϵ) (x_1^2x_2 + x_1^2x_3 + x_2^2x_3 + x_2^2x_1 + x_3^2x_1 + x_3^2x_2) + ϕ^1,2_[0,0] } ,

where x_2^2x_3^4 = e^-2ξ = Q_f_1^2 Q_f_2^2, with ξ the Weyl vector of 𝔞_2, and the ϕ^1,2_[c_1,c_2](τ,T_1,ϵ) are defined as follows:

ϕ^1,2_[2,2] = 1 , ϕ^1,2_[1,1] = [θ_1(T̃_1+2ϵ)θ_1(T̃_1-ϵ)^2 + θ_1(T̃_1-2ϵ)θ_1(T̃_1+ϵ)^2]/θ_1(T̃_1)^3 ,
ϕ^1,2_[0,0] = -6 θ_1(T̃_1-2ϵ)θ_1(T̃_1+2ϵ)/θ_1(T̃_1)^2 , ϕ^1,2_[3,0] = ϕ^1,2_[0,3] = -2 θ_1(T̃_1-ϵ)θ_1(T̃_1+ϵ)/θ_1(T̃_1)^2 .

As in section <ref>, the polynomials in x_1,2,3 in (<ref>) resemble orbits of the Weyl group 𝒲(𝔞_2)≅S_3 (and the subscripts in (<ref>) correspond to weights of 𝔞_2). More precisely, 𝒵_3,2^line(τ,T_1,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) can be expressed as a sum over the Weyl orbits of the weights in the fundamental Weyl chamber P^+_2,2 of the irreducible representation Γ_2,2 of 𝔰𝔩(3,ℂ):

𝒵_3,2^line(τ,T_1,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = e^-2ξ ∑_λ=[c_1,c_2]∈P^+_2,2 ϕ^1,2_[c_1,c_2](τ,T_1,ϵ) 𝒪_λ^1(t_f_1,t_f_2) ,

where P^+_2,2 = {[0,0], [1,1], [0,3], [3,0], [2,2]} and the Weyl orbits are given by

𝒪_λ^1(t_f_1,t_f_2) = d_λ ∑_w∈𝒲 e^w(λ) ,

with d_λ defined in (<ref>).
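The polynomials multiplying each ϕ^1,2_[c_1,c_2] above are precisely these normalised orbits; reusing the weyl_orbit helper sketched earlier (our own encoding, monomials as exponent triples):

for lam in [(2, 2), (1, 1), (3, 0), (0, 3), (0, 0)]:
    print(lam, weyl_orbit(*lam))
# [2,2] and [1,1] give the six-term polynomials of (<ref>) with unit
# coefficients, [3,0] and [0,3] the three-term ones, and [0,0] the constant.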
As in section <ref>, the partition function 𝒵_3,2^line(τ,T_1,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) transforms covariantly under SL(2,ℤ)_τ: following the transformation (<ref>), we have for example

ϕ^1,2_[1,1](-1/τ, ϵ/τ, T̃_1/τ) = e^iπ(T̃_1-2ϵ)^2/τ e^2iπ(T̃_1+ϵ)^2/τ e^-3iπT̃_1^2/τ ϕ^1,2_[1,1](τ,ϵ,T̃_1) = e^2iπ 3ϵ^2/τ ϕ^1,2_[1,1](τ,ϵ,T̃_1) .

In general we can introduce the index

ℐ_τ(ϕ^1,2_[c_1,c_2]) = (2ξ, 2ξ) - (λ,λ) = 4 - 1/3 (c_1^2 + c_1c_2 + c_2^2) .

As in the previous section, the partition function can be further simplified by setting T̃_1=2ϵ. We can represent the partition function by the following diagram:

[Weight diagram of Γ_2,2 on the 𝔞_2 weight lattice: each weight carries its coefficient ϕ_[c_1,c_2]; the dashed red lines span the fundamental Weyl chamber (directions ω_1, ω_2), and the weights on the orbits of [3,0] and [0,3] are drawn as red circles.]

The red circles correspond to the terms that are removed by the simplification.

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=nϵ for n>1

For n≥2, the number (and size) of all expressions grows very quickly. However, all terms can still be arranged according to irreducible representations of 𝔞_2, as e.g.
is graphically shown below for n=2:

[Figure: the terms of 𝒵_3,2^line for n=2 arranged on the weight lattice of 𝔞_2 (the weight diagram of Γ_8,8); the dashed red lines mark the fundamental Weyl chamber (directions ω_1, ω_2), the boundary rows carry the coefficients ϕ^2,2_[0,12], ϕ^2,2_[2,11], ϕ^2,2_[4,10], ϕ^2,2_[6,9], ϕ^2,2_[8,8], ϕ^2,2_[9,6], ϕ^2,2_[10,4], ϕ^2,2_[11,2], ϕ^2,2_[12,0], and the weights removed at T̃_1=4ϵ are drawn as red circles.]

where some of the ϕ^2,2_[c_1,c_2] are given by

ϕ^2,2_[8,8](τ,T_1,ϵ) = 1 ,
ϕ^2,2_[6,9] = ϕ^2,2_[9,6] = -2 θ_1(τ;4ϵ)^2 θ_1(τ;4ϵ-T̃_1) θ_1(τ;4ϵ+T̃_1) / [θ_1(τ;ϵ)^2 θ_1(τ;T̃_1)^2] ,
ϕ^2,2_[4,10] = ϕ^2,2_[10,4] = θ_1(τ;4ϵ)^4 θ_1(τ;4ϵ-T̃_1)^2 θ_1(τ;4ϵ+T̃_1)^2 / [θ_1(τ;ϵ)^4 θ_1(τ;ϵ-T̃_1)^2 θ_1(τ;ϵ+T̃_1)^2]
+ θ_1(τ;3ϵ) θ_1(τ;4ϵ)^2 θ_1(τ;5ϵ) θ_1(τ;4ϵ-T̃_1) θ_1(τ;5ϵ-T̃_1) θ_1(τ;3ϵ+T̃_1) θ_1(τ;4ϵ+T̃_1) / [θ_1(τ;ϵ) θ_1(τ;2ϵ)^2 θ_1(τ;2T̃_1)^2 θ_1(τ;ϵ-T̃_1)^2]
+ θ_1(τ;3ϵ) θ_1(τ;4ϵ)^2 θ_1(τ;5ϵ) θ_1(τ;3ϵ-T̃_1) θ_1(τ;4ϵ-T̃_1) θ_1(τ;4ϵ+T̃_1) θ_1(τ;5ϵ+T̃_1) / [θ_1(τ;ϵ) θ_1(τ;2ϵ)^2 θ_1(τ;2T̃_1)^2 θ_1(τ;ϵ+T̃_1)^2] .
Here again the red circles stand for the terms removed when setting T̃_1=4ϵ.[In general, cancellations of this type occur for generic n upon setting T̃_1=2nϵ.]

Based on the above results, for generic n≥2 we propose that the partition function can be expressed by summing the Weyl orbits for the weights in the fundamental Weyl chamber P^+_2n^2,2n^2 of the irreducible representation Γ_2n^2,2n^2:

𝒵_3,2^line(𝐓,𝐭,m=nϵ,ϵ,-ϵ) = e^-2n^2ξ ∑_λ∈P^+_2n^2,2n^2 ϕ^n,2_[c_1,c_2](τ,ϵ) 𝒪_λ^n(t_f_1,t_f_2) ,

with

𝒪_λ^n(t_f_1,t_f_2) = d_λ ∑_w∈𝒲 e^w(λ) ,

and the SL(2,ℤ)_τ indices are

ℐ_τ(ϕ^n,M_λ=[c_1,c_2]) = (2n^2ξ, 2n^2ξ) - (λ,λ) = 4n^4 - 1/3 (c_1^2 + c_1c_2 + c_2^2) .

§ EXAMPLES: COMPACT BRANE CONFIGURATION

After having discussed examples of partition functions of non-compact brane configurations for the particular choice m=nϵ (with n∈ℕ), we now consider compact brane configurations. The non-compact case can be recovered in the limit ∏_a Q_f_a = Q_ρ → 0, as we shall discuss in the following.

§.§ Configuration (N,M)=(2,1)

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=ϵ

We start with the case of two M5-branes, in which case there are two different partitions contributing to (<ref>). To describe the contributing configurations, we introduce the following class of staircase partitions:

𝔪_n = (n, n-1, n-2, …, 1) (a staircase Young diagram with rows of length n, n-1, …, 1), with length |𝔪_n| = ∑_i=1^n i = n(n+1)/2 .

We also use the notation 𝔪_0=∅. With this notation, we only get the following three types of contributions to the partition function (n∈ℕ):

(ν_1,ν_2) = (∅,∅): 1
(ν_1,ν_2) = (𝔪_n,𝔪_n-1): (-Q_f_1)^n(n+1)/2 (-Q_f_2)^n(n-1)/2
(ν_1,ν_2) = (𝔪_n-1,𝔪_n): (-Q_f_1)^n(n-1)/2 (-Q_f_2)^n(n+1)/2

Thus, the normalised partition function (<ref>) is

𝒵_2,1(τ,m=ϵ,t_f_1,t_f_2,ϵ,-ϵ) = 1 + ∑_n=1^∞ (-1)^n^2 [Q_f_1^n(n+1)/2 Q_f_2^n(n-1)/2 + Q_f_1^n(n-1)/2 Q_f_2^n(n+1)/2] .

This expression can also be written in the form

𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∏_k=1^∞ 𝒵_2,1^(k)(τ,m=ϵ,t_f_1,t_f_2,ϵ,-ϵ) = ∏_k=1^∞ (1-Q_ρ^k)(1-Q_f_1Q_ρ^k-1)(1-Q_f_2Q_ρ^k-1) ,

where Q_ρ=Q_f_1Q_f_2.[Notice the relation 𝒵^line_3,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = 𝒵_2,1^(1)(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ), relating compact to non-compact M-brane configurations.] Following the discussion of the non-compact examples, we would like to identify the Kähler parameters t_f_1 and t_f_2 with the affine roots α_0 and α_1, which are introduced in appendix <ref>. This involves choosing which t_f_a contains the null root δ; the final answer does not depend on this choice, as the exchange Q_f_1↔Q_f_2 does not change the partition function. Here we choose

Q_f_1 = e^-α_1 , Q_f_2 = Q_ρ/Q_f_1 = e^α_1-δ = e^-α_0 , Q_ρ = e^-δ ,

and using expression (<ref>) for the positive roots of 𝔞̂_1 we can write

𝒵_2,1(τ,𝐭,m=ϵ,ϵ,-ϵ) = (∏_n=0^∞ (1-Q_f_1Q_ρ^n)) (∏_n=1^∞ (1-Q_ρ^n/Q_f_1)) (∏_k=1^∞ (1-Q_ρ^k)) = ∏_α∈Δ_+(𝔞̂_1) (1-e^-α) .

Using the affine Weyl denominator formula, (<ref>) can be written as a sum over elements of the affine Weyl group (with mult(α)=1 for α∈Δ(𝔞̂_1)):

𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∑_w∈𝒲 (-1)^ℓ(w) e^w(ξ)-ξ ,

where ξ = ω_0+ω_1 = [1,1,0] is the affine Weyl vector. We recall the action of the affine Weyl group 𝒲(𝔞̂_1) as given in (<ref>):

s_0[c_0,c_1,l] = [-c_0, c_1+2c_0, l-c_0] , and s_1[c_0,c_1,l] = [c_0+2c_1, -c_1, l] .
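These reflections are simple enough to iterate by hand, but a short sketch (our own bookkeeping, implementing (<ref>) directly) is convenient for generating the table below:

def s0(c0, c1, l): return (-c0, c1 + 2*c0, l - c0)
def s1(c0, c1, l): return (c0 + 2*c1, -c1, l)

xi = (1, 1, 0)                 # the affine Weyl vector [1,1,0]
w_xi = s1(*s0(*xi))            # the element s_1 s_0 acting on xi
print(w_xi)                    # (5, -3, -1), so w(xi) - xi = -alpha_0 - 3 alpha_1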
We can work out the first few Weyl reflections to check (<ref>):

w∈𝒲(𝔞̂_1)   w(ξ)-ξ         ℓ(w)   grade
1            0               0      0
s_0          -α_0            1      -1
s_1          -α_1            1      0
s_1s_0       -α_0-3α_1       2      -1
s_0s_1       -3α_0-α_1       2      -3
s_1s_0s_1    -3α_0-6α_1      3      -3

Therefore, using (<ref>), we have

𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∑_w∈𝒲 (-1)^ℓ(w) e^w(ξ)-ξ = 1 - Q_f_1 - Q_f_2 + Q_f_1^3Q_f_2 + Q_f_1Q_f_2^3 - Q_f_1^6Q_f_2^3 + … ,

which matches (<ref>). While written as a sum over Weyl reflections of ξ, we can also interpret (<ref>) as a sum over Weyl orbits of weights in the fundamental domain P^+_1,1 of the highest weight representation Γ_1,1 of 𝔞̂_1.[This directly generalises the discussion of section <ref> to compact M5-brane configurations.] Following the discussion of appendix <ref>, every affine weight of 𝔰𝔩(2,ℂ) can be decomposed into fundamental weights (ω_0,ω_1) as follows:

λ = c_0 ω_0 + c_1 ω_1 + lδ = [c_0,c_1,l] , c_0,c_1,l ∈ ℤ ,

such that the affine root t_f_1 and a generic monomial Q_f_1^iQ_f_2^j are decomposed as

t_f_1 = -2ω_0 + 2ω_1 , and Q_f_1^i Q_f_2^j = e^2(j-i)(ω_1-ω_0) e^-jδ .

Furthermore, in table <ref> in appendix <ref> we give the first few grades of the affine representation generated by ξ=[1,1,0]. The affine weights which are colored in red there are contained in the Weyl orbit of ξ. To make the connection to the remaining weights even more manifest, we rewrite (<ref>) in a slightly different manner: we observe that the Weyl orbit of the weight λ=[1,1,r] for r∈ℤ can be written as

e^-ξ ∑_w∈𝒲 (-1)^ℓ(w) e^w([1,1,r]) = e^-ξ e^rδ ∑_w∈𝒲 (-1)^ℓ(w) e^w([1,1,0]) = e^-ξ e^rδ ∑_w∈𝒲 (-1)^ℓ(w) e^w(ξ) ,

such that

∑_l=0^∞ e^-ξ ∑_w∈𝒲 (-1)^ℓ(w) e^w([1,1,-l]) = e^-ξ (∑_l=0^∞ e^-lδ) ∑_w∈𝒲 (-1)^ℓ(w) e^w(ξ) = e^-ξ/(1-e^-δ) ∑_w∈𝒲 (-1)^ℓ(w) e^w(ξ) .

Therefore, we can write

𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = e^-ξ (1-Q_ρ) ∑_λ∈P^+_1,1 𝒪^1_λ=[c_0,c_1,l](t_f_1,t_f_2) ,

where we defined

𝒪^1_λ=[c_0,c_1,l](t_f_1,t_f_2) = ∑_w∈𝒲 (-1)^ℓ(w) e^w(λ) ,

and P^+_1,1 is the fundamental Weyl chamber of the affine representation generated by the weight [1,1], i.e. P^+_1,1 = {[1,1,-l] | l∈ℕ∪{0}}. Thus (up to the prefactor e^-ξ(1-Q_ρ)), the partition function 𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) can be written as a sum over the states contained in Γ_1,1.

Finally, before discussing the more general cases m=nϵ with n>1, we remark that in the limit Q_ρ→0 we reproduce the partition function 𝒵^line_2,1(τ,m=ϵ,t_f_1,ϵ,-ϵ):

lim_Q_ρ→0 𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = 1 - Q_f_1 ,

which indeed agrees with (<ref>). From the point of view of the irreducible representation [1,1,0], due to (<ref>), the limit Q_ρ→0 corresponds to restricting to states with grade l=0. Indeed, according to the weight diagram in table <ref>, the partition function can thus be written as the sum of two states (λ=[1,1,0] and λ=[3,-1,0]):

𝒵^line_2,1(τ,t_f_1,m=ϵ,ϵ,-ϵ) = lim_Q_ρ→0 𝒵_2,1(τ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = ∑_w∈𝒲 (-1)^ℓ(w) e^w(ξ)-ξ = e^-(ω_0+ω_1) ∑_k=0^1 (-1)^k ∑_[c_0,c_1,l]=[1,1,0]-kα_1 e^c_0ω_0+c_1ω_1+lδ = 1 - Q_f_1 ,

where we used the identification (<ref>).

§.§.§ Choice ϵ_1=-ϵ_2=ϵ and m=nϵ for n>1

For m=nϵ with n>1, the partition function is an infinite sum of ratios of theta functions:

𝒵_2,1(τ,ρ,t_f_1,m=nϵ,ϵ,-ϵ) = ∑_i,j^∞ Q_f_1^i Q_f_2^j ∏_r θ(τ;a_rϵ)/θ(τ;b_rϵ) .

To illustrate this expression, we first consider in some detail the case n=2, and generic n later.
n=2: For n=2, the first few terms of the partition function can be written in the following suggestive form:

𝒵_2,1(τ,ρ,t_f_1,m=2ϵ,ϵ,-ϵ) = e^-4(ω_0+ω_1) [ ϕ^2_[4,4,0] (e^4ω_0+4ω_1 + e^12ω_0-4ω_1 + e^-4ω_0+12ω_1-4δ + …)
+ ϕ^2_[6,2,0] (e^6ω_0+2ω_1 + e^10ω_0-2ω_1 + e^-6ω_0+14ω_1-6δ + …)
+ ϕ^2_[8,0,0] (2e^8ω_0 + 2e^-8ω_0+16ω_1-8δ + 2e^24ω_0-16ω_1-8δ + …)
+ ϕ^2_[2,6,-1] (e^2ω_0+6ω_1-δ + e^14ω_0-6ω_1-δ + e^-2ω_0+10ω_1-3δ + …)
+ ϕ^2_[4,4,-1] (e^4ω_0+4ω_1-δ + e^12ω_0-4ω_1-δ + e^-4ω_0+12ω_1-5δ + …)
+ ϕ^2_[8,0,-1] (2e^8ω_0-δ + 2e^-8ω_0+16ω_1-9δ + 2e^24ω_0-16ω_1-9δ + …) + … ] ,

where the notation is the same as in (<ref>) and e^-4(ω_0+ω_1) = e^-4ξ. Indeed, the ϕ^2_[c_0,c_1,l] are indexed by their Dynkin labels c_0,c_1 and their grade l:

ϕ^2_[4,4,0](τ,ϵ) = 1 , ϕ^2_[8,0,0](τ,ϵ) = 2θ_1(τ;3ϵ)/θ_1(τ;ϵ) ,
ϕ^2_[4,4,-1](τ,ϵ) = θ_1(τ;3ϵ)^2/θ_1(τ;ϵ)^2 , ϕ^2_[8,0,-1](τ,ϵ) = -2θ_1(τ;5ϵ)/θ_1(τ;ϵ) ,
ϕ^2_[6,2,0](τ,ϵ) = ϕ^2_[2,6,-1](τ,ϵ) = -θ_1(τ;2ϵ)^2/θ_1(τ;ϵ)^2 .

Comparing with affine representations of 𝔰𝔩(2,ℂ) (as given in appendix <ref>), we can write the compact partition function (<ref>) as a sum over Weyl orbits of the representatives in the fundamental Weyl chamber P^+_4,4 of the affine [4,4] representation:

𝒵_2,1(τ,ρ,t_f_1,t_f_2,m=2ϵ,ϵ,-ϵ) = e^-4ξ ∑_λ∈P^+_4,4 ϕ^2_[c_0,c_1,l](τ,ϵ) 𝒪_λ^2(t_f_1,t_f_2) ,

where the individual Weyl orbits are given by

𝒪_λ=[c_0,c_1,l]^2(t_f_1,t_f_2) = d_λ ∑_w∈𝒲 e^w(λ) ,

where the normalisation is given by

d_λ=[c_0,c_1] = { 1/2 if c_0=0 or c_1=0 ; 1 else } .

The weights of the affine [4,4] representation that are in the fundamental Weyl chamber P^+_4,4 are those with positive Dynkin labels:

P^+_4,4 = {[0,8,-l], [2,6,-l], [4,4,-l], [6,2,-l], [8,0,-l]}_l∈ℕ∪{0} .

As for the finite case, there are again weights that are fixed under the action of certain elements of the Weyl group, e.g.

s_0[0,8,-2] = [0,8,-2] , s_0∈𝒲 .

As for the non-compact cases, the arguments of the ϕ^2_[c_0,c_1,l] can be related to their corresponding affine weights λ=[c_0,c_1,l] through

ℐ_τ(ϕ^2_λ=[c_0,c_1,l]) = (4ξ|4ξ) - (λ|λ) + 8l = 4 - 1/4 c_1^2 - 8l ,

where (.|.) stands for the inner product in the affine ω_1 basis. Before continuing to the case of generic n, we consider the decompactification limit Q_ρ→0. In this case only those weights with l=0 survive, such that (with (<ref>))

lim_ρ→i∞ 𝒵_2,1(τ,ρ,t_f_1,m=2ϵ,ϵ,-ϵ) = (1+Q_f_1^4)ϕ^2_[4,4,0] + (Q_f_1+Q_f_1^3)ϕ^2_[6,2,0] + 2Q_f_1^2 ϕ^2_[8,0,0] = (1+Q_f_1^4) - (Q_f_1+Q_f_1^3) θ_1(τ;2ϵ)^2/θ_1(τ;ϵ)^2 + 2Q_f_1^2 θ_1(τ;3ϵ)/θ_1(τ;ϵ) = 𝒵^line_2,1(τ,t_f_1,m=2ϵ,ϵ,-ϵ) .

This expression indeed agrees with (<ref>), as expected, since in the limit Q_ρ→0 the brane setup corresponds to the non-compact configuration (N,M)=(2,1).

Generic n: The above analysis can be extended to n>2, with a pattern arising which allows us to conjecture the structure for generic n. Indeed, we propose that the partition function can be written as a sum over Weyl orbits of the representatives in the fundamental Weyl chamber P^+_n^2,n^2 of the affine [n^2,n^2] representation:

𝒵_2,1(τ,ρ,t_f_1,m=nϵ,ϵ,-ϵ) = e^-n^2ξ ∑_λ∈P^+_n^2,n^2 ϕ^n_[c_0,c_1,l](τ,ϵ) 𝒪_λ^n(t_f_1,t_f_2) ,

where the Weyl orbits are given by[Notice that d_λ=1 for all λ∈P^+_1,1, such that no normalisation is required in (<ref>).]

𝒪_λ^n(t_f_1,t_f_2) = d_λ ∑_w∈𝒲 (-1)^n·ℓ(w) e^w(λ) .

The fundamental Weyl chamber P^+_n^2,n^2 is given by

P^+_n^2,n^2 = {[0,2n^2,-l], [2,2(n^2-1),-l], …, [2(n^2-1),2,-l], [2n^2,0,-l]}_l∈ℕ∪{0} .

In this case the relation (<ref>) becomes

ℐ_τ(ϕ^n_λ=[c_0,c_1,l]) = (n^2ξ|n^2ξ) - (λ|λ) + kl = n^4/4 - 1/4 c_1^2 - kl ,

where k = c_0+c_1 = 2n^2 denotes the level. Explicit expressions for the coefficient functions ϕ^n_[c_0,c_1,l] are given in appendix <ref>.
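The index bookkeeping can be verified mechanically; a small sketch (our own helper names; a_r and b_r denote the ϵ-multiples appearing in the numerator and denominator of a theta quotient):

from fractions import Fraction

def index_from_thetas(nums, dens):
    """I_tau = (1/2)(sum_r a_r^2 - sum_r b_r^2)."""
    return Fraction(sum(a*a for a in nums) - sum(b*b for b in dens), 2)

def index_from_weight(c0, c1, l, n):
    """I_tau = (n^2 xi | n^2 xi) - (lam|lam) + k*l with level k = c0 + c1 = 2n^2."""
    return Fraction(n**4 - c1**2, 4) - (c0 + c1) * l

# phi^2_[4,4,-1] = theta_1(3 eps)^2 / theta_1(eps)^2, cf. (<ref>):
assert index_from_thetas([3, 3], [1, 1]) == index_from_weight(4, 4, -1, 2) == 8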
Finally, due to the fact that

ϕ^n_[c_0,c_1,l=0](τ,ϵ) = ϕ^n_[n^2-(c_0-c_1)/2](τ,ϵ) ,

with ϕ^n_[k] defined in (<ref>), we have in the decompactification limit

lim_ρ→i∞ 𝒵_2,1(τ,ρ,t_f_1,m=nϵ,ϵ,-ϵ) = 𝒵^line_2,1(τ,t_f_1,m=nϵ,ϵ,-ϵ) , ∀ n≥1 ,

as is expected from the point of view of the brane configurations.

§.§ Configurations (N,1) for N>2

We can generalise the discussion of the previous subsection to cases N>2. For simplicity we restrict to n=1 and show that the partition function can be written as a product over the positive roots of 𝔞̂_N-1.

The first case corresponds to N=3, i.e. three M5-branes. For the partition function, this requires summing over three different partitions (ν_1,ν_2,ν_3). Analysing the configurations which lead to a non-trivial contribution, we summarise the first few in the following list, with

g_(3,1)^(ν_1,ν_2,ν_3) = (∏_a=1^3 (-Q_f_a)^|ν_a|) ∏_a=1^3 ∏_(i,j)∈ν_a θ_1(τ;z^a_ij)θ_1(τ;v^a_ij)/[θ_1(τ;w^a_ij)θ_1(τ;u^a_ij)] :

(∅,∅,∅) → 1 ; ((1),∅,∅) → -Q_f_1 ; (∅,(1),∅) → -Q_f_2 ; (∅,∅,(1)) → -Q_f_3 ;
((2),(1),∅) → Q_f_1^2Q_f_2 ; (∅,(2),(1)) → Q_f_2^2Q_f_3 ; ((1),∅,(2)) → Q_f_1Q_f_3^2 ;
((1,1),∅,(1)) → Q_f_1^2Q_f_3 ; ((1),(1,1),∅) → Q_f_1Q_f_2^2 ; (∅,(1),(1,1)) → Q_f_2Q_f_3^2 ;
((1,1),∅,(2)) → -Q_f_1^2Q_f_3^2 ; (∅,(2),(1,1)) → -Q_f_2^2Q_f_3^2 ;
((3,1),(2),(1)) → -Q_f_1^4Q_f_2^2Q_f_3 ; ((1),(3,1),(2)) → -Q_f_1Q_f_2^4Q_f_3^2 ; ((2),(1),(3,1)) → -Q_f_1^2Q_f_2Q_f_3^4 ;
((2,1,1),(1),(1,1)) → -Q_f_1^4Q_f_2Q_f_3^2 ; ((1,1),(2,1,1),(1)) → -Q_f_1^2Q_f_2^4Q_f_3 ; ((1),(1,1),(2,1,1)) → -Q_f_1Q_f_2^2Q_f_3^4 .

The first few terms in the partition function therefore take the form

𝒵_3,1(τ,t_f_1,t_f_2,t_f_3,m=ϵ,ϵ,-ϵ) = 1 - (Q_f_1+Q_f_2+Q_f_3) + (Q_f_1^2Q_f_2 + Q_f_1Q_f_2^2 + Q_f_1^2Q_f_3 + Q_f_2^2Q_f_3 + Q_f_1Q_f_3^2 + Q_f_2Q_f_3^2) - (Q_f_1^2Q_f_2^2 + Q_f_1^2Q_f_3^2 + Q_f_2^2Q_f_3^2) - (Q_f_1^4Q_f_2^2Q_f_3 + Q_f_1^2Q_f_2^4Q_f_3 + Q_f_1^4Q_f_2Q_f_3^2 + Q_f_1Q_f_2^4Q_f_3^2 + Q_f_1^2Q_f_2Q_f_3^4 + Q_f_1Q_f_2^2Q_f_3^4) + …

This expansion is matched by the expression[We have checked (<ref>) up to order 12 in the expansion of Q_f_1,2,3.]

𝒵_3,1(τ,t_f_1,t_f_2,t_f_3,m=ϵ,ϵ,-ϵ) = ∏_k=1^∞ 𝒵_3,1^(k)(τ,m=ϵ,t_f_1,t_f_2,t_f_3,ϵ,-ϵ)
= ∏_k=1^∞ (1-Q_ρ^k)^2 (1-Q_ρ^k-1Q_f_1)(1-Q_ρ^k-1Q_f_2)(1-Q_ρ^k-1Q_f_3)(1-Q_ρ^kQ_f_1^-1)(1-Q_ρ^kQ_f_2^-1)(1-Q_ρ^kQ_f_3^-1)
= ∏_α∈Δ_+(𝔞̂_2) (1-e^-α)^mult(α) ,

where Δ_+(𝔞̂_2) is the set of positive roots of 𝔞̂_2 (the imaginary roots kδ have multiplicity 2, accounting for the factor (1-Q_ρ^k)^2). Notice the relation

𝒵^line_4,1(τ,t_f_1,t_f_2,t_f_3,m=ϵ,ϵ,-ϵ) = 𝒵_3,1^(1)(τ,m=ϵ,t_f_1,t_f_2,t_f_3,ϵ,-ϵ) / [(1-Q_ρ)(1-Q_f_1Q_f_3)] .

Repeating the computation for N=4, we find, up to order 6 in the expansion of the Q_f_i, that the partition function can be written as

𝒵_4,1(τ,t_f_1,t_f_2,t_f_3,t_f_4,m=ϵ,ϵ,-ϵ) = ∏_k=1^∞ 𝒵_4,1^(k)(τ,m=ϵ,t_f_1,t_f_2,t_f_3,t_f_4,ϵ,-ϵ)
= ∏_k=1^∞ (1-Q_ρ^k)^3 (1-Q_ρ^k-1Q_f_1)(1-Q_ρ^k-1Q_f_2)(1-Q_ρ^k-1Q_f_3)(1-Q_ρ^k-1Q_f_4)
× (1-Q_ρ^k/Q_f_1)(1-Q_ρ^k/Q_f_2)(1-Q_ρ^k/Q_f_3)(1-Q_ρ^k/Q_f_4)
× (1-Q_ρ^k/(Q_f_1Q_f_2))(1-Q_ρ^k/(Q_f_2Q_f_3))(1-Q_ρ^k/(Q_f_3Q_f_4))(1-Q_ρ^k/(Q_f_1Q_f_4))
= ∏_α∈Δ_+(𝔞̂_3) (1-e^-α)^mult(α) ,

with Q_ρ = Q_f_1Q_f_2Q_f_3Q_f_4. Notice the relation

𝒵^line_5,1(τ,t_f_1,t_f_2,t_f_3,t_f_4,m=ϵ,ϵ,-ϵ) = 𝒵_4,1^(1)(τ,m=ϵ,t_f_1,t_f_2,t_f_3,t_f_4,ϵ,-ϵ) / [(1-Q_ρ)^2(1-Q_f_1Q_f_4)(1-Q_f_1Q_f_3Q_f_4)(1-Q_f_1Q_f_2Q_f_4)] .

§.§ Configuration (N,M)=(2,2)

Finally, we can similarly discuss cases (N,M) with M>1. The simplest such case is the configuration (2,2), and we shall limit ourselves to the choice m=ϵ_1=-ϵ_2=ϵ.
Analysing the partition function 𝒵_2,2(𝐓,𝐭,m=ϵ,ϵ,-ϵ) in the same fashion as above, we can write it in the following suggestive form:

𝒵_2,2(𝐓,𝐭,m=ϵ,ϵ,-ϵ) = e^-2(ω_0+ω_1) [ (e^2ω_0+2ω_1 + e^6ω_0-2ω_1 + e^-2ω_0+6ω_1-2δ + e^10ω_0-6ω_1-2δ + …)
- 2 θ_1(τ;T̃_1+ϵ)θ_1(τ;T̃_1-ϵ)/θ_1(τ;T̃_1)^2 (e^4ω_0 + e^-4ω_0+8ω_1-4δ + e^12ω_0-8ω_1-4δ + …)
- 2 θ_1(τ;T̃_1+ϵ)θ_1(τ;T̃_1-ϵ)/θ_1(τ;T̃_1)^2 (e^4ω_1-δ + e^8ω_0-4ω_1-δ + e^-8ω_0+12ω_1-9δ + …)
+ 2 θ_1(τ;T̃_1+2ϵ)θ_1(τ;T̃_1-2ϵ)/θ_1(τ;T̃_1)^2 (e^2ω_0+2ω_1-δ + e^6ω_0-2ω_1-δ + e^-2ω_0+6ω_1-3δ + …) + … ] ,

where we have used the same notation as in section <ref>. Comparing (<ref>) with the previous examples, we notice that the partition function can again be written as a sum of Weyl orbits, where the affine weights of the representatives lie in the fundamental Weyl chamber P^+_2,2 of the affine representation with highest weight [2,2,0] (see appendix <ref>):

𝒵_2,2(τ,T_1,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) = e^-2ξ ∑_λ∈P^+_2,2 ϕ^1_[c_0,c_1,l](τ,T_1,ϵ) 𝒪_λ(t_f_1,t_f_2) .

Here the Weyl orbits are given by

𝒪_λ^1(t_f_1,t_f_2) = d_λ ∑_w∈𝒲 e^w(λ) , with d_λ=[c_0,c_1,-l] = { 1/2 if c_0=0 or c_1=0 ; 1 else } ,

where the Weyl reflections are explicitly given as in (<ref>) and the factor d_λ takes into account the presence of fixed points of the Weyl action. The fundamental Weyl chamber is defined as

P^+_2,2 = {[0,4,-l], [2,2,-l], [4,0,-l]}_l∈ℕ∪{0} ,

and the ϕ^1_λ=[c_0,c_1,-l] in (<ref>) are given by

ϕ^1_[0,4,-l](τ,T_1,ϵ) = { -2 θ_1(τ;T̃_1+(2r+1)ϵ)θ_1(τ;T̃_1-(2r+1)ϵ)/θ_1(τ;T̃_1)^2 if l=r(r+1)+1 for r∈ℕ∪{0} ; 0 else } ,
ϕ^1_[2,2,-l](τ,T_1,ϵ) = { 1 if l=0 ; 2 θ_1(τ;T̃_1+2rϵ)θ_1(τ;T̃_1-2rϵ)/θ_1(τ;T̃_1)^2 if l=r^2 for r∈ℕ ; 0 else } ,
ϕ^1_[4,0,-l](τ,T_1,ϵ) = { -2 θ_1(τ;T̃_1+(2r+1)ϵ)θ_1(τ;T̃_1-(2r+1)ϵ)/θ_1(τ;T̃_1)^2 if l=r(r+1) for r∈ℕ∪{0} ; 0 else } .

We notice that the arguments of the ϕ^1_[c_0,c_1,l] are related to the affine weights by

ℐ_τ(ϕ^1_λ=[c_0,c_1,l]) = (2ξ|2ξ) - (λ|λ) + 4l = 1 - 1/4 c_1^2 - 4l ,

which directly generalises the cases M=1 discussed above.

Before closing this section we would like to make a further remark: the brane configuration (N,M)=(2,2) is self-dual under the exchange of N and M. Furthermore, the appearance of the symmetry 𝔞̂_N-1=1 is due to the expansion of 𝒵_2,2(τ,T_1,ρ,t_f_1,t_f_2,m=ϵ,ϵ,-ϵ) with respect to Q_f_1,2, and we would expect a similar structure with respect to Q̅_1,2. It is therefore interesting to see whether the partition function can be written in a fashion that makes a symmetry 𝔞̂_N-1=1 ⊗ 𝔞̂_M-1=1 manifest. To this end, we first have to re-instate the normalisation factor (W_M=2(∅))^N=2|_ϵ_1=-ϵ_2=m=ϵ in (<ref>). The latter can be read off from (<ref>):

W_2(∅)|_ϵ_1=-ϵ_2=m=ϵ = lim_ρ→i∞ 𝒵_2,1(ρ,T_1,T_2,m=ϵ,ϵ,-ϵ) = (∏_k=0^∞ (1-Q̅_1Q_τ^k)) (∏_k=1^∞ (1-Q_τ^k/Q̅_1)(1-Q_τ^k)) = -i Q̅_1^1/2/Q_τ^1/8 θ_1(τ;T̃_1) .

Thus, multiplying the coefficient functions (<ref>) – (<ref>) with (W_2(∅))^2|_ϵ_1=-ϵ_2=m=ϵ, the non-trivial ϕ^1_λ are (up to integer coefficients) of the form

-Q̅_1/Q_τ^1/4 θ_1(τ;T̃_1+kϵ)θ_1(τ;T̃_1-kϵ) , for k∈ℕ∪{0} .

Upon introducing

T_1 = -2κ_0 + 2κ_1 , Q_τ = e^-μ ,

which mirror (<ref>) and (<ref>), such that

Q̅_1^a Q̅_2^b = e^2(b-a)(κ_1-κ_0)-bμ , ∀ a,b∈ℕ ,

we can write for (<ref>)

-Q̅_1/Q_τ^1/4 θ_1(τ;T̃_1+kϵ)θ_1(τ;T̃_1-kϵ) = e^-2(κ_0+κ_1) ∑_λ∈P^+_2,2 φ^1_[c_0,c_1,l](k,ϵ) 𝒪_λ(T_1,T_2) ,

where we denote by e^-2(κ_0+κ_1) = e^-2ζ the exponential of the Weyl vector ζ of 𝔞̂_M-1=1, and

φ^1_[0,4,-l](k,ϵ) = { -(ϵ^-k(2s+1) + ϵ^k(2s+1)) if l=s(s+1)+1 for s∈ℕ∪{0} ; 0 else } ,
φ^1_[2,2,-l](k,ϵ) = { 1 if l=0 ; ϵ^2ks + ϵ^-2ks if l=s^2 for s∈ℕ ; 0 else } ,
φ^1_[4,0,-l](k,ϵ) = { -(ϵ^-k(2s+1) + ϵ^k(2s+1)) if l=s(s+1) for s∈ℕ∪{0} ; 0 else } ,

which exactly mirror (<ref>) – (<ref>).
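The grade patterns governing the ϕ^1 and φ^1 above (pronic numbers for [0,4] and [4,0], squares for [2,2]) are easy to tabulate; a small sketch (helper name ours):

def grade_support(lmax):
    """Grades l at which phi^1_[c0,c1,-l] (equivalently varphi^1) is non-zero, cf. (<ref>)-(<ref>)."""
    r = range(lmax + 1)
    return {
        '[0,4]': sorted(s*(s+1) + 1 for s in r if s*(s+1) + 1 <= lmax),
        '[2,2]': sorted({0} | {s*s for s in r if 1 <= s*s <= lmax}),
        '[4,0]': sorted({s*(s+1) for s in r if s*(s+1) <= lmax}),
    }

print(grade_support(12))
# {'[0,4]': [1, 3, 7], '[2,2]': [0, 1, 4, 9], '[4,0]': [0, 2, 6, 12]}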
Thus the partition function can be written in the form

𝒵_2,2(𝐓,𝐭,m=ϵ,ϵ,-ϵ) = e^-2(ζ+ξ) ∑_λ_1,λ_2∈P^+_2,2 𝔭^1_λ_1,λ_2(ϵ) 𝒪_λ_1(t_f_1,t_f_2) 𝒪_λ_2(T_1,T_2) ,

where the non-vanishing coefficients 𝔭^1_λ_1,λ_2(ϵ) are (with s,s'∈ℕ∪{0} and r∈ℕ):

𝔭^1_[0,4,-s(s+1)-1],[c_0,c_1,-l](ϵ) = { 2(ϵ^-(2s+1)(2s'+1) + ϵ^(2s+1)(2s'+1)) if (c_0,c_1)=(0,4) and l=s'(s'+1)+1 ; -2 if (c_0,c_1)=(2,2) and l=0 ; -2(ϵ^-(2s+1)2r + ϵ^(2s+1)2r) if (c_0,c_1)=(2,2) and l=r^2 ; 2(ϵ^-(2s+1)(2s'+1) + ϵ^(2s+1)(2s'+1)) if (c_0,c_1)=(4,0) and l=s'(s'+1) } ,

𝔭^1_[2,2,0],[c_0,c_1,-l](ϵ) = { -2 if (c_0,c_1)=(0,4) and l=s'(s'+1)+1 ; 1 if (c_0,c_1)=(2,2) and l=0 ; 2 if (c_0,c_1)=(2,2) and l=r^2 ; -2 if (c_0,c_1)=(4,0) and l=s'(s'+1) } ,

𝔭^1_[2,2,-r^2],[c_0,c_1,-l](ϵ) = { -2(ϵ^-2r(2s'+1) + ϵ^2r(2s'+1)) if (c_0,c_1)=(0,4) and l=s'(s'+1)+1 ; 2 if (c_0,c_1)=(2,2) and l=0 ; 2(ϵ^-4rs' + ϵ^4rs') if (c_0,c_1)=(2,2) and l=s'^2 ; -2(ϵ^-2r(2s'+1) + ϵ^2r(2s'+1)) if (c_0,c_1)=(4,0) and l=s'(s'+1) } ,

𝔭^1_[4,0,-s(s+1)],[c_0,c_1,-l](ϵ) = { 2(ϵ^-(2s+1)(2s'+1) + ϵ^(2s+1)(2s'+1)) if (c_0,c_1)=(0,4) and l=s'(s'+1)+1 ; -2 if (c_0,c_1)=(2,2) and l=0 ; -2(ϵ^-(2s+1)2r + ϵ^(2s+1)2r) if (c_0,c_1)=(2,2) and l=r^2 ; 2(ϵ^-(2s+1)(2s'+1) + ϵ^(2s+1)(2s'+1)) if (c_0,c_1)=(4,0) and l=s'(s'+1) } .

Notice in particular that 𝔭^1_λ_1,λ_2(ϵ) = 𝔭^1_λ_2,λ_1(ϵ). Therefore, the form (<ref>) makes the duality of the partition function under the exchange (N,M)⟷(M,N) manifest.

§ GENERIC CONFIGURATION (N,M) AND REPRESENTATIONS

After the analysis of many specific cases, we compile in this section generic relations that we conjecture to hold for arbitrary N, M and n. As the non-compact case is obtained as a limit of the compact case, we start with the latter. We propose that the normalised partition function (<ref>) can be written as a sum over the Weyl orbits of the representative weights in the fundamental Weyl chamber P^+_Mn^2,…,Mn^2 of the affine highest weight representation generated by [Mn^2,…,Mn^2,0] (with N Dynkin labels equal to Mn^2) of 𝔰𝔩(N,ℂ):

𝒵_N,M(τ,T,ρ,t_f,m=nϵ,ϵ,-ϵ) = e^-Mn^2ξ ∑_λ∈P^+_Mn^2,…,Mn^2 ϕ^n,M_[c_0,…,c_N-1,l](τ,T,ϵ) 𝒪_λ^n(t_f) ,

where ξ = ω_0 + … + ω_N-1 denotes the affine Weyl vector, defined in terms of the fundamental weights ω_i, and the Weyl orbits are given by

𝒪_λ^n(t_f) = ∑_w∈𝒲 (-1)^Mn·ℓ(w) e^w(λ) .

The ϕ^n,M_λ are given by ratios of Jacobi theta functions. They transform under SL(2,ℤ)_τ transformations as

ϕ^n,M_[c_0,…,c_N-1,l](-1/τ, 𝐓/τ, ϵ/τ) = e^2πi ℐ_τ ϵ^2/τ ϕ^n,M_[c_0,…,c_N-1,l](τ,𝐓,ϵ) ,

where the index ℐ_τ is related to the Dynkin labels by

ℐ_τ(ϕ^n,M_λ=[c_0,…,c_N-1,l]) = (Mn^2ξ | Mn^2ξ) - (λ|λ) + kl ,

where (.|.) stands for the inner product in the basis of affine fundamental weights (ω_0,…,ω_N-1) and k denotes the level. The partition functions of non-compact brane configurations are obtained by taking the limit

t_f_N → ∞ , i.e. δ → ∞ .

From the point of view of affine representations this means that we only keep the weights with grade l=0, as e^-lδ → 0. The remaining states fall into the corresponding 𝔰𝔩(N,ℂ) representations, with the non-affine counterpart of λ as highest weight vector:

λ = (λ̄; k; l) ⟶ λ̄ .

The affine Weyl group 𝒲̂ reduces to the finite one 𝒲.
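For concreteness, the affine reflections entering 𝒪_λ^n can be realised directly on the labels [c_0,…,c_N-1,l]. The following sketch is our own bookkeeping, using the Cartan matrix of A^(1)_N-1; it reduces to the action (<ref>) for N=2:

def reflect(i, w, N):
    """s_i acting on an affine weight [c_0,...,c_{N-1}, l] of a-hat_{N-1}."""
    c, l = list(w[:-1]), w[-1]
    def A(j, i):                       # affine Cartan matrix of A^(1)_{N-1}
        if N == 2:                     # a-hat_1: the off-diagonal entries are -2
            return 2 if i == j else -2
        d = (j - i) % N
        return 2 if d == 0 else (-1 if d in (1, N - 1) else 0)
    c_new = [c[j] - A(j, i) * c[i] for j in range(N)]
    l_new = l - c[i] if i == 0 else l  # only s_0 shifts the grade
    return tuple(c_new) + (l_new,)

print(reflect(0, (1, 1, 0), 2))        # (-1, 3, -1), matching s_0[1,1,0] above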
Furthermore, the ϕ's at grade 0 are identified with their non-affine counterparts in the following way:

ϕ^n,M_[c_0,c_1,…,c_N-1,0](𝐓,ϵ) = ϕ^n,M_[c_1,…,c_N-1](𝐓,ϵ) .

After taking the limit we are thus left with a sum over the Weyl orbits for the representative weights in the fundamental Weyl chamber P^+_Mn^2,…,Mn^2 of the irreducible highest weight representation Γ_Mn^2,…,Mn^2 of 𝔰𝔩(N,ℂ):

𝒵_N,M^line(T,t_f_1,…,t_f_N-1,m=nϵ,ϵ,-ϵ) = e^-Mn^2ξ ∑_λ∈P^+_Mn^2,…,Mn^2 ϕ_[c_1,…,c_N-1](T,ϵ) 𝒪_λ^n(t_f_1,…,t_f_N-1) ,

with the finite Weyl orbits given by

𝒪_λ^n(t_f_1,…,t_f_N-1) = ∑_w∈𝒲 (-1)^Mn·ℓ(w) e^w(λ) .

The index (<ref>) reduces to[Notice that the transformation (<ref>) is compatible with the decompactification limit.]

ℐ = (Mn^2ξ, Mn^2ξ) - (λ,λ) ,

where (.,.) denotes the inner product in the basis of fundamental weights (ω_1,…,ω_N-1).

§ CONCLUSIONS

In this paper we have studied the BPS partition functions of N parallel M5-branes probing a transverse ALE_A_M-1 space. We have distinguished the cases of M5-branes separated along 𝕊^1 (with partition function 𝒵_N,M defined in (<ref>)) and along ℝ (with partition function 𝒵^line_N,M defined in (<ref>)). The latter can be obtained from the former through the decompactification limit that sends one of the distances t_f_a of the branes to infinity.

To regularise the BPS partition functions, a set of deformation parameters, denoted by (m,ϵ_1,ϵ_2), needs to be introduced. For simplicity, we have chosen to work in the so-called unrefined limit ϵ_1=-ϵ_2=ϵ. Furthermore, motivated by studying the holonomy structure of the supercharges (from the point of view of the M-string world-sheet theory), we have imposed m=nϵ for n∈ℕ. We have demonstrated in a large series of examples (and conjecture that our results hold for generic values of N,M,n∈ℕ) that this limit exhibits an 𝔞_N-1 (or affine 𝔞̂_N-1) symmetry of the BPS counting function. Indeed, in the case of non-compact brane configurations, (after a suitable normalisation) 𝒵^line_N,M depends only polynomially on Q_f_a = e^-t_f_a. Upon identifying the latter with the roots of 𝔞_N-1, the partition function can be organised as a sum of orbits of S_N, which is the Weyl group of 𝔞_N-1. Furthermore, the representatives of each orbit fall into the fundamental Weyl chamber P^+_Mn^2,…,Mn^2 of the irreducible representation Γ_Mn^2,…,Mn^2 of 𝔞_N-1.
While the results presented here are specific to the choice m=nϵ, the 𝔞_N-1 symmetries are expected to be unbroken for generic deformations as well: indeed the dual Calabi-Yau manifolds X_N,M can be understood as elliptic fibrations over A_N-1. Thus, our results have highlighted a region in the moduli space in which the latter are very manifest.In view of the many other physical systems that are dual to the M-brane configurations that we have studied here, we expect our results to have many applications in the future: one of them is the study of little string theories (LSTs) <cit.> (see also <cit.> for reviews). Indeed, the compact brane configurations (N,M) are related to a particular class of LSTs <cit.> with 𝒩=(1,0) supersymmetry. It will be interesting in the future to find further regions in the moduli spaces of LSTs which make more of their symmetries manifest or possibly reveal new ones. Furthermore, our findings may also prove useful for studying algebraic properties of the double-quantised Seiberg-Witten geometry related to the topological string partition function of X_N,M and the definition of qq-characters (see <cit.> and <cit.> for recent progress, respectively). Finally, an interesting open question remains why, in the limits we have discussed in this work, the partition function is governed by a single irreducible/integrable representation. While we have argued, based on the structure of the preserved supercharges, that the choice (<ref>) and (<ref>) leads to cancellations among different states in the partition function (and thus to massive simplifications), it does not fully explain why the remaining contributions have the structure of a single representation. As was pointed out to us by A. Iqbal, it would be interesting to study these results from the point of view of Chern-Simons theory (see e.g. <cit.>) to see if one can find an interpretation from this side. We leave this possibility for future work.§ ACKNOWLEDGEMENTSWe would like to thank A. Iqbal for many stimulating discussions and very useful comments on the current manuscript. Furthermore, SH would like to thank S.J. Rey for many enlightening discussions. § AFFINE LIE ALGEBRAS §.§ Central Extension of Simple Lie AlgebrasIn this appendix we follow <cit.> and <cit.>. Reviews of this material can also be found in <cit.>. The affine Lie algebra 𝔤̂ has the following decomposition𝔤̂=𝔤̃⊕ℂk⊕ℂL_0 , where 𝔤̃=𝔤⊗ℂ[t,t^-1] corresponds to the so-called loop algebra. For a generator J^a ∈𝔤 the corresponding elements of the loop algebra take the formJ^a⊗ t^l = J^a_l ∈𝔤̃ , l∈ℤ .The loop algebra is then centrally extended in a non-trivial way (all central extensions of simple Lie algebras turn out to be trivial <cit.>) by the addition of k, with the property that it commutes with all the generators:[J^a_l,k]=0 .It can be shown <cit.> that this extension is unique. (Notice that the abelian subalgebra { H^1_0,...,H^r_0,k} is not maximally abelian; to define the Cartan subalgebra, L_0 needs to be included.) L_0 is the so-called grading operator, defined as a differential operator in t; the label l appearing in[L_0,J^a_l]=-lJ^a_lis called the grade of J^a_l. The eigenvectors under the action of ad(H^i_0) and ad(k) on the generators E^α_l are infinitely degenerate. It is thus necessary to introduce L_0 so that {H^1_0,...,H^r_0,k,L_0 } forms a Cartan subalgebra. An affine weight λ̂ can thus be denoted by its eigenvalues under the Cartan subalgebra,λ̂=(λ;k;l) ,where λ is the corresponding weight in the finite Lie algebra 𝔤.
The inner product between affine weights is defined as(λ̂|μ̂)=(λ|μ) + k_λl_μ+k_μl_λ ,where the first term on the right hand side is the inner product between the finite weights. At the level of the root system the construction can be seen as follows. The root system Δ of any finite dimensional Lie algebra 𝔤 (whose basis is given by the simple positive roots α_i) contains a highest root θ∈Δ, such thatθ+α_i∉Δ ,∀ i=1,…,r . We can use θ to extend the root lattice Λ_𝔤. To this end, we introduce the lattice Π^1,1 spanned by {β_1,β_2} whose inner product satisfies(β_1|β_2)=1 ,(β_1|β_1)=(β_2|β_2)=0 ,(β_1|α_i)=(β_2|α_i)=0∀ i=1,…,r .We now define the root lattice Λ_𝔤̂ of the new algebra 𝔤̂ byΛ_𝔤̂=∑_a=0^rℤ α_a⊂Λ_𝔤⊕Π^1,1 ,which is spanned by the new set of simple affine roots{α_0=β_1-θ , α_i , i=1,…,r} .In complete analogy to the finite simple Lie algebra 𝔤, the affine Weyl group 𝒲̂ is defined to be the group generated by reflections with respect to the affine roots. As there is an infinity of the latter, the Weyl group is infinite as well. We will give further details for the specific case 𝔞̂_1, which is the example that we will mostly deal with in this work and which we briefly discuss below. §.§ Affine Lie Algebra 𝔞̂_1 The affine counterpart to the highest root of 𝔞_1 isθ=α_1=(α_1;0;0) .The null root δ is defined as δ=β_1=(0;0;1) .The term null root comes from the fact that (δ|δ)=0. Thus, the simple positive roots of 𝔞̂_1 areα_0=δ-α_1=(-α_1;0;1) ,andα_1=(α_1;0;0) .The root system of 𝔞̂_1 contains infinitely many (imaginary) roots and the explicit expression can be found in <cit.>:Δ̂={±α_1+nδ |n∈ℤ}∪{kδ |k∈ℤ∖{ 0}} ,such that the positive roots areΔ̂_+={α_1+nδ |n∈ℕ∪{0}}∪{-α_1+nδ |n∈ℕ}∪{kδ |k∈ℕ} .In analogy to the finite Lie algebras one can introduce the fundamental weights. In the case of 𝔞̂_1 they are given byω_0= (0;1;0) ,andω_1=(1;1;0) .Every affine weight λ̂ can be decomposed asλ̂=λ_0 ω_0 + λ_1 ω_1 + lδ ,λ_0,λ_1,l ∈ℤ ,where λ_0,λ_1 are the so-called Dynkin labels. λ_1 is the finite Dynkin label of the associated finite weight λ, while λ_0 is related to the level eigenvalue k byλ_0=k-λ_1 .Alternatively to (<ref>) we can label the affine weights by their Dynkin labels and by their grade,λ̂= [ λ_0,λ_1 ,l] ,which is the notation we will use in the main part of this work. In the affine case the Weyl vector cannot be defined in terms of the positive roots, as there is an infinity of them. The definition as the sum of the fundamental weights is still valid:ρ̂= ω_0 + ω_1 =[1,0,0]+[0,1,0]= [1,1,0] .The Weyl group 𝒲̂(𝔞̂_1) is generated by two elements s_0,s_1 which correspond to the reflections with respect to α_0 and α_1. Their action on affine weights is given as follows:s_0[λ_0,λ_1,l]=[-λ_0,λ_1+2λ_0,l-λ_0] ,s_1[λ_0,λ_1,l]=[λ_0+2λ_1,-λ_1,l] .From this we immediately see that the action of s_0 changes the grade whereas the action of s_1 does not affect it. § REPRESENTATION THEORY OF 𝔰𝔩(2,ℂ) AND 𝔰𝔩(3,ℂ)In this section we review those representations of 𝔰𝔩(2,ℂ) and 𝔰𝔩(3,ℂ) which are relevant for the discussion in section <ref>. Our notation mainly follows <cit.> (see also <cit.>).§.§ Irreducible Representations of 𝔰𝔩(2,ℂ)We recall that the Lie algebra 𝔰𝔩(2,ℂ) is generated by (H,X,Y) which satisfy the commutation relations[H,X]=2X ,[H,Y]=-2Y ,[X,Y]=H .As explained in <cit.>, the irreducible representation Γ_n of 𝔰𝔩(2,ℂ) (with n∈ℕ) can be decomposed asΓ_n=⊕_m=0^n V_n-2m .
Here the one-dimensional eigenspaces V_α are eigenspaces of H with weight α, i.e.H· v=α v ,∀ v∈ V_α ,while the operators X and Y map from one eigenspace to another,X: V_α⟶ V_α+2 ,andY: V_α⟶ V_α-2 ,as well as X· v=0 ,∀ v∈ V_n ,Y· w=0 ,∀ w∈ V_-n .Graphically, the structure of Γ_n (and the action of all generators) can be represented as follows: (figure: the chain of one-dimensional eigenspaces 0 ⟵ V_-n ⟵ V_-n+2 ⟵ ⋯ ⟵ V_n-2 ⟵ V_n, in which Y lowers the weight by 2, X raises it by 2, and H maps every V_α to itself.) Furthermore, for given n∈ℕ the irreducible representation Γ_n can be written as Γ_n=Sym^nV ,withV≅ℂ^2 .Explicitly, apart from the trivial representation Γ_0, we present the first few irreducible representations by specifying the weights of the underlying subspaces:Γ_1=V_-1⊕ V_1 ,Γ_2=V_-2⊕ V_0⊕ V_2 ,Γ_3=V_-3⊕ V_-1⊕ V_1⊕ V_3 ,Γ_4=V_-4⊕ V_-2⊕ V_0⊕ V_2⊕ V_4 .§.§ Irreducible Representations of 𝔰𝔩(3,ℂ)Following <cit.>, in order to describe the structure of representations of 𝔰𝔩(3,ℂ), we first recall the Cartan-Weyl decomposition𝔰𝔩(3,ℂ)≅𝔥⊕(⊕_α∈ S𝔤_α) ,where 𝔥 is the Cartan subalgebra, which is defined as𝔥={diag(c_1,c_2,c_3) | c_1,2,3∈ℂ and c_1+c_2+c_3=0} , along with its dual (with i=1,2,3)𝔥^*=Span_ℂ(L_1,L_2,L_3)/{L_1+L_2+L_3=0} ,withL_i (diag(c_1,c_2,c_3))=c_i .Furthermore we haveS={L_i-L_j|i,j=1,2,3 and i≠ j}⊂𝔥^* ,and the (one-dimensional) root-space 𝔤_L_i-L_j is generated by the 3× 3 matrix E_ij whose component (i,j) equals 1, while all other entries are zero.
While each H∈𝔥 maps each of the 𝔤_α into itself, for the adjoint action ad(X)(Y)=[X,Y] (with X∈𝔤_α and Y∈𝔤_β) we havead(𝔤_α): 𝔤_β⟶𝔤_α+β .As in the case of 𝔰𝔩(2,ℂ), this action can be represented graphically in the form of 'translations' <cit.>. Indeed, while the subspaces 𝔤_α can be graphically represented on a two-dimensional (hexagonal) lattice, the adjoint action of a given X∈𝔤_α acts through translation; e.g. for X∈𝔤_L_1-L_3 we have schematically: (figure: the six roots L_i-L_j of 𝔰𝔩(3,ℂ) arranged on a hexagonal lattice around the origin, together with the weights L_1, L_2, L_3; the adjoint action of X∈𝔤_L_1-L_3 translates every root space by L_1-L_3, and root spaces that would be shifted outside the diagram are annihilated.) Irreducible representations of 𝔰𝔩(3,ℂ) follow a similar pattern: indeed, as explained in <cit.>, for any two integers n,m∈ℕ there exists a finite dimensional irreducible representation Γ_n,m which enjoys a weight decomposition Γ_n,m=⊕ V_α.
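As a concrete illustration of the adjoint action just described, the short numpy sketch below (our own addition, not part of the original text) realises the root spaces 𝔤_L_i-L_j by the elementary matrices E_ij and checks a few 'translations' explicitly.

```python
import numpy as np

def E(i, j):
    """Elementary 3x3 matrix spanning the root space g_{L_i - L_j} (1-indexed)."""
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

H = np.diag([1.0, 0.0, -1.0])          # an element of the Cartan subalgebra h
ad = lambda X, Y: X @ Y - Y @ X        # adjoint action ad(X)(Y) = [X, Y]

# ad(g_alpha): g_beta -> g_{alpha+beta}, e.g. (L_1-L_2)+(L_2-L_3) = L_1-L_3:
assert np.allclose(ad(E(1, 2), E(2, 3)), E(1, 3))
# E_13 is an eigenvector of ad(H) with weight (L_1-L_3)(H) = 1 - (-1) = 2:
assert np.allclose(ad(H, E(1, 3)), 2 * E(1, 3))
# translating out of the diagram annihilates: (L_1-L_2)+(L_1-L_3) is not a root
assert np.allclose(ad(E(1, 2), E(1, 3)), np.zeros((3, 3)))
```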
The (one-dimensional) subspaces V_α are characterised through their weights and are created from the highest weight subspace V_nL_1-mL_3 through application of the generators E_2,1, E_3,1 and E_3,2. Apart from the trivial representation (m=n=0), we have the following weight diagrams for Γ_1,0≅ℂ^3 and its dual Γ_0,1: (figure: on the hexagonal lattice, Γ_1,0 has weights {L_1, L_2, L_3}, while Γ_0,1 has weights {-L_1, -L_2, -L_3}.) More generally, the weight diagram of a representation Γ_m,n for generic (m,n) consists of hexagons and triangles that are concentric to the origin. The hexagons have vertices at (m-i)L_1-(n-i)L_3 for i=0,…,min(m,n)-1, and the triangles have vertices at (m-n-3j)L_1 for j=0,…,⌊(m-n)/3⌋; e.g. for (m,n)=(2,4) we have: (figure: the weight diagram of Γ_2,4, consisting of two concentric hexagons and an inner triangle; weights with multiplicity greater than one are marked by circled dots.) The multiplicity (i.e. the dimension of the corresponding subspace of Γ_m,n) is (i+1) for the i-th hexagon and min(m,n)+1 for the triangles in the weight diagram. In the above picture we have indicated the higher multiplicity of certain weights by a circle around the corresponding dot. In the case of m=n (which is the most important for us) the diagram consists of concentric regular hexagons (while for m=0 or n=0 it consists of equilateral triangles), e.g.: Γ_1,1: (figure: a regular hexagon with vertices at the six roots L_i-L_j, plus one additional weight at the origin.) Γ_2,2: (figure: two concentric hexagons; the outer one has vertices at 2L_i-2L_j and edge midpoints at ±3L_i, the inner one has vertices at the roots L_i-L_j; the degenerate weights on the inner hexagon and at the origin are circled.)
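The ring-by-ring counting just described can be cross-checked against the Weyl dimension formula dim Γ_m,n=(m+1)(n+1)(m+n+2)/2 for 𝔰𝔩(3,ℂ). The Python sketch below is our own illustration; the point counts per ring, 3(p+q) for the hexagon with residual labels (p,q)=(m-i,n-i) and 3r for the triangular rings (stepping r down by 3), are assumptions spelled out in the comments rather than statements from the text.

```python
def dim_weyl(m, n):
    """Weyl dimension formula for the sl(3,C) irrep Gamma_{m,n}."""
    return (m + 1) * (n + 1) * (m + n + 2) // 2

def dim_rings(m, n):
    """Sum multiplicities ring by ring: the i-th hexagonal ring is assumed to
    contain 3*(p+q) points of multiplicity i+1, with (p,q) = (m-i, n-i); once
    p or q reaches zero the rings degenerate into triangles with 3*r boundary
    points (r stepping down by 3) of constant multiplicity min(m,n)+1."""
    total, i = 0, 0
    p, q = m, n
    while p > 0 and q > 0:                   # hexagonal rings
        total += 3 * (p + q) * (i + 1)
        p, q, i = p - 1, q - 1, i + 1
    mult = min(m, n) + 1                     # plateau multiplicity on triangles
    r = max(p, q)
    while r > 0:                             # triangular rings
        total += 3 * r * mult
        r -= 3
    if r == 0:
        total += mult                        # the weight at the origin
    return total

for m in range(5):
    for n in range(5):
        assert dim_rings(m, n) == dim_weyl(m, n), (m, n)
# e.g. Gamma_{1,1}: 6*1 + 2 = 8 and Gamma_{2,2}: 12*1 + 6*2 + 3 = 27, as expected.
```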
§.§ Integrable Representations of 𝔰𝔩̂(2,ℂ) In order to describe the partition functions of compact M-brane configurations we also need (certain) irreducible representations of 𝔰𝔩̂(2,ℂ) (i.e. the affine extension of 𝔰𝔩(2,ℂ)). In this appendix we give a brief review of the specific representations required and we refer the reader to <cit.> for a more rigorous and complete discussion of the representation theory of 𝔰𝔩̂(2,ℂ).The idea of constructing irreducible representations of affine algebras is similar to that for their finite counterparts: we start with a highest weight state λ̂ and repeatedly subtract the simple positive roots, which act as ladder operators. In the notation introduced in appendix <ref>, the latter can be written in the formα_0=[2,-2,1] ,andα_1=[-2,2,0] .However, we have to take care that the action of a single one of the two roots (i.e. either α_0 or α_1) creates a (finite dimensional) irreducible representation of 𝔰𝔩(2,ℂ) of the type explained in appendix <ref>, and in particular truncates after a finite number of steps. Starting with the highest weight state[n,n,0] for n∈ℕ(which is the case relevant for the discussion of partition functions of compact M-brane configurations in section <ref>; notice that the grade of the highest weight state has been chosen to be zero for convenience), we obtain the following weights at grade 0:[n,n,0] -α_1⟶ [n+2,n-2,0] -α_1⟶ [n+4,n-4,0] -α_1⟶ … -α_1⟶ [3n,-n,0] -α_1⟶ 0 .Notice that the weights {[n+2r,n-2r,0]|r=0,…, n} indeed form the irreducible representation Γ_n of 𝔰𝔩(2,ℂ). Similarly, acting with the root α_0 yields[n,n,0] -α_0⟶ [n-2,n+2,-1] -α_0⟶ [n-4,n+4,-2] -α_0⟶ … -α_0⟶ [-n,3n,-n] -α_0⟶ 0 , which equally forms the irreducible representation Γ_n. Acting with combinations of both roots generates all states of the [n,n] highest weight representation. In contrast to the irreducible representations of 𝔰𝔩(2,ℂ), the highest weight representation [n,n] is infinite dimensional.Specifically, for n=1 we obtain the weights shown in table <ref>. Here, the Weyl orbit of the weight [1,1,0] is coloured in red. For the highest weight state [2,2,0] we find table <ref>. Finally, repeating the analysis for the highest weight state [4,4,0] we find table <ref>.§ RECURSIVE RELATION FOR THE CONFIGURATION (N,M)=(2,1)In this appendix we provide more details on the recursive relation allowing one to determine the coefficients c_k^(n)(τ,ϵ) (introduced in (<ref>)) from c_k-1^(n)(τ,ϵ) through the action of an operator R_+ as in (<ref>). We also supply as an example the explicit coefficients for k=1,…,5 for generic n.We only discuss the action of R_+ on c^(n)_k for k≤⌈(n^2+1)/2⌉, since all other cases are determined through (<ref>). As explained in section <ref>, the coefficients c_k^(n)(τ,ϵ) are essentially determined through a set of partitions of integers μ(k,n)=(μ_1(k,n),…,μ_ℓ(k,n)) of length 0≤ℓ≤ k withμ_a(k,n)≥μ_a+1(k,n) ,andμ_1(k,n)+2∑_a=2^ℓμ_a(k,n)=2k ,for which c(μ(k,n))≠ 0.
These partitions μ(k,n) can be obtained iteratively in k: indeed, to obtain them we begin with a μ(k-1,n) (for which c(μ(k-1,n))≠ 0) and increase its entries by either 1 or 2 in one of the following fashions: * increase μ_1(k-1,n) by two,μ(k,n)=(μ_1(k-1,n)+2 ,μ_2(k-1,n) ,… ,μ_ℓ(k-1,n)) ;* increase one of the μ_a(k-1,n) (for a>1) by 1,μ(k,n)=(μ_1(k-1,n) ,μ_2(k-1,n) ,… ,μ_a(k-1,n)+1 ,… ,μ_ℓ(k-1,n)) ;* add 1 at the end of μ(k-1,n),μ(k,n)=(μ_1(k-1,n) ,μ_2(k-1,n) ,… ,μ_ℓ(k-1,n) ,1) . For each of the resulting μ(k,n)=(μ_1(k,n) ,… ,μ_ℓ(k,n)), the coefficients c(μ(k,n)) are computed as followsc(μ(k,n))={[ 2 if μ_a(k,n)-μ_a+1(k,n)≤ 2 ∀ a=1,…, ℓ-1, and μ_ℓ(k,n)≤ 2, and (μ_a(k,n)-μ_a+1(k,n)=1 for at least one a∈{1,…,ℓ-1}, or μ_ℓ(k,n)=1) ; 1 if μ_a(k,n)-μ_a+1(k,n)= 0 or 2 ∀ a=1,…, ℓ-1, and μ_ℓ(k,n)=2 ; 0 else ].To illustrate this procedure we can compute explicitly the first few steps of this iteration: * k=0: For k=0 the length of the partition is restricted by 0≤ℓ≤ 0, thus the only partition which may contribute is μ(0,n)=∅, for which c(∅)=1, thusc_0^(n)(τ,ϵ)=1 . * k=1: Starting from μ(0,n)=∅, following (<ref>) we have the only partition μ(1,n)=(2), for which c((2))=1. Furthermore, in order to satisfy the condition (<ref>) we need to choose r_1=2, such thatc_1^(n)(τ,ϵ)=θ(n)^2/θ(1)^2 .* k=2: Starting from μ(1,n)=(2), applying (<ref>) and (<ref>) we find two new partitions for k=2, namely μ(2,n)=(4) or (2,1). However, for the first possibility we have c((4))=0 since μ_1(2,n)=4>2, while for the second possibility we have c((2,1))=2. Finally, in order to compute c_2^(n)(τ,ϵ) we need to find non-negative even integers r_1,2 (≤ 4) that satisfy (<ref>), i.e.2n^2+2(n^2+1)-r_1-4r_2=4n^2-8⟹r_1=r_2=2 .Therefore we havec_2^(n)(τ,ϵ)=2 θ(n-1)θ(n)^2θ(n+1)/θ(1)^2θ(2)^2 .* k=3: Starting from the partition μ(2,n)=(2,1) we find with (<ref>), (<ref>) and (<ref>) three new partitions:μ(3,n)={[(4,1) with c((4,1))=0 ,;(2,2) with c((2,2))=1 ,;(2,1,1) with c((2,1,1))=2 . ].Here the coefficient c((4,1))=0 since μ_1-μ_2=4-1>2. Finally, in order to calculate the coefficients c_r^(n)(ϵ,τ), we still need to supplement each of the two remaining partitions by suitable non-negative even integers (r_1,r_2,r_3) (≤ 4) which need to satisfy (<ref>):(2,2):2n^2+4(n^2+1)-r_1-4r_2-9r_3=6(n^2-3)⟹(r_1,r_2,r_3)=(4,2) ,(2,1,1):2n^2+2(n^2+1)+2(n^2+4)-r_1-4r_2-9r_3=6(n^2-3)⟹(r_1,r_2,r_3)=(2,2,2) . Therefore, we find c_3^(n)(τ,ϵ)=θ(n-1)^2θ(n)^2θ(n+1)^2/θ(1)^4θ(3)^2+2θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)/θ(1)^2θ(2)^2θ(3)^2 . We can tabulate the contributing partitions and the associated data in the following manner:

k | μ(k,n)      | c(μ(k,n)) | r_i
0 | ∅           | 1 | (0)
1 | (2)         | 1 | (2)
2 | (2,1)       | 2 | (2,2)
3 | (2,2)       | 1 | (4,2)
3 | (2,1,1)     | 2 | (2,2,2)
4 | (4,2)       | 1 | (2,4,2,0)
4 | (2,2,1)     | 2 | (4,2,0,2)
4 | (2,1,1,1)   | 2 | (2,2,2,2)
5 | (2,2,2)     | 1 | (4,4,0,0,2)
5 | (4,2,1)     | 2 | (4,2,2,4,0)
5 | (2,2,1,1)   | 2 | (4,2,2,0,2)
5 | (2,1,1,1,1) | 2 | (2,2,2,2,2)

These give rise to the coefficients c_k^(n)(τ,ϵ) in (<ref>). We have furthermore checked that the algorithm described above correctly reproduces all coefficients c_k^(n)(τ,ϵ) up to k=8, and we therefore conjecture that it holds for generic k∈ℕ.
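Since the iteration and the selection rule above are fully combinatorial, they are easy to put into code. The following self-contained Python sketch (function and variable names are ours) implements the three moves and the coefficient c(μ) exactly as stated; we read the first move as creating μ_1=2 when applied to the empty partition, as the k=1 step above indicates. It reproduces the partitions and coefficients of the table above for k≤5.

```python
def coeff_c(mu):
    """c(mu) for a non-increasing tuple mu, following the rules stated above."""
    if not mu:
        return 1                                   # c(empty partition) = 1
    diffs = [mu[a] - mu[a + 1] for a in range(len(mu) - 1)]
    last = mu[-1]
    if any(d > 2 for d in diffs) or last > 2:
        return 0
    if 1 in diffs or last == 1:                    # some step equals 1, or mu_l = 1
        return 2
    if all(d in (0, 2) for d in diffs) and last == 2:
        return 1
    return 0

def partitions_mu(k):
    """Contributing partitions mu(k,n): apply the three moves k times,
    starting from the empty partition and keeping only c(mu) != 0."""
    level = {()}
    for _ in range(k):
        nxt = set()
        for mu in level:
            cands = [((mu[0] + 2,) + mu[1:]) if mu else (2,)]     # move 1
            cands += [mu[:a] + (mu[a] + 1,) + mu[a + 1:]
                      for a in range(1, len(mu))]                 # move 2
            if mu:                                                # move 3; on the
                cands.append(mu + (1,))                           # empty partition it
            for c in cands:                                       # would violate the
                non_incr = all(c[i] >= c[i + 1]                   # weight constraint
                               for i in range(len(c) - 1))
                if non_incr and coeff_c(c):
                    nxt.add(c)
        level = nxt
    return sorted(level)

for k in range(6):
    print(k, [(mu, coeff_c(mu)) for mu in partitions_mu(k)])
# k=3 -> [((2, 1, 1), 2), ((2, 2), 1)], k=4 -> (4,2),(2,2,1),(2,1,1,1), etc.
```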
§ EXPANSION COEFFICIENTSIn this appendix we tabulate some of the expansion coefficients that appear for various partition functions.§.§ Non-compact Brane Configuration (N,M)=(3,1)We list the first few coefficients ϕ^n_[c_1,c_2] appearing in the expansion (<ref>) of 𝒵^line_3,1(τ,t_f_1,t_f_2,m=nϵ,ϵ,-ϵ)ϕ^n_[n^2,n^2]=1 , ϕ^n_[n^2-2,n^2+1]=ϕ^n_[n^2+1,n^2-2]=θ(n)^2/θ(1)^2 , ϕ^n_[n^2-4,n^2+2]=ϕ^n_[n^2+2,n^2-4]=2 θ(n-1)θ(n)^2θ(n+1)/θ(1)^2θ(2)^2 , ϕ^n_[n^2-6,n^2+3]=ϕ^n_[n^2+3,n^2-6]=θ(n-1)^2θ(n)^2θ(n+1)^2/θ(1)^4θ(3)^2+2 θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)/θ(1)^2θ(2)^2θ(3)^2 , ϕ^n_[n^2-1,n^2-1]=θ(n-1)θ(n)^2θ(n+1)/θ(1)^4 , ϕ^n_[n^2-3,n^2]=ϕ^n_[n^2,n^2-3]=θ(n-2)θ(n)^3θ(n+1)^2+θ(n-1)^2θ(n)^3θ(n+2)/θ(1)^4θ(2)^2 , ϕ^n_[n^2-5,n^2+1]=ϕ^n_[n^2+1,n^2-5]=θ(n-2)θ(n-1)θ(n)^4θ(n+1)θ(n+2)/θ(1)^6θ(3)^2+θ(n-3)θ(n-1)θ(n)^3θ(n+1)^2θ(n+2)+θ(n-2)θ(n-1)^2θ(n)^3θ(n+1)θ(n+3)/θ(1)^4θ(2)^2θ(3)^2 , ϕ^n_[n^2-7,n^2+2]=ϕ^n_[n^2+2,n^2-7]=θ(n-2)θ(n-1)^2θ(n)^4θ(n+1)^2θ(n+2)/θ(1)^4θ(2)^4θ(3)^2+θ(n-3)θ(n-1)^2θ(n)^4θ(n+1)θ(n+2)^2+θ(n-2)^2θ(n-1)θ(n)^4θ(n+1)^2θ(n+3)/θ(1)^6θ(2)^2θ(4)^2+θ(n-4)θ(n-2)θ(n-1)θ(n)^3θ(n+1)^2θ(n+2)θ(n+3)/θ(1)^4θ(2)^2θ(3)^2θ(4)^2+θ(n-3)θ(n-2)θ(n-1)^2θ(n)^3θ(n+1)θ(n+2)θ(n+4)/θ(1)^4θ(2)^2θ(3)^2θ(4)^2 , ϕ^n_[n^2-9,n^2+3]=ϕ^n_[n^2+3,n^2-9]=θ(n-3)θ(n-2)θ(n-1)^2θ(n)^4θ(n+1)^2θ(n+2)θ(n+3)/θ(1)^6θ(2)^4θ(5)^2+θ(n-3)θ(n-1)^3θ(n)^4θ(n+1)^2θ(n+2)^2+θ(n-2)^2θ(n-1)^2θ(n)^4θ(n+1)^3θ(n+3)/θ(1)^6θ(2)^2θ(3)^2θ(4)^2+θ(n-4)θ(n-2)θ(n-1)^2θ(n)^4θ(n+1)θ(n+2)^2θ(n+3)/θ(1)^6θ(2)^2θ(3)^2θ(5)^2+θ(n-3)θ(n-2)^2θ(n-1)θ(n)^4θ(n+1)^2θ(n+2)θ(n+4)/θ(1)^6θ(2)^2θ(3)^2θ(5)^2+θ(n-5)θ(n-3)θ(n-2)θ(n-1)θ(n)^3θ(n+1)^2θ(n+2)θ(n+3)θ(n+4)/θ(1)^4θ(2)^2θ(3)^2θ(4)^2θ(5)^2+θ(n-4)θ(n-3)θ(n-2)θ(n-1)^2θ(n)^3θ(n+1)θ(n+2)θ(n+3)θ(n+5)/θ(1)^4θ(2)^2θ(3)^2θ(4)^2θ(5)^2 , ϕ^n_[n^2-2,n^2-2]=θ(n-3)θ(n)^4θ(n+1)^3+θ(n-1)^3θ(n)^4θ(n+3)/θ(1)^4θ(2)^4+2θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)/θ(1)^4θ(2)^4 , ϕ^n_[n^2-4,n^2-1]=ϕ^n_[n^2-1,n^2-4]=θ(n-3)θ(n-2)θ(n-1)θ(n)^3θ(n+1)^2θ(n+2)^2/θ(1)^4θ(2)^4θ(3)^2+θ(n-2)^2θ(n-1)^2θ(n)^3θ(n+1)θ(n+2)θ(n+3)/θ(1)^4θ(2)^4θ(3)^2+θ(n-4)θ(n-1)θ(n)^4θ(n+1)^3θ(n+2)+θ(n-2)θ(n-1)^3θ(n)^4θ(n+1)θ(n+4)/θ(1)^4θ(2)^4θ(3)^2+θ(n-3)θ(n-1)^2θ(n)^3θ(n+1)^3θ(n+2)+θ(n-2)θ(n-1)^3θ(n)^3θ(n+1)^2θ(n+3)/θ(1)^6θ(2)^2θ(3)^2 , ϕ^n_[n^2-3,n^2-3]=θ(n-3)θ(n-1)^4θ(n)^2θ(n+1)^4θ(n+3)/θ(1)^8θ(3)^4+2θ(n-3)θ(n-2)^2θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)^2θ(n+3)/θ(1)^4θ(2)^4θ(3)^4+θ(n-5)θ(n-1)^2θ(n)^4θ(n+1)^3θ(n+2)^2+θ(n-2)^2θ(n-1)^3θ(n)^4θ(n+1)^2θ(n+5)/θ(1)^4θ(2)^4θ(3)^4+2θ(n-4)θ(n-2)θ(n-1)θ(n)^4θ(n+1)^3θ(n+2)^2/θ(1)^6θ(2)^2θ(3)^4+2θ(n-2)^2θ(n-1)^3θ(n)^4θ(n+1)θ(n+2)θ(n+4)/θ(1)^6θ(2)^2θ(3)^4 . 
ϕ^n_[n^2-4,n^2-4]=θ(n-3)θ(n-2)^2θ(n-1)^3θ(n)^4θ(n+1)^3θ(n+2)^2θ(n+3)/θ(1)^4θ(2)^8θ(3)^4+2 θ(n-4)θ(n-3)θ(n-1)^3θ(n)^5θ(n+1)^3θ(n+2)^2θ(n+3)/θ(1)^6θ(2)^6θ(3)^2θ(4)^2+2 θ(n-3)θ(n-2)^2θ(n-1)^3θ(n)^5θ(n+1)^3θ(n+3)θ(n+4)/θ(1)^6θ(2)^6θ(3)^2θ(4)^2+θ(n-5)θ(n-2)^2θ(n-1)^2θ(n)^4θ(n+1)^4θ(n+2)^2θ(n+3)/θ(1)^8θ(2)^4θ(4)^4+θ(n-3)θ(n-2)^2θ(n-1)^4θ(n)^4θ(n+1)^2θ(n+2)^2θ(n+5)/θ(1)^8θ(2)^4θ(4)^4+2 θ(n-5)θ(n-4)θ(n-1)^2θ(n)^5θ(n+1)^4θ(n+2)^2θ(n+3)/θ(1)^4θ(2)^6θ(3)^4θ(4)^2+2 θ(n-3)θ(n-2)^2θ(n-1)^4θ(n)^5θ(n+1)^2θ(n+4)θ(n+5)/θ(1)^4θ(2)^6θ(3)^4θ(4)^2+2 θ(n-6)θ(n-3)θ(n-1)^3θ(n)^4θ(n+1)^3θ(n+2)^3θ(n+3)/θ(1)^6θ(2)^4θ(3)^2θ(4)^4+2 θ(n-3)θ(n-2)^3θ(n-1)^3θ(n)^4θ(n+1)^3θ(n+3)θ(n+6)/θ(1)^6θ(2)^4θ(3)^2θ(4)^4+2 θ(n-5)θ(n-3)θ(n-2)^2θ(n-1)θ(n)^4θ(n+1)^3θ(n+2)^2θ(n+3)^2/θ(1)^6θ(2)^4θ(3)^2θ(4)^4+2 θ(n-3)^2θ(n-2)^2θ(n-1)^3θ(n)^4θ(n+1)θ(n+2)^2θ(n+3)θ(n+5)/θ(1)^6θ(2)^4θ(3)^2θ(4)^4+θ(n-7)θ(n-2)^2θ(n-1)^2θ(n)^4θ(n+1)^3θ(n+2)^2θ(n+3)^2/θ(1)^4θ(2)^4θ(3)^4θ(4)^4+θ(n-3)^2θ(n-2)^2θ(n-1)^3θ(n)^4θ(n+1)^2θ(n+2)^2θ(n+7)/θ(1)^4θ(2)^4θ(3)^4θ(4)^4+2 θ(n-4)θ(n-2)^2θ(n-1)^4θ(n)^2θ(n+1)^4θ(n+2)^2θ(n+4)/θ(1)^8θ(2)^4θ(4)^4+2 θ(n-4)θ(n-3)^2θ(n-2)^2θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)^2θ(n+3)^2θ(n+4)/θ(1)^4θ(2)^4θ(3)^4θ(4)^4 .§.§ Compact Brane Configuration (N,M)=(2,1)We list the first few coefficients ϕ^n_[c_1,c_2,l] appearing in the expansion (<ref>) of 𝒵_2,1(τ,t_f_1,t_f_2,m=nϵ,ϵ,-ϵ):ϕ^n_[n^2,n^2,0]=1 ,ϕ^n_[n^2+2,n^2-2,0]=ϕ^n_[n^2-2,n^2+2,-1]=θ(n)^2/θ(1)^2 ,ϕ^n_[n^2+4,n^2-4,0]=ϕ^n_[n^2-4,n^2+4,-2]=2θ(n-1)θ(n)^2θ(n+1)/θ(1)^2θ(2)^2 ,ϕ^n_[n^2+6,n^2-6,0]=ϕ^n_[n^2-6,n^2+6,-3]=θ(n-1)^2θ(n)^2θ(n+1)^2/θ(1)^4θ(3)^2+2θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)/θ(1)^2θ(2)^2θ(3)^2 ,ϕ^n_[n^2+8,n^2-8,0]=ϕ^n_[n^2-8,n^2+8,-4]=θ(n-1)^2θ(n)^4θ(n+1)^2/θ(1)^2θ(2)^4θ(3)^2+2θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)/θ(1)^4θ(2)^2θ(4)^2+2θ(n-3)θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)θ(n+3)/θ(1)^2θ(2)^2θ(3)^2θ(4)^2 ,ϕ^n_[n^2+10,n^2-10,0]=θ(n-2)^2θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)^2/θ(1)^4θ(2)^4θ(5)^2+2θ(n-2)θ(n-1)^2θ(n)^4θ(n+1)^2θ(n+2)/θ(1)^4θ(2)^2θ(3)^2θ(4)^2+2θ(n-3)θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)θ(n+3)/θ(1)^4θ(2)^2θ(3)^2θ(5)^2+2θ(n-4)θ(n-3)θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)θ(n+3)θ(n+4)/θ(1)^2θ(2)^2θ(3)^2θ(4)^2θ(5)^2 , ϕ^n_[n^2,n^2,-1]=θ(n-1)^2θ(n+1)^2/θ(1)^4 ,ϕ^n_[n^2+2,n^2-2,-1]=ϕ^n_[n^2-2,n^2+2,-2]=2θ(n-2)θ(n-1)θ(n)^2θ(n+1)θ(n+2)/θ(1)^4θ(2)^2 ,ϕ^n_[n^2+4,n^2-4,-1]=ϕ^n_[n^2-4,n^2+4,-3]=θ(n-2)^2θ(n)^4θ(n+2)^2/θ(1)^6θ(3)^2+2θ(n-3)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+3)/θ(1)^4θ(2)^2θ(3)^2 ,ϕ^n_[n^2+6,n^2-6,-1]=ϕ^n_[n^2-6,n^2+6,-4]=θ(n-2)^2θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)^2/θ(1)^4θ(2)^4θ(3)^2+2θ(n-3)θ(n-2)θ(n-1)θ(n)^4θ(n+1)θ(n+2)θ(n+3)/θ(1)^6θ(2)^2θ(4)^2+2θ(n-4)θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)θ(n+4)/θ(1)^4θ(2)^2θ(3)^2θ(4)^2 , ϕ^n_[n^2,n^2,-2]=2θ(n-2)^2θ(n-1)^2θ(n+1)^2θ(n+2)^2/θ(1)^4θ(2)^4+2θ(n-3)θ(n-1)θ(n)^4θ(n+1)θ(n+3)/θ(1)^4θ(2)^4 ,ϕ^n_[n^2+2,n^2-2,-2]=ϕ^n_[n^2-2,n^2+2,-3]=2θ(n-4)θ(n-1)^2θ(n)^4θ(n+1)^2θ(n+4)/θ(1)^4θ(2)^4θ(3)^2+2θ(n-3)θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)θ(n+3)/θ(1)^6θ(2)^2θ(3)^2+2θ(n-3)θ(n-2)^2θ(n-1)θ(n)^2θ(n+1)θ(n+2)^2θ(n+3)/θ(1)^4θ(2)^4θ(3)^2 ,ϕ^n_[n^2+4,n^2-4,-2]=ϕ^n_[n^2-4,n^2+4,-4]=2θ(n-3)θ(n-2)^2θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)^2θ(n+3)/θ(1)^4θ(2)^6θ(3)^2+2θ(n-3)^2θ(n-2)θ(n-1)θ(n)^4θ(n+1)θ(n+2)θ(n+3)^2/θ(1)^6θ(2)^4θ(4)^2+2θ(n-4)θ(n-2)θ(n-1)^3θ(n)^2θ(n+1)^3θ(n+2)θ(n+4)/θ(1)^6θ(2)^4θ(4)^2+2θ(n-4)θ(n-3)θ(n-2)θ(n-1)^2θ(n)^2θ(n+1)^2θ(n+2)θ(n+3)θ(n+4)/θ(1)^4θ(2)^4θ(3)^2θ(4)^2+2θ(n-5)θ(n-2)θ(n-1)^2θ(n)^4θ(n+1)^2θ(n+2)θ(n+5)/θ(1)^4θ(2)^4θ(3)^2θ(4)^2 . § CONTRIBUTING PARTITIONS FOR (N,M)=(3,1) AND m=ϵTo analyse the restrictions on the sum over partitions in the case N=3, we consider two generic Young diagrams (ν_1,ν_2) and try
to restrict their forms by analysing their contributions to the partition function (<ref>): * contribution of the (ν_2,1-1)-th box in the first row of ν_2, i.e. (i_2,j_2)=(1,ν_2,1-1): for this box, we have z^(2)_1,ν_2,1-1=ϵ(ν_2,1-1-(ν_2,1-1))=0, therefore the Young diagram ν_2 is not allowed to have a second column, but is restricted to consist of a single column. * contribution of the last box in the second row of ν_1, i.e. (i_1,j_1)=(2,ν_1,2): we have v^(1)_2,ν_1,2=-ϵ(ν_1,2-2-ν_1,2+2)=0, therefore the Young diagram ν_1 cannot have a second row and is restricted to consist of a single row. * restrictions on the form of ν_1: here we have to distinguish four different possibilities * ν_1=∅: in this case we have no restriction on the form of ν_2. * ν_1=(1): in this case we have z^(1)_1,1=ϵ(1+ν_2,1^t-1-1), which restricts ν_2 to ν_2∈{∅, (1,1), (1,1,1), …} ,and in particular excludes (ν_1,ν_2)=((1),(1)). * ν_1=(2): in this case we have z^(1)_1,1=ϵ(2+ν_2,1^t-1-1) and z_1,2^(1)=ϵ(2-1-2)≠ 0, which restricts the form of ν_2 to ν_2∈{(1), (1,1), (1,1,1), …} ,which in particular excludes (ν_1,ν_2)=((2),∅). * ν_1 a single row of length ν_1,1≥ 3: in this case we consider the contribution of the box ν_1,1-1 in the first row of ν_1, i.e. (i_1,j_1)=(1,ν_1,1-1) with j_1≥ 2, for which we have z^(1)_1,ν_1,1-1=ϵ(ν_1,1-1-(ν_1,1-1))=0. Therefore, the length is restricted to ν_1,1<3.Summarising, we are left with the following three sets of configurations:(ν_1,ν_2)∈{(∅,∅) ,(∅,(1)) ,(∅,(1,1)) ,(∅,(1,1,1)) ,…} ,(ν_1,ν_2)∈{((1),∅) ,((1),(1,1)) ,((1),(1,1,1)) ,…} ,(ν_1,ν_2)∈{((2),(1)) ,((2),(1,1)) ,((2),(1,1,1)) ,…} .* restrictions on the form of ν_2: we can further constrain the three classes of contributions (<ref>) – (<ref>): * ν_1=∅ (see (<ref>)): in this case we consider the first box in the second row of ν_2, i.e. (i_2,j_2)=(2,1), which yields v^(2)_2,1=-ϵ(ν_2,1+0-2-1+2)=-ϵ(ν_2,1-1) and therefore only ν_2,2=0 contributes, which restricts (<ref>) to the following cases:(ν_1,ν_2)∈{(∅,∅) ,(∅,(1))} .* ν_1=(1) (see (<ref>)): the case ((1),∅) contributes to the partition function, while for the cases ν_2,2≠ 0 we consider the first box in the third row of ν_2, i.e. (i_2,j_2)=(3,1), which yields v^(2)_3,1=-ϵ(ν_2,1+1-3-1+2)=-ϵ(ν_2,1-1) and therefore only ν_2,3=0 contributes, which restricts (<ref>) to the following contributions:(ν_1,ν_2)∈{((1),∅) ,((1),(1,1)) } .* ν_1=(2) (see (<ref>)): in this case we consider the first box in the third row of ν_2, i.e. (i_2,j_2)=(3,1), which yields v^(2)_3,1=-ϵ(ν_2,1+1-3-1+2)=-ϵ(ν_2,1-1) and therefore only ν_2,3=0 contributes, which restricts (<ref>) to the following contributions:(ν_1,ν_2)∈{((2),(1)) ,((2),(1,1))} . Haghighat:2013gba B. Haghighat, A. Iqbal, C. Kozaz, G. Lockhart and C. Vafa, M-Strings, Commun. Math. Phys. 334 (2015) no.2, 779 doi:10.1007/s00220-014-2139-1 [arXiv:1305.6322 [hep-th]]. Haghighat:2013tka B. Haghighat, C. Kozcaz, G. Lockhart and C. Vafa, Orbifolds of M-strings, Phys. Rev. D 89 (2014) 4, 046003 [arXiv:1310.1185 [hep-th]]. Hohenegger:2013ala S. Hohenegger, A. Iqbal, M-strings, elliptic genera and N=4 string amplitudes, Fortsch. Phys. 62, 155 (2014), [arXiv:1310.1325 [hep-th]]. Haghighat:2015coa B. Haghighat, From strings in 6d to strings in 5d, JHEP 1601 (2016) 062 doi:10.1007/JHEP01(2016)062 [arXiv:1502.06645 [hep-th]].
Hohenegger:2015cba S. Hohenegger, A. Iqbal and S. J. Rey, M-strings, monopole strings, and modular forms, Phys. Rev. D 92 (2015) no.6,066005 doi:10.1103/PhysRevD.92.066005 [arXiv:1503.06983 [hep-th]]. Haghighat:2015ega B. Haghighat, S. Murthy, C. Vafa and S. Vandoren, F-Theory, Spinning Black Holes and Multi-string Branches, JHEP 1601 (2016) 009 doi:10.1007/JHEP01(2016)009 [arXiv:1509.00455 [hep-th]]. Ahmed:2017hfr A. Ahmed, S. Hohenegger, A. Iqbal and S. J. Rey, Bound States of Little Strings and Symmetric Orbifold CFTs, arXiv:1706.04425 [hep-th]. Indirect20 An incomplete list includes: E. Witten, Five-branes and M theory on an orbifold, Nucl. Phys. B 463 (1996) 383 [hep-th/9512219]∙N. Seiberg, New theories in six-dimensions and matrix description of M theory on T**5 and T**5 / Z(2), Phys. Lett. B 408 (1997) 98 [hep-th/9705221]∙O. Aharony, M. Berkooz, S. Kachru, N. Seiberg and E. Silverstein, Matrix description of interacting theories in six-dimensions, Adv. Theor. Math. Phys.1 (1998) 148 [hep-th/9707079] ∙O. Aharony, M. Berkooz and N. Seiberg, Light cone description of (2,0) superconformal theories in six-dimensions, Adv. Theor. Math. Phys.2 (1998) 119 [hep-th/9712117]∙R. G. Leigh and M. Rozali, The Large N limit of the (2,0) superconformal field theory, Phys. Lett. B 431 (1998) 311 [hep-th/9803068]∙O. Ganor and L. Motl, Equations of the (2,0) theory and knitted five-branes, JHEP 9805 (1998) 009 [hep-th/9803108]∙J. A. Harvey, R. Minasian and G. W. Moore, NonAbelian tensor multiplet anomalies, JHEP 9809 (1998) 004 [hep-th/9808060]∙K. A. Intriligator, Anomaly matching and a Hopf-Wess-Zumino term in 6d, N=(2,0) field theories, Nucl. Phys. B 581 (2000) 257 [hep-th/0001205] and references therein.Gaiotto L. Alday, D. Gaiotto, Y. Tachikawa, Liouville Correlation Functions from Four-dimensional Gauge Theories, Lett.Math.Phys.91:167-197,2010 doi: 10.1007/s11005-010-0369-5 arXiv:0906.3219 [hep-th].Dijkgraaf:2009pc R. Dijkgraaf and C. Vafa, Toda Theories, Matrix Models, Topological Strings, and N=2 Gauge Systems, arXiv:0909.2453 [hep-th]. Vafa:1996xn C. Vafa, Evidence for F theory, Nucl. Phys. B 469 (1996) 403 doi:10.1016/0550-3213(96)00172-1 [hep-th/9602022].Heckman:2013pva J. J. Heckman, D. R. Morrison and C. Vafa, On the Classification of 6D SCFTs and Generalized ADE Orbifolds, JHEP 1405 (2014) 028Erratum: [JHEP 1506 (2015) 017] doi:10.1007/JHEP06(2015)017, 10.1007/JHEP05(2014)028 [arXiv:1312.5746 [hep-th]].DelZotto:2014hpa M. Del Zotto, J. J. Heckman, A. Tomasiello and C. Vafa, 6d Conformal Matter, JHEP 1502 (2015) 054 doi:10.1007/JHEP02(2015)054 [arXiv:1407.6359 [hep-th]]. Heckman:2014qba J. J. Heckman, More on the Matter of 6D SCFTs, Phys. Lett. B 747 (2015) 73 doi:10.1016/j.physletb.2015.05.046 [arXiv:1408.0006 [hep-th]]. Haghighat:2014vxa B. Haghighat, A. Klemm, G. Lockhart and C. Vafa, Strings of Minimal 6d SCFTs, Fortsch. Phys.63 (2015) 294 doi:10.1002/prop.201500014 [arXiv:1412.3152 [hep-th]]. Heckman:2015bfa J. J. Heckman, D. R. Morrison, T. Rudelius and C. Vafa, Atomic Classification of 6D SCFTs, Fortsch. Phys.63 (2015) 468 doi:10.1002/prop.201500024 [arXiv:1502.05405 [hep-th]]. Choi:2017vtd K. S. Choi and S. J. Rey, E(lementary) Strings in Six-Dimensional Heterotic F-Theory, arXiv:1706.05353 [hep-th]. Aharony:1997bh O. Aharony, A. Hanany and B. Kol, Webs of (p,q) five-branes, five-dimensional field theories and grid diagrams, JHEP 9801 (1998) 002 doi:10.1088/1126-6708/1998/01/002 [hep-th/9710116].Moore:1997dj G. W. Moore, N. Nekrasov and S. 
Shatashvili, Integrating over Higgs branches, Commun. Math. Phys.209 (2000) 97 doi:10.1007/PL00005525 [hep-th/9712241]. Lossev:1997bz A. Lossev, N. Nekrasov and S. L. Shatashvili, Testing Seiberg-Witten solution, In *Cargese 1997, Strings, branes and dualities* 359-372 [hep-th/9801061]. Shabbir:2016nhp A. Iqbal and K. Shabbir, M-strings and transverse orbifold, Eur. Phys. J. C 76 (2016) no.11,611 doi:10.1140/epjc/s10052-016-4453-6 [arXiv:1609.09232 [hep-th]]. Bhardwaj:2015oru L. Bhardwaj, M. Del Zotto, J. J. Heckman, D. R. Morrison, T. Rudelius and C. Vafa, F-theory and the Classification of Little Strings, Phys. Rev. D 93 (2016) no.8,086002 doi:10.1103/PhysRevD.93.086002 [arXiv:1511.05565 [hep-th]]. Hohenegger:2015btj S. Hohenegger, A. Iqbal and S. J. Rey, Instanton-monopole correspondence from M-branes on 𝕊^1 and little string theory, Phys. Rev. D 93 (2016) no.6,066016 doi:10.1103/PhysRevD.93.066016 [arXiv:1511.02787 [hep-th]]. Hohenegger:2016eqy S. Hohenegger, A. Iqbal and S. J. Rey, Self-Duality and Self-Similarity of Little String Orbifolds, Phys. Rev. D 94 (2016) no.4,046006 doi:10.1103/PhysRevD.94.046006 [arXiv:1605.02591 [hep-th]].Hohenegger:2016yuv S. Hohenegger, A. Iqbal and S. J. Rey, Dual Little Strings from F-Theory and Flop Transitions, arXiv:1610.07916 [hep-th]. Witten:1995zh E. Witten, Some comments on string dynamics, in Proceedings of Strings '95: Future Perspectives in String Theory USC, Los Angeles, March 13-18, 1995, edited by I. Bars et.al. (World Scientific, Singapore, 1996), 501-523 [hep-th/9507121].Aspinwall:1997ye P. S. Aspinwall and D. R. Morrison, Point - like instantons on K3 orbifolds, Nucl. Phys. B 503 (1997) 533[hep-th/9705104]. Seiberg:1997zk N. Seiberg, New theories in six-dimensions and matrix description of M theory on T^5 and T^5 / Z_2, Phys. Lett. B 408 (1997) 98[hep-th/9705221]. Intriligator:1997dh K. A. Intriligator, New string theories in six-dimensions via branes at orbifold singularities, Adv. Theor. Math. Phys.1 (1998) 271 [hep-th/9708117]. Hanany:1997gh A. Hanany and A. Zaffaroni, Branes and six-dimensional supersymmetric theories, Nucl. Phys. B 529 (1998) 180[hep-th/9712145]. Brunner:1997gf I. Brunner and A. Karch, Branes at orbifolds versus Hanany Witten in six-dimensions, JHEP 9803 (1998) 003[hep-th/9712143].Aharony:1999ks O. Aharony, A Brief Review of 'Little String Theories', Class. Quant. Grav.17 (2000) 929 [hep-th/9911147]. Kutasov:2001uf D. Kutasov, Introduction to Little String Theory, in Superstrings and related matters. Proceedings, Spring School, Trieste, Italy, April 2-10, 2001, edited by C. Bachas et. al. (ICTP, Trieste, 2002) pp. 165-209, 2001. Harvey:2014cva J. A. Harvey, S. Murthy and C. Nazaroglu, ADE Double Scaled Little String Theories, Mock Modular Forms and Umbral Moonshine, JHEP 1505 (2015) 126 doi:10.1007/JHEP05(2015)126 [arXiv:1410.6174 [hep-th]]. Aganagic:1999fe M. Aganagic, A. Karch, D. Lust and A. Miemiec, Mirror symmetries for brane configurations and branes at singularities, Nucl. Phys. B 569 (2000) 277 doi:10.1016/S0550-3213(99)00608-2 [hep-th/9903093]. Aganagic:2003db M. Aganagic, A. Klemm, M. Marino and C. Vafa, The Topological vertex, Commun. Math. Phys.254 (2005) 425 doi:10.1007/s00220-004-1162-z [hep-th/0305132].Hollowood:2003cv T. J. Hollowood, A. Iqbal and C. Vafa, Matrix models, geometric engineering and elliptic genera, JHEP 0803 (2008) 069 doi:10.1088/1126-6708/2008/03/069 [hep-th/0310272].Iqbal:2007ii A. Iqbal, C. Kozcaz and C. 
Vafa, The Refined topological vertex, JHEP 0910 (2009) 069 doi:10.1088/1126-6708/2009/10/069 [hep-th/0701156].Nekrasov:2002qd N. A. Nekrasov, Seiberg-Witten prepotential from instanton counting, Adv. Theor. Math. Phys.7 (2003) no.5,831 doi:10.4310/ATMP.2003.v7.n5.a4 [hep-th/0206161]. Kimura:2015rgi T. Kimura and V. Pestun, Quiver W-algebras, arXiv:1512.08533 [hep-th]. Kimura:2016dys T. Kimura and V. Pestun, Quiver elliptic W-algebras, arXiv:1608.04651 [hep-th]. Kimura:2017auj T. Kimura, H. Mori and Y. Sugimoto, Refined geometric transition and qq-characters, arXiv:1705.03467 [hep-th].Kimura:2017hez T. Kimura and V. Pestun, Fractional Quiver W-algebras, arXiv:1705.04410 [hep-th]. Nekrasov:2015wsu N. Nekrasov, BPS/CFT correspondence: non-perturbative Dyson-Schwinger equations and qq-characters, JHEP 1603 (2016) 181 doi:10.1007/JHEP03(2016)181 [arXiv:1512.05388 [hep-th]]. Nekrasov:2016qym N. Nekrasov, BPS/CFT correspondence II: Instantons at crossroads, Moduli and Compactness Theorem, arXiv:1608.07272 [hep-th]. Nekrasov:2016ydq N. Nekrasov, BPS/CFT Correspondence III: Gauge Origami partition function and qq-characters, arXiv:1701.00189 [hep-th]. Kac V. Kac, Infinite dimensional Lie algebras. Cambridge University Press, 1990.DiFrancesco P. Di Francesco, P. Mathieu and D. Senechal, Conformal Field Theory, Springer, 1997 Persson:2010ms D. Persson, Arithmetic and Hyperbolic Structures in String Theory, arXiv:1001.3154 [hep-th].Fuchs Jurgen Fuchs, Affine Lie algebras and Quantum groups, Cambridge University Press, 1992 FultonHarris William Fulton and Joe Harris, Representation Theory: A First Course, Springer New York, ISBN: 978-3-540-00539-1.doi: 10.1007/978-1-4612-0979-9 ConwaySloane J.H. Conway, N.J.A. Sloane, Sphere Packings, Lattices and Groups, Springer Verlag 1988.
The Cosmic Ray (CR) physics has entered a new era driven by high precision measurements coming from direct detection (especially AMS-02 and PAMELA) and also from gamma-ray observations (Fermi-LAT). In this review we focus our attention on how such data impact the understanding of the supernova remnant paradigm for the origin of CRs. In particular we discuss advancements in the field concerning the three main stages of the CR life: the acceleration process, the escape from the sources and the propagation throughout the Galaxy. We show how the new data reveal a phenomenology richer than previously thought, which could even challenge the current understanding of CR origin.§ INTRODUCTIONUnderstanding the origin of Galactic cosmic rays (CR) is a puzzle consisting of three main pieces that must fit in with each other to produce a coherent picture and can be summarized as follows: 1) finding the main sources able to accelerate particles up to the highest energy we observe; 2) understanding how particles escape from their sources and are released into the interstellar medium (ISM); 3) understanding how particles propagate through the Galaxy before reaching the Earth.In this concise review I will outline some of the many recent observational findings that, to some extent, are challenging the current understanding of the Galactic CR origin. I will list those discoveries in connection to the three pillars outlined above.The main sources of Galactic CRs are thought to be the shocks produced by the explosion of SNe, mainly for energetic reasons, but also because the presence of accelerated particles is unequivocally inferred from the non-thermal radiation emitted by supernova remnants (SNR). The shock acceleration process is perhaps the most studied mechanism related to CR physics <cit.>. In  <ref> we will focus our attention mainly on the particle spectrum and the maximum energy predicted in the framework of Diffusive Shock Acceleration (DSA), showing how recent observations of SNRs in the γ-ray band are challenging the theoretical predictions.The final spectrum released into the Galaxy by SNRs is determined by the escaping process and could be significantly different from the spectrum accelerated at the shock. Compared to the acceleration phase, the process of particle escape has received much less attention, partially because of the theoretical difficulty in describing how particles can reach their maximum energy and how they propagate in the transition region between the shock and the undisturbed ISM, where the average level of Galactic CRs dominates over the flux coming from a single source. In addition there are observational difficulties in detecting signals from this transition region, with a noticeable exception: the detection of γ-ray emission from molecular clouds (MC) located close enough to SNRs. The interest in such kinds of systems has been renewed recently thanks to the measurements of the ionization degree in these same clouds, which can provide information on the CR spectrum below the threshold for γ-ray production, paving the way to the study of the CR spectrum from MeV up to TeV energies, and possibly beyond.
This argument will be discussed in  <ref>.Finally, the study of the propagation mechanism through the Galaxy is receiving a lot of attention in recent years, thanks to the impressive amount of data coming from direct measurements of the CR flux and to information coming from the diffuse Galactic γ-ray background. The interpretation of these new data probably requires a revision of the current understanding of the propagation process and could even shed light on the role that CRs have in the evolution of the Galaxy. We will discuss such topics in  <ref>.§ ACCELERATION PHASE §.§ Gamma-ray emission from SNRsIt is beyond any doubt that SNRs can accelerate CRs. The question is rather which is the total amount of energy channeled into relativistic particles and which is the final spectrum injected into the ISM. γ-ray observations provide a privileged tool to answer these questions, allowing one to directly infer the properties of accelerated hadrons, something that cannot be done at any other wavelength.It is well known that the γ-ray emission can be produced either by hadronic processes (π^0 decay from hadronic collisions) or by leptonic ones (inverse Compton – IC), and often it is not easy to distinguish between the two processes. On general grounds, the detection of hadronic emission would favor the highly efficient acceleration scenario, while the leptonic interpretation forces one to assume a quite low acceleration efficiency. This dichotomy is mainly due to the different magnetic field strengths required: the IC scenario usually needs a very low magnetic field (∼ 10 μG, comparable with the ISM value)in order to simultaneously account for radio, X-ray and γ-ray emission. On the contrary, hadronic scenarios require much larger values, of the order of a few hundred μG, that cannot result from the simple compression of the interstellar magnetic field, but require some sort of amplification. Indeed, the magnetic field amplification is itself thought to be a manifestation of an efficient acceleration process (see  <ref>).In real cases it is not always easy to distinguish between leptonic and hadronic scenarios. The case of RX J1713 is very significant in this respect. This remnant has been considered for a long time the best candidate for an efficient acceleration scenario, mainly due to its high γ-ray luminosity. The detection of γ-ray emission in the range [1-300 GeV] by the Fermi-LAT satellite has revealed an unexpectedly hard spectrum which, at first glance, seems to be more in agreement with a leptonic scenario.Nevertheless, a deeper analysis shows that neither the hadronic nor the leptonic scenarios, taken in their simplest form, can unequivocally explain the observations <cit.>, hence the issue remains open.From a theoretical point of view the diffusive shock acceleration (DSA) predicts a power law in momentum ∝ p^-4 (which translates into an energy spectrum ∝ E^-2 for relativistic energies). A remarkable property of DSA is that this prediction turns out to be very solid: even when the complex chain of non-linear effects is taken into account, the deviations from a straight power law are very mild.On the other hand, observations show a much more varied phenomenology. The spectra inferred from γ-ray observations range from E^-2.8 up to E^-1.5, with an average slope ≈ -2.2÷ -2.3. Some authors have suggested the presence of a general trend of the γ-ray spectrum with the age <cit.>.
The idea is summarized in Figure <ref>: very young SNRs (still in the free expansion phase) show a slightly steep power law with index ≈ E^-2.2÷ E^-2.4 (like Tycho and Cas A); young SNRs (t ≈ 1000-2000 yr) show a quite hard spectrum more easily explained in terms of leptonic emission (but see the caveat above); middle-aged SNRs show spectra of undoubtedly hadronic origin[Two middle-aged SNRs, IC443 and W44, also show the characteristic pion bump around ≈ 280 MeV, an unmistakable signature of the hadronic origin of γ-rays <cit.>.], steeper than ∝ E^-2.5. Moreover the first two categories have been detected up to tens of TeV, implying a maximum energy of primary protons up to ≲ 100 TeV (maybe one order of magnitude less for Tycho and Cas A), while the middle-aged SNRs have been detected only up to much lower energies. Nevertheless, it is hard to say whether such a trend is intrinsic to the SNR evolution or whether it is rather due to environmental effects. It has been shown, for example, that the presence of a non-negligible fraction of neutral hydrogen can significantly modify the shock structure through the processes of charge exchange and ionization, resulting in spectra steeper than E^-2 <cit.>.Another complication may be produced by the presence of dense clumps in the CSM <cit.>. In this case, when the shock crosses the clumps, hydrodynamical instabilities can amplify the magnetic field all around the clump, making it difficult for particles at low energies to penetrate inside the clump compared to the most energetic ones. The final result would be a γ-ray spectrum harder than the parent proton spectrum.Finally, steeper spectra may also result from a more subtle aspect of the non-linear DSA theory. If the magnetic field is efficiently amplified, the speed of the magnetic turbulence could be enhanced up to a non-negligible fraction of the shock speed, hence reducing the effective compression factor felt by accelerated particles <cit.>. Unfortunately such a mechanism depends on plasma conditions which are very difficult to test.§.§ Maximum energy and magnetic field amplification The maximum energy required to explain the CR spectrum observed at Earth is around a few PeV for protons, and it scales with the nuclear charge for heavier nuclei.In the framework of DSA applied to SNRs, there is a general consensus on the fact that the maximum energy is reached before the beginning of the Sedov-Taylor (ST) stage. The reason is that particles diffusing ahead of the shock will be caught up by the shock only if the shock is moving faster than the particles. The average displacement of a particle diffusing ahead of the shock is (D t)^1/2 while the shock radius moves like R_s(t) ∝ t^α, hence particles will be recaptured only if α > 1/2. But in the ST phase α=2/5, hence particles can escape, while in the free expansion phase α (≲ 1) > 1/2 (the exact value of α depends slightly on the velocity profile of the ejecta and on the density profile of the circumstellar medium). This reasoning becomes more complicated if we try to account for the time dependence of the magnetic field amplification but, because we do expect the amplification to increase with increasing shock speed, the general feeling is that the maximum energy should stop increasing even before the end of the free expansion phase.
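The displacement argument above can be made concrete with a two-line symbolic check (our own illustration, not part of the original text): the diffusive displacement (Dt)^1/2 outruns the shock radius t^α at late times exactly when α < 1/2.

```python
import sympy as sp

t, D = sp.symbols('t D', positive=True)
alpha = sp.Rational(2, 5)                        # Sedov-Taylor expansion index
print(sp.limit(sp.sqrt(D * t) / t**alpha, t, sp.oo))   # -> oo: particles escape

alpha = sp.Rational(9, 10)                       # a typical ejecta-dominated index
print(sp.limit(sp.sqrt(D * t) / t**alpha, t, sp.oo))   # -> 0: recaptured
```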
Now, the Sedov-Taylor phase begins at t_ ST≈ 50 (M_ ej/M_⊙)^5/6(E_ SN/10^51 erg)^-1/2(n_ ISM/cm^-3)^-1/3 yr, which, for typical values of the parameters, ranges between 50 and 200 years, while the acceleration time is given by t_acc≈ 8 D/u_sh^2. In the framework of linear DSA, the diffusion coefficient is given by D = r_L v /(3 ℱ), where r_L is the Larmor radius of particles and v their speed, while ℱ(k) is the logarithmic power spectrum of magnetic waves with wave-number k.Equating the acceleration time with t_ ST we get the following estimate for the maximum energy:E_max = 5 × 10^13Z ℱ(k_min)( B_0/μ G) ( M_ ej/M_⊙)^-1/6( E_ SN/10^51 erg)^1/2( n_ ISM/ cm^-3)^-1/3eV ,where k_min = 1/r_L(E_max) is the wave number resonant with particles at the maximum energy. We notice that more realistic estimates of the maximum energy (for example accounting for the fact that the shock speed is slightly decreasing also during the ejecta-dominated phase) usually return somewhat lower values. Eq. (<ref>) depends only weakly on the environmental parameters, while it depends strongly on the level of magnetic turbulence. As a consequence, the maximum energy of protons could reach a few PeV only if ℱ(k_min) ≫ 1, namely the magnetic turbulence at the scale of r_L(E_max) must be much larger than the pre-existing field, i.e. δ B ≫ B_0. Clearly, if this condition were realized, the linear theory used to derive the diffusion coefficient would not hold anymore.Apart from that, the value of turbulence in the ISM at the scales relevant here is δ B/B_0 ≲ 10^-4 <cit.>, hence, in the absence of any mechanism able to amplify the magnetic turbulence, SNR shocks could accelerate protons only up to the irrelevant energy of a few GeV. This puzzle has been partially solved by the idea that the accelerated particles themselves can amplify the magnetic field upstream through the resonant streaming instability while they try to diffuse far away from the shock <cit.>. Nevertheless the resonant instability can only produce δ B ≲ B_0, i.e. ℱ≲ 1, resulting in a maximum energy for protons of 10-100 TeV. Hence more effort is needed to fill up the last decade of energy to reach the PeV. The solution to this conundrum probably resides in other types of instabilities that CRs can excite, the most promising one being the non-resonant Bell instability <cit.>. This instability results from the j×B_0 force that the current due to escaping particles exerts on the plasma, and it grows very rapidly for high Mach number shocks. However, the scales that get excited are very small compared with the gyration radii of accelerated particles. Hence, it is not clear if the highest energy particles can be efficiently scattered. Indeed, hybrid simulations seem to confirm that the non-resonant Bell instability grows much faster than the resonant one for Mach numbers ≳ 30 and produces ℱ≫1 <cit.>. The same simulations also show that the instability produces a complex filamentary structure which could be able to scatter particles efficiently.The efficiency of the Bell instability is determined by the strength of the return current that balances the current of escaping CRs which, in turn, depends on the density of the circumstellar medium and on the shock speed. Indeed, <cit.> showed that the conditions to reach PeV energies are rather special and can be obtained only during the first few weeks or decades following the explosion of stars occurring in dense circumstellar winds. On the contrary, type Ia SNe, occurring in a less dense ISM, should be able to produce only ∼ 100 TeV protons.
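To make these numbers concrete, the short script below (our own illustration; function names and fiducial values are our choices) evaluates the Sedov-Taylor onset and the maximum-energy estimate of Eq. (<ref>) for typical parameters.

```python
def t_ST_yr(M_ej=1.0, E_51=1.0, n=1.0):
    """Sedov-Taylor onset in years; M_ej in solar masses, E_51 in units of
    1e51 erg, n in cm^-3."""
    return 50.0 * M_ej**(5.0 / 6.0) * E_51**(-0.5) * n**(-1.0 / 3.0)

def E_max_eV(F=1.0, B0_muG=3.0, M_ej=1.0, E_51=1.0, n=1.0, Z=1):
    """Maximum energy in eV from equating t_acc = 8 D / u_sh^2 with t_ST,
    with F the logarithmic turbulence power at k_min and B0 in microgauss."""
    return 5e13 * Z * F * B0_muG * M_ej**(-1.0 / 6.0) * E_51**0.5 * n**(-1.0 / 3.0)

print(t_ST_yr())              # -> 50 yr for fiducial values
print(E_max_eV(F=1.0))        # -> ~1.5e14 eV: far below the PeV for F ~ 1
print(E_max_eV(F=20.0))       # -> ~3e15 eV: the PeV requires F >> 1
```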
A similar conclusion on the difficulty of reaching PeV energies has been reached by <cit.>. If this scenario is confirmed, it would not be surprising that we have not yet detected a single SNR acting as a PeVatron. The chances of such a discovery will surely be enhanced by the forthcoming Cherenkov facilities (CTA).

We conclude this section by recalling that one of the most remarkable findings of the last decade is that large magnetic field strengths, much larger than the average Galactic one, have been inferred in almost all young SNRs, through the observation of thin X-ray filaments at the forward shocks <cit.>. This provides strong support for the presence of some amplification mechanism. One should keep in mind, however, that magnetic amplification can increase E_max only if it occurs both upstream and downstream of the shock; otherwise particles could escape from one side or the other. Having magnetic amplification downstream only is quite an easy task: in fact, the shocked plasma is usually highly turbulent, and hydrodynamical instabilities can trigger the amplification, converting a fraction of the turbulent motion into magnetic energy, as shown by <cit.>. Conversely, there is no reason, in general, to assume that the plasma into which a SNR expands is highly turbulent to start with. Indeed, evidence that the amplification also occurs upstream has been collected from X-ray observations of SNR shocks <cit.> (see also Figure <ref>), confirming that CRs are the main agent driving such a process.

§ ESCAPING FROM THE SOURCES

In the test-particle picture of DSA, accelerated particles are advected downstream of the shock and will be confined in the interior of the SNR until the shock disappears and the SNR merges into the ISM. At that point particles will be released into the ISM, but they would have lost part of their energy because of the adiabatic expansion of the remnant: hence the requirements in terms of maximum energy at the source would be even more severe than they already are. Therefore, effective escape from upstream, while the acceleration is still ongoing, is fundamental if high energy particles are to be released into the ISM.

How particles escape from a SNR shock is not yet completely understood, the reason being the uncertainties related to how particles reach the maximum energies (a careful description of the numerous problems involved can be found in <cit.>). Below we just describe the general framework. Let us assume that the maximum momentum, p_max,0, is indeed reached at the beginning of the ST phase, as discussed in <ref>, and that afterwards it drops with time as p_max(t) ∝ (t/t_ST)^-β, with β > 0. The energy in the escaping particles of momentum p is

4π f_esc(p) pc p^2 dp = ξ_esc(t) (1/2) ρ u_sh^3 4π R_sh^2 dt,

where ξ_esc(t) is the fraction of the incoming energy flux, (1/2) ρ u_sh^3 4π R_sh^2, that is converted into escaping flux. If the expansion occurs in a homogeneous medium with R_sh ∝ t^α and u_sh ∝ t^α-1, then, since dt/dp ∝ t/p, from Eq. (<ref>) we have:

f_esc(p) ∝ p^-4 t^5α-2 ξ_esc(t).

It follows that in the ST phase, where α = 2/5, the spectrum released into the ISM is f_esc(p) ∝ p^-4 only if ξ_esc remains constant with time. It is worth stressing that this p^-4 has nothing to do with the standard result of DSA in the test-particle regime. Neither does it depend on the detailed time evolution of the maximum momentum. It solely depends on having assumed that particles escape the SNR during the adiabatic phase.
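The time-integration argument above can be checked with a toy computation. The sketch below (an illustration under the stated assumptions: power-law expansion R_sh ∝ t^α, constant ξ_esc, and p_max(t) = (t/t_ST)^-β, all in arbitrary units) recovers the slope -4 exactly for the ST value α = 2/5, and shows that other values of α would distort it:

import numpy as np

beta = 2.0
t = np.logspace(0, 3, 400)            # time in units of t_ST
p = t**(-beta)                        # momentum escaping at time t
for alpha in (2.0 / 5.0, 0.5):
    # Energy balance gives f_esc ∝ p^-4 * t^(5*alpha - 2) * xi_esc
    f_esc = p**(-4) * t**(5 * alpha - 2)
    slope = np.polyfit(np.log(p), np.log(f_esc), 1)[0]
    print(f"alpha = {alpha:.2f}:  fitted slope = {slope:.3f}")
# alpha = 2/5 (Sedov-Taylor) -> slope exactly -4; alpha = 0.5 -> -4.25.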
Notice also that in more realistic calculations ξ_esc should decrease with time, leading to a spectrum of escaping particles which is even harder than p^-4 <cit.>. On the other hand, the total spectrum of particles injected into the ISM by an individual SNR is the sum of two populations: particles escaping during the acceleration, and particles released after the shock dissipates. The latter component is not modified by the escape process and should closely reflect the accelerated spectrum. An interesting byproduct of this scenario is that the intersection of the two populations may tentatively be identified with the change in spectral index at ∼ 300 GeV detected by PAMELA and AMS-02 <cit.>. We will discuss this feature in more detail in the next section.

A further complication arises when one tries to account for the environment in which the SNR expands. While type Ia SNe are thought to expand into a fairly homogeneous ISM, core-collapse SNe expand into the wind/bubble system generated by the progenitor stars. In such a situation the remnant evolution differs significantly from the classical ST solution and could result in CR spectra different from the standard p^-4. In spite of such complications, it seems that when one sums the contributions of CRs from different types of SNRs, the final result approaches p^-4 <cit.>. Nevertheless, one should take this result with caution, in that it is obtained by extrapolating the effect of the streaming instability calculated in the linear regime to the non-linear one, and by neglecting the more complicated effect of the non-resonant instability, which should play a major role when the acceleration efficiency is high. In conclusion, the theory does not yet provide a solid answer to the question of escape from the sources. Therefore, it is of the utmost importance to look for possible observational constraints that can help to understand this process.

MC-SNR associations. A possible way to investigate the escape process is to look for γ-ray emission from MC-SNR associations. When MCs are located close enough to a SNR, escaping CRs can "illuminate" the cloud, producing γ-rays through interactions with the cloud gas. The resulting γ-rays can provide information on the escaping CR spectrum, but also on the diffusion coefficient in the region between the SNR and the MC, which determines how fast particles escape <cit.>. MC-SNR systems could also be used to shed light on the lowest energy part of the CR spectrum, namely below the pion production threshold of ∼ 280 MeV, allowing us to test models of CR acceleration and propagation over an energy interval spanning from the MeV to the TeV domain. Such an impressive result can be achieved by looking at the ionization level of MCs <cit.>. In fact, even though CRs with energy below ∼ 280 MeV carry a subdominant fraction of the bulk energy of CRs, they nevertheless play a pivotal role in regulating the properties of the interstellar medium, by ionizing and heating the gas, and thus driving interstellar chemistry in MCs, where ionizing photons cannot penetrate <cit.>. Moreover, being the main regulators of the ionization fraction inside MCs, low energy CRs set the level of coupling between the magnetic field and the gas, and thus influence the process of formation of stars and planets. Hence, finding observational ways to trace them and quantify these effects is of prime interest.
A remarkable result has already been achieved by studying, among others, the MC complex around the SNR W28, where the MC closest to the SNR shows a strong enhancement of the ionization rate <cit.>, a signature of the fact that low energy CRs have reached only the closest MC, while high energy CRs have also reached the more distant MCs, producing the characteristic γ-ray emission.

§ PROPAGATION THROUGH THE GALAXY

After leaving the near-source region, CRs start their journey across the Galaxy and eventually escape from the Galactic environment. In the standard picture used to describe Galactic propagation, CRs are produced by sources located in the thin Galactic disc and then diffuse in a larger volume, where the presence of magnetic turbulence is thought to scatter CRs efficiently. Beyond such a magnetic halo, the turbulence is assumed to vanish, so that particles can escape freely into intergalactic space and the CR density drops to ≈ 0 <cit.>. The presence of a magnetic halo with a thickness H much larger than the disc thickness, h, is required by the estimate of the residence time of CRs in the Galaxy which, assuming a diffusion coefficient D(E) spatially constant across the whole halo, is given by the simple expression τ_esc = H^2/D(E). This quantity can be derived from the flux of radioactive nuclei present in the CR spectrum (like ^10Be, ^26Al and ^36Cl, see <cit.>) plus the ratio of secondary to primary (stable) nuclei (among which the one measured with the best accuracy is B/C). Combining this information, one gets a halo size H ≈ 3÷5 kpc and a diffusion coefficient D(R) ≃ 1÷5 × 10^28 (R/GV)^δ cm^2 s^-1, with δ ≈ 1/3, in good agreement with the Kolmogorov theory of turbulence <cit.>. These values give τ_esc ≈ 150 Myr for rigidities around 1 GV[Notice that the residence time estimated by <cit.> using radioactive nuclei is only ∼ 15 Myr. This result is obtained in the framework of the leaky-box model and is, to some extent, misleading, in that it does not account for the spatial extension of the halo. It is known that when the decay time of unstable nuclei is smaller than the residence time, the leaky-box model gives incorrect results <cit.>.].

An independent indication of the existence of a magnetic halo comes from observations in the radio band, which point towards the presence of diffuse synchrotron emission, revealing the presence of electrons and magnetic field above and below the Galactic plane <cit.>. The determination of the halo size in this case is not an easy task, partially due to the still unknown structure of the Galactic magnetic field. Nevertheless, comparison with numerical models of the CR electron distribution favors a halo size of ∼ 10 kpc <cit.>, while a size ≲ 2 kpc seems to be strongly disfavored <cit.>. It is worth mentioning that radio halos with a similar extension have been observed in other edge-on spiral galaxies (e.g. NGC 4631, NGC 891).

Nevertheless, the halo model, at least in its basic version, does not provide a coherent physical picture. The value of H required to explain the data is not well understood: it should be consistently determined by the spatial extension of the magnetic turbulence produced inside the Galactic disc (mainly by SN explosions), rather than used as a fit parameter. Moreover, it is difficult to imagine a sharp boundary where the particle transport abruptly changes from a diffusive to a free-streaming regime, and it is also unclear whether such a boundary should be the same at all energies, as naively assumed.
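To fix orders of magnitude for the halo picture just described, the sketch below plugs representative values into τ_esc = H^2/D(E) (the choices H = 4 kpc, D_0 = 2×10^28 cm^2/s and δ = 1/3 are assumptions picked from within the ranges quoted above):

KPC_CM = 3.086e21
YR_S = 3.156e7

H = 4.0 * KPC_CM          # halo half-thickness in cm
D0, delta = 2e28, 1.0 / 3.0

def tau_esc_Myr(R_GV):
    """Residence time H^2 / D(R) in Myr for rigidity R in GV."""
    return H**2 / (D0 * R_GV**delta) / (YR_S * 1e6)

for R in (1.0, 10.0, 100.0):
    print(f"R = {R:6.1f} GV  ->  tau_esc ~ {tau_esc_Myr(R):7.1f} Myr")
# R = 1 GV gives ~240 Myr, the same order as the ~150 Myr quoted in the text.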
Finally, the assumption that the diffusion coefficient is the same everywhere in the halo is also a strong simplification. The above criticisms are strengthened by the recent discovery of several anomalies, both in the CR spectrum measured at Earth and in the diffuse Galactic γ-ray emission, which suggest a more complex situation than the toy model depicted above. A detailed discussion of all these aspects can be found in <cit.>, while here we restrict our attention to two main results.

We start by considering the anomalies in the local CR spectrum. Recent measurements carried out by PAMELA and AMS-02 have revealed that the spectra of protons and helium are characterized by a spectral hardening above ∼ 300 GV <cit.>. Based on preliminary results by AMS-02, shown by <cit.>, a similar break might also be present in the spectra of heavier nuclei, as shown in Fig. <ref>. In standard approaches to CR transport, these findings can be accommodated by imposing breaks in either the diffusion coefficient or the injection spectrum. We already discussed in the previous section a possible way to produce a break in the injected spectrum, resulting from the superposition of two components: one due to particles escaping during the acceleration process and the other due to particles released after the SNR dissolves. Nevertheless, it is not clear whether such a model remains viable once the integration over many SNRs is taken into account.

Much attention has been paid to a possible modification of the transport process in the Galaxy. <cit.>, and more recently <cit.>, proposed a scenario where the diffusion coefficient changes with the distance from the Galactic plane. In particular, <cit.> discussed a two-zone halo model, where the dependence of the diffusion coefficient on the particle's energy is harder in the outer halo and softer in the inner one. Such a model can explain the observed break in the CR spectrum well, but it remains a purely phenomenological description, lacking a deeper physical explanation of the halo structure. A more physically motivated scenario has been put forward by <cit.> and <cit.>, based on the idea that the magnetic turbulence could have two distinct components: one self-generated by the CRs themselves through the streaming instability, and one resulting from the Kolmogorov cascade of turbulence injected at larger scales (presumably by SN explosions). In this scenario the diffusion is dominated at low energies by self-generated waves, while at higher energies, where the CR energy density becomes too low to generate a significant amount of waves, the Kolmogorov turbulence dominates; the spectral break in the CR spectrum should mark the energy where this transition occurs.

One should keep in mind, however, that direct measurements can only probe the local CR spectrum, hence the question of whether such a spectrum reflects some peculiarity of the local environment, rather than being representative of the average Galactic CR spectrum, remains open. Relevant information on the CR distribution in the rest of the Galaxy can be obtained by looking at the diffuse γ-ray emission, which is thought to originate mainly from the decay of π^0 produced by collisions of CRs with the interstellar gas. Recent results presented by the Fermi-LAT collaboration <cit.>, and independently by <cit.>, show a substantial variation of the CR spectrum as a function of the distance from the Galactic Center.
The spatial distribution of the CR density in the outer Galaxy appears to be weakly dependent on the galactocentric distance, as found in previous studies as well, while the density in the central region of the Galaxy was found to exceed the value measured in the outer Galaxy. At the same time, Fermi-LAT data suggest a gradual spectral softening while moving outward from the center of the Galaxy to its outskirts, with a slope ranging from 2.6, at a distance of ∼ 3 kpc, to 2.9 in the external regions. These findings represent a challenge for standard calculations of CR propagation, which assume a uniform diffusion coefficient within the Galactic volume, and may point towards a spatial dependence of the CR transport properties, as already discussed by <cit.> and <cit.>. Indeed, this behavior could reflect, once again, the presence of self-generated turbulence, which should dominate the particle scattering at low energies, the ones probed by Fermi-LAT. <cit.>, indeed, shows that the combination of self-generated turbulence plus advection due to the wave motion at the Alfvén speed can account for both the spatial CR density and the spectral slope (see Figure <ref>). This result depends, to some extent, on the assumed structure of the Galactic magnetic field, which determines the effectiveness of the streaming instability.

Galactic winds. We conclude this section by discussing an interesting development of CR transport models that may have a large impact on the overall Galaxy evolution. Many galaxies produce a large-scale wind that may affect star formation, through the regulation of the amount of gas available <cit.>, and inject hot gas into the galactic halo. Signatures of the presence of such hot gas have also been presented for the Milky Way, through the detection of absorption lines and continuum emission in the X-ray band <cit.>. Galactic winds are thought to be powered by either SN explosions or starburst radiation <cit.>, but in our Galaxy the power injected by SNe and stars is too small to drive a large-scale wind. On the other hand, the escape of CRs from the Galaxy inevitably leads to a gradient in the CR pressure that acts as a force on the background plasma, in the direction opposite to the gravitational pull <cit.>. This force, added to the thermal pressure gradient, may contribute to launching a galactic wind if the gas can be accelerated to supersonic speeds; otherwise the material can be lifted up and fall back in what are known as Galactic fountains.

The existence of a Galactic wind would significantly change the transport properties of CRs. The main change is due to the presence of an advection speed, u_adv(z), which increases moving from the Galactic plane towards larger distances. As a consequence, above some distance, s_*(E), given by the condition

s_*^2/D(E) ≈ s_*/u_adv(s_*),

the advection will dominate over diffusion and particles will be unable to diffuse back into the Galaxy. In other words, this distance s_* represents an effective energy-dependent boundary which replaces the role of the halo size H. Such a model is particularly attractive because s_* is not artificially imposed, like H, but is rather self-consistently given by the wind solution <cit.>. Nevertheless, the wind model that includes only a diffusion coefficient due to self-generated turbulence predicts a CR spectrum remarkably different from the one observed at Earth, especially at high energies, where it is too steep.
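As a rough illustration of how s_* replaces H, the sketch below evaluates s_*(E) for a constant advection speed (a simplifying assumption; in a realistic wind u_adv grows with z) together with the same toy diffusion coefficient used earlier. With u_adv constant, the balance condition reduces to s_* = D(E)/u_adv:

KPC_CM = 3.086e21

def s_star_kpc(R_GV, u_adv_kms=30.0):
    """Energy-dependent escape boundary s_* = D/u_adv in kpc, for D = 2e28 (R/GV)^(1/3) cm^2/s."""
    D = 2e28 * R_GV**(1.0 / 3.0)
    return D / (u_adv_kms * 1e5) / KPC_CM

for R in (1.0, 100.0, 1e4):
    print(f"R = {R:8.0f} GV  ->  s_* ~ {s_star_kpc(R):6.1f} kpc")
# s_* grows with energy: higher-energy particles feel a larger effective halo.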
A better agreement with observations can be obtained by assuming pre-existing turbulence in the near-disc region only <cit.>. In conclusion, wind models seem to have great potential to provide a comprehensive picture of CR propagation, but at the moment such models are still immature, and a lot of work needs to be done to include self-consistently several relevant pieces of physics, such as plasma cooling and heating processes, magnetic field damping, and the cascade of large-scale turbulence. The complexity of such a task will probably require time-dependent simulations.

§ CONCLUSIONS

We presented a brief overview of some recent advancements in the field of CR physics, discussing separately the three main pillars that constitute the SNR paradigm for the origin of CRs: acceleration, escape and propagation. Among the three, the acceleration mechanism is the most advanced field, having been under investigation since the '70s. The non-linear version of DSA, able to account for the back-reaction of accelerated particles on the shock structure, the magnetic field amplification, as well as the back-reaction of the magnetic field itself, has provided a robust framework to interpret the rich phenomenology observed in SNRs. Several predictions have been verified, like, for example, the thin X-ray filaments observed in almost all young SNRs, interpreted as the result of strong magnetic field amplification produced by CRs. In spite of these successes, the theory is still unable to provide a firm interpretation of the particle spectrum inferred from the non-thermal emission of SNRs, especially in the γ-ray band. In fact, the theory predicts particle spectra proportional to a power law in energy, E^-2, with possible slight deviations due to non-linear effects. On the contrary, observations show spectral slopes ranging from -1.5 to -2.5 for young remnants, while in the case of middle aged SNRs the spectrum can be even steeper. This rich phenomenology is probably due to environmental effects, which are difficult to model. Moreover, the possible implications of those effects for the overall Galactic CR spectrum are, at the moment, unclear.

The escape process has never been studied in great detail, due to some theoretical difficulties but also to the lack of data coming from the escaping particles. A possible way to study this mechanism is through the γ-ray emission from SNR-MC associations. A few such complexes have already been detected in γ-rays, but the future Cherenkov Telescope Array is expected to find tens of such systems, paving the way to a systematic study.

Finally, the propagation process has received a lot of attention in the last few years, thanks to an avalanche of new data coming both from direct measurements (AMS-02 and PAMELA) and from indirect information inferred from the diffuse γ-ray emission detected by Fermi-LAT. Thanks to these data, the basic propagation model is starting to show its limitations, forcing the community to build more refined models. In particular, I think we are close to a better understanding of the magnetic turbulence that determines the CR propagation properties and, connected to this, of the structure of the magnetic halo that surrounds the Galaxy. It is worth mentioning that other anomalies found in the CR spectrum and not discussed here, like the rising positron fraction and the anti-proton flux, are driving new ideas that are questioning the basic pillars of CR propagation <cit.>.
Whether these ideas can provide viable alternatives to the current paradigm for the origin of CRs is a matter of debate, which shows, in any case, that CR physics is undergoing a new phase of growth, potentially rich in new discoveries.

§ REFERENCES

Acero, F., Ackermann, M., Ajello, M., Albert, A., et al. 2017, ApJS, 223, 26
Adriani, O., et al. 2011, Science, 332, 69
Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2016, Phys. Rev. Lett., 117, 231102
Aguilar, M., et al. 2015a, Phys. Rev. Lett., 114, 171103
Aguilar, M., et al. 2015b, Phys. Rev. Lett., 115, 211101
Aloisio, R. & Blasi, P. 2013, JCAP, 7, 001
Aloisio, R., Blasi, P., & Serpico, P. D. 2015, A&A, 583, A95
Amato, E. & Blasi, P. 2017, preprint arXiv:1704.05696
Armstrong, J. W., Cordes, J. M., & Rickett, B. J. 1981, Nature, 291, 561
Ballet, J. 2006, Advances in Space Research, 37, 1902
Bell, A. R., Schure, K. M., Reville, B., & Giacinti, G. 2013, MNRAS, 431, 415
Bell, A. R. 2015, MNRAS, 447, 2224
Berezinskii, V. S., Bulanov, S. V., Dogiel, V. A., & Ptuskin, V. S. 1990, Astrophysics of Cosmic Rays, ed. V. L. Ginzburg (Amsterdam: North-Holland)
Beuermann, K., Kanbach, G., & Berkhuijsen, E. M. 1985, A&A, 153, 17
Blasi, P., Amato, E., & Serpico, P. D. 2012, Phys. Rev. Lett., 109, 061101
Blasi, P. 2013, A&ARv, 21, 70
Blum, K., Katz, B., & Waxman, E. 2013, Phys. Rev. Lett., 111, 211101
Breitschwerdt, D., McKenzie, J. F., & Voelk, H. J. 1991, A&A, 245, 79
Caprioli, D., Amato, E., & Blasi, P. 2010, Astropart. Phys., 33, 160
Caprioli, D. & Spitkovsky, A. 2014, ApJ, 794, 46
Cardillo, M., Amato, E., & Blasi, P. 2015, Astropart. Phys., 69, 1
Crain, R. A., Eke, V. R., Frenk, C. S., et al. 2007, MNRAS, 377, 41
Dalgarno, A. 2006, PNAS, 103, 12269
Di Bernardo, G., Grasso, D., Evoli, C., & Gaggero, D. 2015, ASTRA Proceedings, 2, 21
Dogiel, V. A., Schönfelder, V., & Strong, A. W. 2002, ApJ Lett., 572, L157
Drury, L. O'C. 1983, RPPh, 46, 973
Drury, L. O'C. 2011, MNRAS, 415, 1807
Everett, J. E., Zweibel, E. G., Benjamin, R. A., et al. 2008, ApJ, 674, 258
Eriksen, K., Hughes, J. P., Badenes, C., et al. 2011, ApJ, 728, L28
Evoli, C., Gaggero, D., Grasso, D., & Maccione, L. 2012, Phys. Rev. Lett., 108, 211102
Funk, S. 2015, Ann. Rev. Nucl. Part. Sci., 65, 245
Gabici, S., Aharonian, F. A., & Casanova, S. 2009, MNRAS, 396, 1629
Gabici, S. & Montmerle, T. 2015, Proc. ICRC 2015, 34, 29
Gabici, S. & Aharonian, F. 2016, EPJ Web of Conferences, 121, 04001
Gabici, S. & Aharonian, F. 2014, MNRAS, 445, L70
Gaggero, D., Urbano, A., Valli, M., & Ullio, P. 2015, Phys. Rev. D, 91, 083012
Giacalone, J. & Jokipii, J. R. 2007, ApJ, 663, L41
Giacinti, G. & Bell, A. R. 2015, Proc. ICRC 2015, 34, 519
Green, D. A. 2015, MNRAS, 454, 1517
Ipavich, F. M. 1975, ApJ, 196, 107
Katz, B., Blum, K., Morag, J., & Waxman, E. 2010, MNRAS, 405, 1458
Lipari, P. 2016, Phys. Rev. D, 95, 3009
Morlino, G., Amato, E., & Blasi, P. 2009, MNRAS, 405, 21
Morlino, G., Amato, E., Blasi, P., & Caprioli, D. 2010, MNRAS, 405, L21
Morlino, G. & Caprioli, D. 2012, A&A, 538, 81
Morlino, G., Blasi, P., Bandiera, R., Amato, E., & Caprioli, D. 2013, ApJ, 768, 148
Morlino, G. & Blasi, P. 2016, A&A, 589, 7
Miller, M. J. & Bregman, J. N. 2015, ApJ, 800, 14
Orlando, E. & Strong, A. 2013, MNRAS, 436, 2127
Ptuskin, V. S., Voelk, H. J., Zirakashvili, V. N., & Breitschwerdt, D. 1997, A&A, 321, 434
Recchia, S., Blasi, P., & Morlino, G. 2016a, MNRAS, 462, 88
Recchia, S., Blasi, P., & Morlino, G. 2016b, MNRAS, 462, 4227
Reville, B. & Bell, A. R. 2012, MNRAS, 419, 2433
Stinson, G. S., Brook, C., Macciò, A. V., et al. 2013, MNRAS, 428, 129
Tomassetti, N. 2012, ApJ Lett., 752, L13
Vaupré, S., Hily-Blant, P., Ceccarelli, C., Dubus, G., Gabici, S., & Montmerle, T. 2014, A&A, 568, 50
Vink, J. 2012, A&ARv, 20, 49
Yan, Q. 2017, CR nuclei measurements with AMS-02: latest results and perspectives, presented at XSCRC2017: Cross Sections for Cosmic Rays, CERN
Yanasak, N. E., Wiedenbeck, M. E., Mewaldt, R. A., Davis, A. J., et al. 2001, ApJ, 563, 768
Yang, R., Aharonian, F., & Evoli, C. 2017, Phys. Rev. D, 93, 123007
Efficiency of quantum versus classical annealing in non-convex learning problems
Carlo Baldassi^1,2 and Riccardo Zecchina^1,3
December 30, 2023

Fyodorov, Hiary & Keating established an intriguing connection between the maxima of log-correlated processes and the ones of the Riemann zeta function on a short interval of the critical line. In particular, they suggest that the analogue of the free energy of the Riemann zeta function is identical to the one of the Random Energy Model in spin glasses. In this paper, the connection between spin glasses and the Riemann zeta function is explored further. We study a random model of the Riemann zeta function and show that its two-overlap distribution corresponds to the one of a one-step replica symmetry breaking (1-RSB) spin glass. This provides evidence that the local maxima of the zeta function are strongly clustered.

§ INTRODUCTION AND MAIN RESULT

§.§ Background

The Riemann zeta function is defined on ℂ by

ζ(s) = ∑_{n≥1} 1/n^s = ∏_{p primes} (1−p^{−s})^{−1} if Re(s) > 1,

and can be analytically continued to the whole complex plane by the functional equation

ζ(s) = χ(s) ζ(1−s), χ(s) = 2^s π^{s−1} sin(πs/2) Γ(1−s).

Trivial zeros are located at the negative even integers, where χ(s)=0. The non-trivial zeros are restricted to the critical strip 0 ≤ Re(s) ≤ 1. The Riemann hypothesis states that they all lie on the critical line Re(s) = 1/2. A weaker statement, yet with deep implications for the distribution of the primes, is the Lindelöf hypothesis, which stipulates that the maximum of ζ on a large interval [0,T] of the critical line grows more slowly than any power of T, i.e. ζ(1/2 + iT) is O(T^ε) for any ε > 0, see e.g. <cit.>.

Mathematical physics has provided several important insights into the study of the Riemann zeta function over the years. We refer the reader to <cit.> for a broad discussion of this topic. We briefly highlight three contributions from statistical mechanics and probability. First, there are deep connections between the statistics of eigenvalues of random matrices and the zeros of zeta, as exemplified by Montgomery's pair correlation conjecture, see for example <cit.>. Second, the Riemann hypothesis can be recast in the framework of Ising models of statistical mechanics, where it bears a resemblance to the Lee-Yang theorem. This perspective was investigated in detail by Newman <cit.>. It led to an equivalent reformulation of the Riemann hypothesis in terms of the exact value of the de Bruijn–Newman constant <cit.>, see <cit.> for a numerical estimate of the constant and <cit.> for a proof that the constant is non-negative. Third, Fyodorov, Hiary & Keating <cit.> and Fyodorov & Keating <cit.> recently unraveled a striking connection between the local statistics of the large values of the Riemann zeta function on the critical line and the extremes of a class of disordered systems, the log-correlated processes, which includes among others branching Brownian motion and the two-dimensional Gaussian free field. This connection has also been extended recently to the theory of Gaussian multiplicative chaos by Saksman & Webb <cit.>.
The Fyodorov-Hiary-Keating conjecture is as follows <cit.>: if τ is sampled uniformly on a large interval [T,2T], then the maximum on a short interval, say of length one, around τ is

max_{h∈[0,1]} log|ζ(1/2 + i(τ+h))| = loglog T − (3/4) logloglog T + ℳ_T,

where (ℳ_T) is a sequence of random variables converging in distribution. The deterministic order of the maximum corresponds exactly to that of a log-correlated process, such as a branching random walk or the two-dimensional Gaussian free field, see for example <cit.> for more background on this class of processes. The precise value of the leading order can be predicted heuristically, since the process log ζ has effectively log T distinct values on [0,1] (because there are on average log T zeros on [0,1], see for example <cit.>), and the marginal distribution of log|ζ(1/2+i(τ+h))| should be close to Gaussian with variance (1/2) loglog T, as predicted by Selberg's central limit theorem <cit.>. The log-correlations already appear at the level of the typical values, by the multivariate CLT proved in <cit.>. The first order of the conjecture (<ref>) was proved recently in parallel: conditionally on the Riemann hypothesis in <cit.>, and unconditionally in <cit.>.

The evidence in favor of the conjecture laid out by Fyodorov & Keating <cit.> suggests that the large values of the Riemann zeta function locally behave like a disordered system of the spin-glass type, characterized by an energy landscape with multiple minima, see Figure <ref>. In particular, by considering −log|ζ(1/2+i(τ+h))| as the energy of a disordered system on the state space [0,1], they predict that the analogue of the free energy is in the limit

lim_{T→∞} (1/loglog T) log( log T · ∫_0^1 |ζ(1/2+i(τ+h))|^β dh ) = 1 + β²/4 if β < 2, β if β ≥ 2,

similarly to a Random Energy Model (REM) with log T independent Gaussian variables of variance (1/2) loglog T. In this paper, we explore the connection with spin glasses further by providing evidence that log|ζ| behaves locally like a spin glass with one-step replica symmetry breaking (1-RSB), cf. Theorem <ref>. More precisely, we study a simple random model introduced by Harper <cit.> for the large values of log|ζ|. We show that two points sampled from the Gibbs measure at low temperature have correlation coefficient (or overlap) 0 or 1 in the limit, similarly to a 1-RSB spin glass. We expect that part of our approach could be extended to prove a similar result for the Riemann zeta function itself, as stated in Conjecture <ref> below.

§.§ The model and main result

Let (U_p, p primes) be IID uniform random variables on the unit circle in ℂ. We write 𝔼 for the expectation over the U_p's. We study the stochastic process

X_h = ∑_{p ≤ T} Re(U_p p^{−ih})/p^{1/2}, h ∈ [0,1].

We drop the dependence on T in the notation for simplicity. The process (X_h, h∈[0,1]) is a good model for the large values of log|ζ(1/2+i(τ+h))|, h∈[0,1], see <cit.> for more details. For example, it is known that the deterministic order of max_{h∈[0,1]} X_h corresponds to the one in (<ref>), as proved in <cit.>. Roughly speaking, the process X_h corresponds to the leading order of the logarithm of the Euler product (<ref>) with the identification

(p^{−iτ}, p primes) ⟷ (U_p, p primes).

It is easily checked by computing the joint moments that the above identification is exact as T→∞ in the sense of finite-dimensional distributions.
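The model is straightforward to simulate. The sketch below (illustrative only: T and the grid size are arbitrary choices, and convergence in T is very slow since the natural scale is loglog T) samples the U_p's and evaluates X_h on a grid:

import numpy as np

def primes_up_to(n):
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for k in range(2, int(n**0.5) + 1):
        if sieve[k]:
            sieve[k * k :: k] = False
    return np.flatnonzero(sieve)

rng = np.random.default_rng(0)
T = 10**6
p = primes_up_to(T).astype(float)
logp, invsqrtp = np.log(p), 1.0 / np.sqrt(p)
theta = rng.uniform(0.0, 2.0 * np.pi, size=p.size)   # U_p = exp(i*theta_p)

h = np.linspace(0.0, 1.0, 2000)   # grid spacing well below 1/log T ~ 0.07
# Re(U_p p^{-ih}) = cos(theta_p - h*log p)
X = np.array([np.sum(np.cos(theta - hh * logp) * invsqrtp) for hh in h])

print("max_h X_h =", X.max())
print("loglog T  =", np.log(np.log(T)))   # the max is of this order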
The covariance can be calculated using the explicit distribution of the U_p's:

𝔼[X_h X_h'] = ∑_{p≤T} ∫_0^{2π} (1/2)(e^{i(θ−h log p)} + e^{−i(θ−h log p)}) · (1/2)(e^{i(θ−h' log p)} + e^{−i(θ−h' log p)}) dθ/2π = (1/2) ∑_{p≤T} cos(|h−h'| log p)/p.

We are interested in the correlation coefficient, or overlap (in the spin-glass terminology),

ρ(h,h') = 𝔼[X_h X_h'] / √(𝔼[X_h²] 𝔼[X_h'²]),

for a given pair (h,h'). Any sum over primes can be estimated using the Prime Number Theorem <cit.>, which gives the density of the primes up to very good errors,

#{p ≤ x : p prime} = ∫_2^x dy/log y + O(x e^{−c√(log x)}).

(The error term, which is already more than sufficient for our purpose, is improved under the Riemann hypothesis.) In particular, this can be used to rewrite the covariances as (see Lemma <ref> below for details)

𝔼[X_h²] = (1/2) ∑_{p≤T} p^{−1} = (1/2) loglog T + O(1), 𝔼[X_h X_h'] = (1/2) log|h−h'|^{−1} + O(1).

The process (X_h) is said to be log-correlated, since the covariance decays approximately like the logarithm of the distance. The correlation coefficient as a function of the distance becomes

ρ(h,h') = log|h−h'|^{−1}/loglog T + o(1),

for |h−h'| ≥ (log T)^{−1}. Throughout the paper, we will use the notation f(T) = o(g(T)) if f(T)/g(T) → 0, and f(T) = O(g(T)) if f(T)/g(T) is bounded. We will sometimes write f(T) ≪ g(T) for short if f(T) = O(g(T)) (the Vinogradov notation).

The main result of this paper is the limiting distribution of the correlation coefficient when h and h' are sampled from the Gibbs measure. This is referred to as the two-overlap distribution in the spin-glass terminology. We denote the Gibbs measure by

G_{β,T}(A) = ∫_A e^{βX_h}/Z_{β,T} dh, Z_{β,T} = ∫_0^1 e^{βX_h} dh.

Theorem. For every β > 2 and for any interval I ⊆ [0,1],

lim_{T→∞} 𝔼[G_{β,T}^{×2}{(h,h') : ρ(h,h') ∈ I}] = (2/β) 1_I(0) + (1 − 2/β) 1_I(1),

where 1_I is the indicator function of the set I. In other words, when h, h' are sampled independently from the Gibbs measure G_{β,T}, the random variable ρ(h,h') takes the values 0 and 1 with probabilities 2/β and 1 − 2/β, respectively, in the limit T→∞.

The limit is exactly the two-overlap distribution of a 1-RSB spin glass. In view of the relation (<ref>) between the correlation coefficient and the distance |h−h'|, the result means that the large values of X_h must lie at a distance O(1) or O((log T)^{−1}). The mesoscopic distances (log T)^{−α}, 0 < α < 1, are effectively ruled out. Similar results were obtained for the REM model <cit.>, and for log-correlated processes <cit.>. In the spirit of the Fyodorov-Hiary-Keating conjecture, Theorem <ref> suggests that log|ζ| exhibits 1-RSB for β large enough. Consider

𝒢_β(t) = |ζ(1/2 + it)|^β, 𝒵_β(t) = ∫_0^1 𝒢_β(t+h) dh.

Conjecture. For β > 2 and any interval I ⊆ [0,1], if τ is sampled uniformly on [T,2T]:

lim_{T→∞} 𝔼[∫_{(h,h') : ρ(h,h') ∈ I} (𝒢_β(τ+h) · 𝒢_β(τ+h')/𝒵_β(τ)²) dh dh'] = (2/β) 1_I(0) + (1 − 2/β) 1_I(1).

In other words, points h, h' where log|ζ| is of the order of loglog T are at a distance O(1) or O((log T)^{−1}). The above conjecture implies a strong clustering of the high values of ζ at scale (log T)^{−1}, akin to the one observed in log-correlated processes <cit.>. In turn, this phenomenon has important consequences for the joint statistics of high values, which should be Poissonian at a suitable scale, as for log-correlated processes <cit.>. In particular, it is expected that the statistics of the Gibbs weights is Poisson-Dirichlet <cit.>, and that the Gibbs measure converges to an atomic measure on [0,1], see <cit.>. This perspective is studied in <cit.>, and will be discussed further in a forthcoming paper.

Acknowledgements. L.-P. A. is supported by NSF CAREER 1653602, NSF grant DMS-1513441, and a Eugene M. Lang Junior Faculty Research Fellowship. W. T. is partially supported by NSF grant DMS-1513441. Both authors would like to thank Frédéric Ouimet for useful comments on a first version of the paper. L.-P. A. is indebted to Chuck Newman for his constant support and his scientific insights throughout the years.
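Theorem <ref> can also be probed numerically. The sketch below (illustrative only; it continues the previous snippet, reusing rng, T, h and X, and at accessible T the finite-size effects are strong, so only qualitative agreement with the limit law should be expected) samples pairs from the Gibbs measure at β > 2 and computes the empirical overlap via ρ(h,h') ≈ log|h−h'|^{−1}/loglog T:

beta = 4.0
w = np.exp(beta * (X - X.max()))      # Gibbs weights, numerically stabilized
w /= w.sum()

idx = rng.choice(h.size, size=(5000, 2), p=w)       # iid pairs from G_{beta,T}
d = np.maximum(np.abs(h[idx[:, 0]] - h[idx[:, 1]]), 1.0 / np.log(T))
rho = np.minimum(np.log(1.0 / d) / np.log(np.log(T)), 1.0)

print("P(rho < 1/2) ~", np.mean(rho < 0.5), "  (limit: 2/beta =", 2.0 / beta, ")")
print("P(rho > 1/2) ~", np.mean(rho > 0.5), "  (limit: 1 - 2/beta)")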
§.§ Main Propositions and Proof of Theorem <ref>

The proof of Theorem <ref> is based on a method developed for log-correlated Gaussian processes by Arguin & Zindy <cit.>. It was adapted from a method of Bovier & Kurkova <cit.> for Generalized Random Energy Models (GREMs). The main idea is to relate the distribution of the overlaps to the free energy of a perturbed process. In the present case, the process is not Gaussian and the method has to be modified. To this aim, consider the process at scale α, for 0 < α < 1, where the sum over primes is truncated at exp((log T)^α):

X_h(α) = ∑_{log p ≤ (log T)^α} Re(U_p p^{−ih})/p^{1/2}, h∈[0,1].

Note that X_h(1) = X_h. For a small parameter |u| < 1, we consider the free energy of the perturbed process X_h + uX_h(α) at scale α:

F_T(β; α, u) = 𝔼[log ∫_0^1 exp(β(X_h + uX_h(α))) dh].

The connection between the free energy (<ref>) and the distribution of the correlation coefficients goes through Gaussian integration by parts. Of course, for the process X_h, this step is only approximate. It follows closely the work of Carmona & Hu <cit.> and Auffinger & Chen <cit.> on the universality of the free energy and overlap distributions in the Sherrington-Kirkpatrick model.

Proposition. For any 0 < α < 1,

| ∫_0^α 𝔼[G_{β,T}^{×2}{(h,h') : ρ(h,h') ≤ y}] dy − (2/(β² loglog T)) ∂F_T/∂u(β; α, 0) | = o(1).

The free energy of the perturbed process is calculated using Kistler's multiscale second moment method <cit.>. The treatment is similar to the one of Arguin & Ouimet <cit.> for the perturbed Gaussian free field. The same result can be obtained by adapting the method of Bolthausen, Deuschel & Giacomin <cit.> and Daviaud <cit.> to the model, as was done in <cit.>; Kistler's method is simpler and more flexible. The result is best stated by first defining

f(β, σ²) = β²σ²/4 if β ≤ 2/σ, βσ − 1 if β ≥ 2/σ.

Proposition. For every β > 0 and |u| < 1, the following limit holds:

lim_{T→∞} (1/loglog T) F_T(β; α, u) = f(β, (1+u)²α + (1−α)) if u < 0, αf(β, (1+u)²) + (1−α)f(β, 1) if u ≥ 0.

The theorem follows from the above two propositions, which are proved in Sections 3 and 4 respectively. Estimates on the model needed for the proofs are given in Section 2.

Proof of Theorem <ref>. We need to show that the distribution of ρ(h,h') converges weakly to (2/β)δ_0 + (1−2/β)δ_1, where δ_a stands for the Dirac measure at a. Write x_{β,T}(s) for 𝔼[G_{β,T}^{×2}{(h,h') : ρ(h,h') ≤ s}]. By compactness of the space of probability measures on [0,1], we can find a subsequence of (x_{β,T}) that converges weakly to x_β as T→∞. We show that the limit x_β is unique and equals x_β(s) = 2/β for 0 ≤ s < 1, thereby proving the claimed convergence. By definition of weak convergence, x_{β,T}(s) converges to x_β(s) at all points of continuity of x_β. Since x_β is non-decreasing, this implies convergence almost everywhere.
Thus, the dominated convergence theorem implies

lim_{T→∞} ∫_0^α x_{β,T}(s) ds = ∫_0^α x_β(s) ds, for 0 < α < 1.

The left-hand side can be rewritten using Proposition <ref> as

lim_{T→∞} ∫_0^α x_{β,T}(s) ds = lim_{T→∞} (2/(β² loglog T)) ∂F_T/∂u(β; α, 0).

Since ((loglog T)^{−1} F_T(β; α, u))_T is a sequence of convex functions of u, the limit of the derivatives is the derivative of the limit at any point of differentiability. Here the limit of the expectation of the free energy is given by Proposition <ref>: for u small enough, so that β > 2/σ whenever β > 2,

lim_{T→∞} (1/loglog T) F_T(β; α, u) = β((1+u)²α + (1−α))^{1/2} − 1 if u < 0, αβ(1+u) + (1−α)β − 1 if u ≥ 0.

In particular, the expected free energy is differentiable at u = 0. Therefore, equations (<ref>), (<ref>) and (<ref>) altogether imply

∫_0^α x_β(s) ds = 2α/β, for 0 < α < 1.

This means that for any 0 < α < α' < 1 we have

(1/(α'−α)) ∫_α^{α'} x_β(s) ds = 2/β.

By taking α'−α → 0, we conclude from the Lebesgue differentiation theorem that x_β(s) = 2/β almost everywhere. Since x_β is non-decreasing and right-continuous, this implies that x_β(s) = 2/β for every 0 ≤ s < 1, as claimed.

§ ESTIMATES ON THE MODEL OF ZETA

In this section, we gather the estimates on the model of zeta needed for the proofs of Propositions <ref> and <ref>. Most of these results are contained in <cit.>. We include them for completeness, since we will need to deal with a perturbed version of the process (X_h). It is important to point out that most (but not all!) of these estimates can be obtained for zeta itself with some more work, see <cit.>. The essential input from number theory for the model is the Prime Number Theorem (<ref>). It shows that the density of the primes is approximately 1/log p. This implies, for example, that ∑_p p^{−a} < ∞ for a > 1.

The equation (<ref>) expressing the log-correlations for h ≠ h' is straightforward from the following lemma, by taking Δ = |h−h'| and by splitting the sum (<ref>) into the ranges log p ≤ |h−h'|^{−1} and |h−h'|^{−1} < log p ≤ log T.

Lemma. Let 2 ≤ P < Q < ∞. Then for Δ > 0, we have

∑_{P ≤ p primes ≤ Q} cos(Δ·log p)/p = ∫_P^Q cos(Δ·log v)/(v log v) dv + O(e^{−c√(log P)}) = loglog Q − loglog P + O(1) for Δ·log Q ≤ 1, O(1/(Δ·log P)) + O(e^{−c√(log P)}) for Δ·log P ≥ 1.

Proof. Denote by Li(x) = ∫_2^x dy/log y the logarithmic integral. Write ℰ(x) for the function of bounded variation π(x) − Li(x) giving the error, and f(x) for cos(Δ·log x)/x. Clearly, we have

∑_{P≤p≤Q} f(p) = ∫_P^Q f(x) π(dx) = ∫_P^Q f(x)/log x dx + ∫_P^Q f(x) ℰ(dx).

It remains to estimate the error term. By integration by parts,

∫_P^Q f(x) ℰ(dx) = ℰ(Q)f(Q) − ℰ(P)f(P) − ∫_P^Q ℰ(x) f'(x) dx.

Note that f(x) is of order 1/x and f'(x) of order 1/x². Since ℰ(x) = O(x e^{−c√(log x)}), the first claimed equality follows. For the dichotomy in the second equality, in the case Δ·log Q ≤ 1 we expand the cosine to get, after the change of variable y = log x,

∫_P^Q f(x)/log x dx = ∫_{log P}^{log Q} cos(Δ·y)/y dy = ∫_{log P}^{log Q} (1/y + O(Δ²·y)) dy.

The result follows by integration. In the case Δ·log P ≥ 1, we integrate by parts to get

∫_P^Q f(x)/log x dx = sin(Δ·y)/(Δ·y) |_{log P}^{log Q} + ∫_{log P}^{log Q} sin(Δ·y)/(Δ·y²) dy.

Both terms are O(1/(Δ·log P)), as claimed.

Proposition <ref> gives an expression for the free energy (<ref>) of the perturbed process at scale α. For simplicity, we denote this process by

X̃_h = (1+u)X_h(α) + X_h(α,1), where X_h(α,1) = X_h − X_h(α), h∈[0,1].

Note that we recover X_h at u = 0. The finite-dimensional distributions of (X̃_h) can be explicitly computed. In fact, it is not hard to compute explicitly the moment generating function of any increment of (X_h). We will only need the two-dimensional case.

Proposition. Let 0 ≤ α_1 < α_2 ≤ 1.
Consider X_h(α_1,α_2) = X_h(α_2) − X_h(α_1). We have, for λ, λ' ∈ ℝ and h, h' ∈ [0,1],

𝔼[exp(λ X_h(α_1,α_2) + λ' X_{h'}(α_1,α_2))] = C(λ,λ') · exp( (1/2) ∑_{(log T)^{α_1} < log p ≤ (log T)^{α_2}} (1/2p)(λ² + λ'² + 2λλ' cos(|h−h'| log p)) ),

where C = C(λ,λ') is bounded if λ and λ' are bounded, uniformly in T.

Proof. The expression can be evaluated explicitly as follows. Since the U_p's are independent, we can first restrict the computation to a single p. Straightforward manipulations yield

𝔼[exp(p^{−1/2} λ·Re(U_p p^{−ih}) + p^{−1/2} λ'·Re(U_p p^{−ih'}))] = 𝔼[exp(a U_p + a̅ U̅_p)]

for a = (2p^{1/2})^{−1}(λ p^{−ih} + λ' p^{−ih'}). By expanding the exponentials and using the fact that U_p is uniform on the unit circle, we get

𝔼[exp(a U_p + a̅ U̅_p)] = ∑_{n=0}^∞ ∑_{k=0}^n (a^k a̅^{n−k}/n!) (n choose k) 𝔼[U_p^k U̅_p^{n−k}] = ∑_{m=0}^∞ (1/(m!)²) ((λ² + λ'² + 2λλ' cos(|h−h'| log p))/4p)^m = 1 + (λ² + λ'² + 2λλ' cos(|h−h'| log p))/4p + O(p^{−2}),

where the O-term depends on λ, λ'. The second equality follows from the fact that the expectation is non-zero only if k = n/2. It remains to take the product over the range of p. The claim then follows from the fact that the sum of p^{−2} is finite, by (<ref>).

Proposition <ref> yields Gaussian bounds in the large deviation regime we are interested in. Indeed, by Chernoff's bound (optimizing over λ), it implies that, for γ > 0,

ℙ(X_h(α_1,α_2) > γ loglog T) ≪ exp(−γ² loglog T/(α_2−α_1)) = (log T)^{−γ²/(α_2−α_1)},

where we used Lemma <ref> to estimate the sum over primes. This supports the heuristic that X_h(α_1,α_2) is approximately Gaussian of variance ((α_2−α_1)/2) loglog T. This implies, for X̃_h in (<ref>),

ℙ(X̃_h > γ loglog T) ≪ exp(−γ² loglog T/((1+u)²α + (1−α))) = (log T)^{−γ²/((1+u)²α + (1−α))}.

The same can be done for two points h, h'. Using Lemma <ref> again, we get

ℙ(X_h(α_1,α_2) > γ loglog T, X_{h'}(α_1,α_2) > γ loglog T) ≪ exp(−γ² loglog T/(α_2−α_1)) if |h−h'| ≤ (log T)^{−α_2}, exp(−2γ² loglog T/(α_2−α_1)) if |h−h'| ≥ (log T)^{−α_1}.

This can be interpreted as follows: the increments are (almost) independent if the distance between the points is larger than the relevant scales of the increments, and (almost) perfectly correlated if the distance is smaller than these scales.

It is important to note that if α_1 > 0, then a stronger estimate than the one of Proposition <ref> holds. This is because the sum over primes in (<ref>) is then negligible, being the tail of a summable series. This means that the constant C(λ,λ') is then 1 + o(1). This gives a precise Gaussian estimate by inverting the moment generating function (or the Fourier transform, if we pick λ, λ' ∈ ℂ). We omit the proof for conciseness and refer to <cit.>, where this is done using a general version of the Berry-Esseen theorem.

Proposition. For 0 < α_1 < α_2 ≤ 1 and 0 < γ < 1, we have, for h ∈ [0,1],

ℙ(X_h(α_1,α_2) > γ loglog T) ≫ (1/√(loglog T)) exp(−γ² loglog T/(α_2−α_1)) = (log T)^{−γ²/(α_2−α_1)+o(1)}.

Moreover, if |h−h'| > (log T)^{−α_1}, then

ℙ(X_h(α_1,α_2) > γ loglog T, X_{h'}(α_1,α_2) > γ loglog T) = (1+o(1)) ℙ(X_h(α_1,α_2) > γ loglog T)².

Since the process (X_h, h∈[0,1]) is continuous and not discrete, we need a last estimate to control all the values in an interval of length corresponding to the relevant scale. This is needed when proving the rough bound on the maximum in Lemma <ref>. Heuristically, it says that the maximum of X_h(α_1,α_2) over an interval of width smaller than (log T)^{−α_2} behaves like a single value X_h(α_1,α_2). This is proved in <cit.> by a chaining argument, and we omit the proof for conciseness.

Lemma. Let 0 ≤ α_1 < α_2 ≤ 1. For every h ∈ [0,1] and γ > 0, we have

ℙ(max_{|h−h'| ≤ (log T)^{−α_2}} X_{h'}(α_1,α_2) > γ loglog T) ≪ (log T)^{−γ²/(α_2−α_1)}.

In particular, we have

ℙ(max_{|h−h'| ≤ (log T)^{−1}} X̃_{h'} > γ loglog T) ≪ (log T)^{−γ²/((1+u)²α + (1−α))}.
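The single-prime computation in the proof above can be verified numerically: the series ∑_m |a|^{2m}/(m!)² is the modified Bessel function I_0(2|a|), so 𝔼[exp(aU + a̅U̅)] = I_0(2|a|) for U uniform on the unit circle. The short check below (an illustration, not part of the paper; it assumes numpy and scipy, whose scipy.special.iv implements I_ν) compares a Monte Carlo average with this closed form:

import numpy as np
from scipy.special import iv

rng = np.random.default_rng(1)
U = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=10**6))
a = 0.3 - 0.2j
# exp(a*U + conj(a*U)) = exp(2*Re(a*U)) is real
mc = np.mean(np.exp(2.0 * np.real(a * U)))
print("Monte Carlo:", mc)
print("I_0(2|a|)  :", iv(0, 2.0 * abs(a)))   # the two agree to a few decimals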
§ PROOF OF PROPOSITION <REF>

As mentioned in Section 1.3, the proof of Proposition <ref> is based on an approximate Gaussian integration by parts, as in <cit.> and <cit.>. The following lemma is an adaptation to complex random variables of Lemma 4 in <cit.>.

Lemma. Let ξ be a complex random variable such that 𝔼[|ξ|³] < ∞ and 𝔼[ξ²] = 𝔼[ξ] = 0. Let F: ℂ → ℂ be a twice continuously differentiable function such that, for some M > 0,

‖∂²_z F‖_∞, ‖∂²_z̄ F‖_∞ < M, where ‖f‖_∞ = sup_{z∈ℂ} |f(z,z̄)|.

Then

|𝔼[ξ F(ξ,ξ̄)] − 𝔼[|ξ|²] 𝔼[∂_z̄ F(ξ,ξ̄)]| ≪ M 𝔼[|ξ|³].

Proof. Since 𝔼[ξ²] = 𝔼[ξ] = 0, the left-hand side can be written as

𝔼[ξ(F(ξ,ξ̄) − F(0,0) − ξ ∂_z F(0,0) − ξ̄ ∂_z̄ F(0,0))] − 𝔼[|ξ|²] 𝔼[∂_z̄ F(ξ,ξ̄) − ∂_z̄ F(0,0)].

By Taylor's theorem in several variables and the assumptions, the following estimates hold:

|F(ξ,ξ̄) − F(0,0) − ξ ∂_z F(0,0) − ξ̄ ∂_z̄ F(0,0)| ≪ M|ξ|², |∂_z̄ F(ξ,ξ̄) − ∂_z̄ F(0,0)| ≪ M|ξ|.

Therefore, taking the norm of (<ref>) gives

|𝔼[ξ F(ξ,ξ̄)] − 𝔼[|ξ|²] 𝔼[∂_z̄ F(ξ,ξ̄)]| ≪ M (𝔼[|ξ|³] + 𝔼[|ξ|²] 𝔼[|ξ|]).

The claim then follows by Hölder's inequality.

As in <cit.>, the lemma can be applied to relate the derivative of the free energy to the two-point correlations of the process.

Lemma. For any p ≤ T, we have

∂/∂u 𝔼[log ∫_0^1 exp(β(X_h(T) + u Re(U_p p^{−ih}) p^{−1/2})) dh] |_{u=0} = (β²/2) 𝔼[∫_{[0,1]²} (1 − cos(|h−h'| log p))/p G_{β,T}^{×2}(h,h')] + O(p^{−3/2}).

Proof. Write for short ω_p(h) = (2p^{1/2})^{−1} p^{−ih}. Direct differentiation yields, at u = 0,

β 𝔼[∫_0^1 U_p ω_p(h) G_{β,T}(h)] + β 𝔼[∫_0^1 U̅_p ω̅_p(h) G_{β,T}(h)].

We make the dependence on U_p in the measure G_{β,T} explicit. For this, define

Y_p(h) = β ∑_{q ≤ T, q ≠ p} Re(U_q q^{−ih})/q^{1/2}.

Clearly, Y_p(h) is independent of U_p by definition. Consider

F_p(z,z̄) = ∫_0^1 ω_p(h) exp(β ω_p(h) z + β ω̅_p(h) z̄ + Y_p(h)) dh / ∫_0^1 exp(β ω_p(h') z + β ω̅_p(h') z̄ + Y_p(h')) dh'.

Note that with this definition, the first integral in (<ref>) is β 𝔼[U_p F_p(U_p, U̅_p)] and the second is its complex conjugate. This shows that the derivative of the expectation at u = 0 is

β 𝔼[U_p · F_p(U_p, U̅_p) + U̅_p · F̅_p(U_p, U̅_p)].

It remains to apply Lemma <ref> with the function F_p(z,z̄) and ξ = U_p. Write for short, for a function H on [0,1],

⟨H⟩^{(z,z̄)} = ∫_0^1 H(h) e^{β(z ω_p(h) + z̄ ω̅_p(h)) + Y_p(h)} dh / ∫_0^1 e^{β(z ω_p(h) + z̄ ω̅_p(h)) + Y_p(h)} dh.

Direct differentiation of the above yields

∂_z̄ ⟨H⟩^{(z,z̄)} = β (⟨H ω̅_p⟩^{(z,z̄)} − ⟨H⟩^{(z,z̄)} ⟨ω̅_p⟩^{(z,z̄)}).

In particular, for H = ω_p, we get

∂_z̄ F_p(z,z̄) = β (⟨|ω_p|²⟩^{(z,z̄)} − ⟨ω_p⟩^{(z,z̄)} ⟨ω̅_p⟩^{(z,z̄)}).

When evaluated at z = U_p, this is, by definition of ω_p,

∂_z̄ F_p(U_p, U̅_p) = (β/4) ∫_{[0,1]²} (p^{−1} − p^{−1} cos(|h−h'| log p)) G_{β,T}^{×2}(h,h').

Clearly, |ω_p| ≤ p^{−1/2}. Therefore, the second derivatives are easily checked to be bounded by O(p^{−3/2}), by applying the formula (<ref>) to each term of (<ref>). The statement of the lemma then follows from Lemma <ref> and (<ref>), after noticing that the second term of (<ref>) is the conjugate of the first.

The proof of Proposition <ref> is an application of the above lemma to a range of primes. Recall the definition of ρ(h,h') in equations (<ref>) and (<ref>). On the one hand, Fubini's theorem directly implies that

∫_0^α G_{β,T}^{×2}{(h,h') : ρ(h,h') ≤ y} dy = ∫_{[0,1]²} (∫_0^α 1_{ρ(h,h') ≤ r} dr) G_{β,T}^{×2}(h,h') = ∫_{[0,1]²} (α − ρ(h,h')) 1_{ρ(h,h') ≤ α} G_{β,T}^{×2}(h,h').

It remains to check, on the other hand, that the derivative in the proposition is close to the expectation of the above. Direct differentiation of (<ref>) at u = 0 yields, by the above lemma,

∂F_T/∂u(β; α, 0) = (β²/2) 𝔼[∫_{[0,1]²} ∑_{log p ≤ (log T)^α} p^{−1}(1 − cos(|h−h'| log p)) G_{β,T}^{×2}(h,h')] + O(∑_{p ≤ exp((log T)^α)} p^{−3/2}).
The error term is of order one by (<ref>). Similarly, if |h−h'| ≤ (log T)^{−α}, the sum in the integral is, by (<ref>),

∑_{log p ≤ (log T)^α} p^{−1}(1 − cos(|h−h'| log p)) = α loglog T − α loglog T + O(1) = O(1).

On the other hand, if |h−h'| > (log T)^{−α}, the sum can be divided into three parts:

∑_{log p ≤ (log T)^α} p^{−1} − ∑_{log p ≤ |h−h'|^{−1}} p^{−1} cos(|h−h'| log p) − ∑_{|h−h'|^{−1} < log p ≤ (log T)^α} p^{−1} cos(|h−h'| log p).

When equation (<ref>) is applied to each of the parts, this equals

α loglog T − log|h−h'|^{−1} + O(1).

Furthermore, recall from (<ref>) that ρ(h,h') loglog T differs from log|h−h'|^{−1} by o(loglog T). This implies that the conditions on log|h−h'|^{−1} can be replaced by conditions on ρ(h,h') loglog T at the cost of a term o(loglog T) (since the sums would then differ by a range of log p of at most o(log T) primes). All these observations together imply

(1/loglog T) ∑_{log p ≤ (log T)^α} p^{−1}(1 − cos(|h−h'| log p)) = (α − log|h−h'|^{−1}/loglog T) 1_{ρ(h,h') ≤ α} + o(1).

We finally conclude, by putting the right-hand side back into the integral of (<ref>) and by using (<ref>), that

(2/(β² loglog T)) ∂F_T/∂u(β; α, 0) = 𝔼[∫_{[0,1]²} (α − ρ(h,h')) 1_{ρ(h,h') ≤ α} G_{β,T}^{×2}(h,h')] + o(1).

This matches the first claim (<ref>) up to an error o(1), thereby proving the proposition.

§ PROOF OF PROPOSITION <REF>

We write X̃_h = (1+u)X_h(α) + X_h(α,1), as in equation (<ref>). The limit of the free energy of this process is obtained by Laplace's method once the measure of high points is known, cf. Lemma <ref>. The proof of Lemma <ref> is based on a computation similar to the one of <cit.> for the two-dimensional Gaussian free field, using Kistler's multiscale second moment method <cit.>. But first, we need an a priori restriction on the maximum of the process (X̃_h). The maximum depends on the value of the parameter u, as expected from GREM models. With this in mind, we define

γ^⋆ = ((1+u)²α + (1−α))^{1/2} if u ≤ 0, (1+u)α + (1−α) if u > 0.

Note that the two expressions are equal to 1 at u = 0, and that γ^⋆ > 1 if u > 0 and γ^⋆ < 1 if u < 0. The next lemma bounds the maximum of X̃_h.

Lemma. For any ε > 0,

lim_{T→∞} ℙ(max_{h∈[0,1]} X̃_h > (1+ε)γ^⋆ loglog T) = 0.

Proof. This is a consequence of Lemma <ref>, which shows that the large values of X_h(α) are well approximated by points at distance (log T)^{−α}. In the case u ≤ 0, we use the lemma with α = 1. Without loss of generality, suppose that log T is an integer, and consider I_k, k ≤ log T, a collection of intervals of length (log T)^{−1} that partitions [0,1]. Then a simple union bound yields

ℙ(max_{h∈[0,1]} X̃_h > (1+ε)γ^⋆ loglog T) ≤ ∑_{k=1}^{log T} ℙ(max_{h∈I_k} X̃_h > (1+ε)γ^⋆ loglog T).

Lemma <ref> applied to X̃_h then implies

ℙ(max_{h∈[0,1]} X̃_h > (1+ε)γ^⋆ loglog T) ≪ (log T) exp(−(1+ε)²((1+u)²α + (1−α)) loglog T/((1+u)²α + (1−α))) = (log T)^{1−(1+ε)²},

which goes to 0, as claimed. In the case u > 0, an extra restriction is needed, since the large values of X_h(α) are themselves limited. Proceeding as above, assume without loss of generality that (log T)^α, (log T)^{1−α} and log T are integers. Consider the collection of intervals J_j, j ≤ (log T)^α, that partitions [0,1] into intervals of length (log T)^{−α}.
Each J_j is again partitioned into intervals I_jk, k ≤ (log T)^{1−α}, of length (log T)^{−1}. Then Lemma <ref> implies

ℙ(∃ j : max_{h∈J_j} X_h(α) > (1+ε)·α loglog T) → 0.

Therefore, the probability for the maximum of X̃_h can be restricted as follows:

ℙ(max_{h∈[0,1]} X̃_h > (1+ε)γ^⋆ loglog T) = ℙ(∃ h∈[0,1] : X̃_h > (1+ε)γ^⋆ loglog T, X_h(α) ≤ (1+ε)·α loglog T) + o(1) ≪ ∑_{j,k} ∑_{q=0}^{(1+ε)·α loglog T} ℙ(max_{h∈J_j} X_h(α) ∈ [q, q+1], max_{h∈I_jk} X_h(α,1) > (1+ε)γ^⋆ loglog T − (1+u)(q+1)).

The last inequality is obtained by a union bound on the partition (I_jk) and by splitting the values of the maximum of X_h(α) over the range [0, (1+ε)α loglog T]. (Note that X_h(α) is symmetric, thus the maximum is greater than 0 with large probability.) By the independence of (X_h(α), h∈[0,1]) and (X_h(α,1), h∈[0,1]), Lemma <ref> can be applied twice to get the following bound on the summand:

≪ exp(−q²/(α loglog T)) · exp(−((1+ε)γ^⋆ loglog T − (1+u)(q+1))²/((1−α) loglog T)).

On the interval [0, (1+ε)α loglog T], this is maximized at the endpoint q = (1+ε)α loglog T. (This is where the case u < 0 differs, as the optimal q there lies within the interval. See Remark <ref> for more on this.) Putting this back into (<ref>) and summing over j, k, and q finally gives the estimate:

ℙ(max_{h∈[0,1]} X̃_h > (1+ε)γ^⋆ loglog T) ≪ (loglog T) · (log T)^α · e^{−((1+ε)α loglog T)²/(α loglog T)} · (log T)^{1−α} · e^{−((1+ε)(1−α) loglog T)²/((1−α) loglog T)} ≪ (loglog T) · (log T)^{1−(1+ε)²} = o(1).

This concludes the proof of the lemma.

Consider, for 0 < α ≤ 1 and |u| < 1, the (normalized) log-measure of γ-high points

ℰ_{α,u}(γ;T) = (1/loglog T) log Leb{h∈[0,1] : X̃_h > γ loglog T}, 0 < γ < γ^⋆.

The limit of these quantities in probability can be computed following <cit.>.

Lemma. The limit ℰ_{α,u}(γ) = lim_{T→∞} ℰ_{α,u}(γ;T) exists in probability. We have, for u < 0,

ℰ_{α,u}(γ) = −γ²/((1+u)²α + (1−α)),

and, for u ≥ 0,

ℰ_{α,u}(γ) = −γ²/((1+u)²α + (1−α)) if γ < γ_c, −α − (γ − (1+u)α)²/(1−α) if γ ≥ γ_c,

for γ_c = ((1+u)²α + (1−α))/(1+u).

The dichotomy in the log-measure is due to the fact that, for h with values beyond γ_c loglog T, the intermediate value X_h(α) is restricted by the maximal level α loglog T. More precisely, consider

ℳ_T = Leb{h∈[0,1] : X̃_h > γ loglog T},
ℳ'_T = Leb{h∈[0,1] : (1+u)X_h(α) ≥ λ loglog T},
ℳ''_T = Leb{h∈[0,1] : (1+u)X_h(α) ≥ λ loglog T, X_h(α,1) ≥ (γ−λ) loglog T}.

Clearly, we must have ℳ''_T ≤ ℳ_T. It turns out that ℳ''_T and ℳ_T are comparable for an optimal choice of λ given, when u < 0, by

λ^⋆ = γ(1+u)²α/((1+u)²α + 1 − α), γ < γ^⋆,

and, when u > 0, by

λ^⋆ = γ(1+u)²α/((1+u)²α + 1 − α) if 0 < γ < γ_c, (1+u)α if γ_c ≤ γ < γ^⋆.

Remark. One can see this at a heuristic level by considering first moments. Since the maximum of X̃_h is well approximated by the maximum over lattice points spaced (log T)^{−1} apart, there should be γ-high points only if (log T)·ℳ''_T ≥ 1. Moreover, if ℳ'_T = 0, then ℳ''_T = 0; and since the maximum of X_h(α) is well approximated by the maximum over lattice points spaced (log T)^{−α} apart, there should be γ-high points only if (log T)^α·ℳ'_T ≥ 1. Since X_h(α) and X_h(α,1) are approximately Gaussian with variances (α/2) loglog T and ((1−α)/2) loglog T, the following should hold approximately:

log[(log T)^α·ℳ'_T]/loglog T = α − λ²/((1+u)²α) + o(1),
log[(log T)·ℳ''_T]/loglog T = 1 − λ²/((1+u)²α) − (γ−λ)²/(1−α) + o(1).

Together with the conditions (<ref>) and (<ref>), we obtain constraints on the value of λ:

α − λ²/((1+u)²α) ≥ 0, 1 − λ²/((1+u)²α) − (γ−λ)²/(1−α) ≥ 0.

By maximizing ℳ''_T under the constraints (<ref>) and (<ref>), one gets the values (<ref>) and (<ref>) for λ; the short computation below makes this optimization explicit.
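For the reader's convenience, here is the optimization written out (a routine calculus verification, not an additional argument from the paper). Maximizing Φ(λ) = 1 − λ²/((1+u)²α) − (γ−λ)²/(1−α) over λ, the critical point equation Φ'(λ) = 0 reads

−2λ/((1+u)²α) + 2(γ−λ)/(1−α) = 0, i.e. λ^⋆ = γ(1+u)²α/((1+u)²α + 1 − α).

The first constraint, α − λ²/((1+u)²α) ≥ 0, is equivalent to λ ≤ (1+u)α, and the critical point satisfies it if and only if γ ≤ ((1+u)²α + (1−α))/(1+u) = γ_c. For γ > γ_c (which can occur only when u > 0, since γ < γ^⋆), the constrained maximum is attained at the boundary λ^⋆ = (1+u)α, recovering the two cases of (<ref>) and (<ref>).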
With Remark <ref> in mind, we are ready to bound the log-measure.

Upper bound on the log-measure. For 0 < γ < γ^⋆, consider ℳ_T as in (<ref>). We need to show that, for ε > 0,

lim_{T→∞} ℙ(ℳ_T > (log T)^{ℰ_{α,u}(γ)+ε}) = 0.

We first prove the easier cases, where u ≥ 0 and γ < γ_c, as well as u ≤ 0. Let ε > 0, and write V = 1 − α + (1+u)²α for short. Observe that, by Markov's inequality and Fubini's theorem,

ℙ(ℳ_T > (log T)^{−γ²/V+ε}) ≤ (log T)^{γ²/V−ε} ∫_0^1 ℙ(X̃_h > γ loglog T) dh = (log T)^{γ²/V−ε} ℙ(X̃_h > γ loglog T),

where we used the fact that the variables X̃_h, h∈[0,1], are identically distributed. Since ℙ(X̃_h > γ loglog T) ≪ exp(−γ² loglog T/V) by equation (<ref>), the claim (<ref>) follows.

The case u > 0, γ ≥ γ_c is more delicate, as we need to control the values at scale α. For ε' > 0 to be fixed later, note that the same argument as for equation (<ref>) gives

ℙ(Leb{h∈[0,1] : X_h(α) > (α+ε') loglog T} > 0) ≤ ℙ(∃ h∈[0,1] : X_h(α) > (α+ε') loglog T) → 0.

The same holds, by symmetry, for −X_h(α). This implies

ℙ(ℳ_T > (log T)^{ℰ_{α,u}(γ)+ε}) = ℙ(Leb{h : X̃_h > γ loglog T, |X_h(α)| ≤ (α+ε') loglog T} > (log T)^{ℰ_{α,u}(γ)+ε}) + o(1).

It remains to prove that the first term is o(1). As in the proof of Lemma <ref>, we consider the partition of [0,1] by the intervals J_j, j ≤ (log T)^α, and the sub-partition I_jk, k ≤ (log T)^{1−α}. We also divide the interval [−(α+ε') loglog T, (α+ε') loglog T] into intervals [q, q+1]. Then, by Markov's inequality and the additivity of the Lebesgue measure,

ℙ(Leb{h : X̃_h > γ loglog T, |X_h(α)| ≤ (α+ε') loglog T} > (log T)^{ℰ_{α,u}(γ)+ε}) ≤ (log T)^{−ℰ_{α,u}(γ)−ε} ∑_{j,k} ∑_{|q| ≤ (α+ε') loglog T} 𝔼[Leb{h∈I_jk : X̃_h > γ loglog T, X_h(α) ∈ [q, q+1]}] ≤ (log T)^{−ℰ_{α,u}(γ)−ε} ∑_{j,k} ∑_{|q| ≤ (α+ε') loglog T} (log T)^{−1} ℙ(X_h(α,1) > γ loglog T − (1+u)(q+1), X_h(α) ≥ q).

The last line follows from Fubini's theorem and the fact that Leb(I_jk) = (log T)^{−1}. The probabilities can be bounded by the Gaussian bound (<ref>):

ℙ(X_h(α,1) > γ loglog T − (1+u)(q+1), X_h(α) ≥ q) ≪ exp(−q²/(α loglog T)) · exp(−(γ loglog T − (1+u)(q+1))²/((1−α) loglog T)).

It is easily checked that the expression is maximized at q > (α+ε') loglog T for ε' small enough. Moreover, at the optimal q = (α+ε') loglog T in the considered range, the probability equals (1+o(1))(log T)^{ℰ_{α,u}(γ)}. Using this observation to bound the probability for each q in (<ref>), we get

ℙ(Leb{h : X̃_h > γ loglog T, |X_h(α)| ≤ (α+ε') loglog T} > (log T)^{ℰ_{α,u}(γ)+ε}) ≪ (log T)^{−ε} loglog T = o(1).

This finishes the proof of the upper bound.

Lower bound on the log-measure. For ε > 0, the goal is to show

ℙ(ℳ_T > (log T)^{ℰ_{α,u}(γ)−ε}) → 1 as T→∞.

This is done using the Paley-Zygmund inequality, which states that, for a random variable ℳ ≥ 0 and 0 ≤ η_T ≤ 1,

ℙ(ℳ > η_T 𝔼[ℳ]) ≥ (1−η_T)² 𝔼[ℳ]²/𝔼[ℳ²].

We will have η_T → 0, so the main task will be to show

𝔼[ℳ²] = (1+o(1)) 𝔼[ℳ]².

This cannot be achieved with ℳ = ℳ_T, because of the correlations in X̃_h. To overcome this problem, we define a modified version of ℳ_T by coarse-graining the field, as described in <cit.>. For K ∈ ℕ (which will eventually depend on ε), assume without loss of generality that {0, 1/K, 2/K, …, (K−1)/K, 1} is a partition of [0,1] that is a refinement of {0, α, 1}. Consider λ < λ^⋆ as defined in (<ref>) and (<ref>), and δ > 0 (which will depend on ε). Define the events, for the K-level coarse increments:

𝒥_h(m) = {(1+u)X_h((m−1)/K, m/K) ≥ (1+δ)λ loglog T/(αK)} if m = 1,…,αK, {X_h((m−1)/K, m/K) ≥ (1+δ)(γ−λ) loglog T/((1−α)K)} if m = αK+1,…,K.

Moreover, define the sets

𝒜 = {h : 𝒥_h(m) occurs ∀ m = 2,…,K}, ℬ = {h : (1+u)X_h(1/K) ≤ −(δ/2) loglog T}.

Note that, if h ∈ 𝒜, by adding up the inequalities in 𝒥_h(m), we have, for K large enough,

X̃_h − (1+u)X_h(1/K) ≥ (1+δ)(γ − λ/(αK)) loglog T ≥ (γ + δ/2) loglog T.

Therefore this implies the inclusion

𝒜 ⊂ {h∈[0,1] : X̃_h ≥ γ loglog T} ∪ ℬ,

so that ℳ_T ≥ Leb(𝒜) − Leb(ℬ). Equation (<ref>) and Fubini's theorem show that 𝔼[Leb(ℬ)] ≪ (log T)^{−δ²K/(4(1+u)²)}.
For K large enough, Markov's inequality then implies(Leb{h∈ [0,1]:(1+u)X_h( 1 /K)≤-δ/2loglog T}≤ (log T)^ℰ_α,u(γ)-ε)→ 1.The proof of (<ref>) is then reduced to show(Leb(𝒜)>2(log T)^ℰ_α,u(γ)-ε) = (Leb(𝒜)>η_T[Leb(𝒜)]) → 1 ,where η_T is defined by 2(log T)^ℰ_α,u(γ)-ε=η_T[Leb(𝒜)].Following (<ref>), we first show η_T→0. By (<ref>), Fubini's theorem,and independence,[Leb(𝒜)]=∫_0^1∏ _m=2^K( 𝒥_h(m) )h =∏_m=2^K( 𝒥_h(m) ),since the X_h's are identically distributed. By Proposition <ref>, (𝒥_h(m)) ≫(log T)^- (1+δ)^2λ^2 /α^2 K(1+u)^2 +(1) when m=1,… ,α K, (log T)^-(1+δ)^2 (γ-λ)^2/(1-α )^2K+(1) when m=α K+1,…, K. Thus, by (<ref>) and (<ref>), we have[ Leb(𝒜)] ≫(log T)^ -(1+δ)^2λ^2 /α (1+u)^2-(1+δ)^2 (γ-λ)^2/(1-α )(log T)^(1+δ)^2λ^2 /α^2 K(1+u)^2+(1) . We can take λ close enough to λ^⋆, δ small enough, and K large enough so that[ Leb(𝒜)] ≫(log T)^ - λ^⋆^2 /α (1+u)^2- (γ-λ^⋆)^2/(1-α )-/2=(log T)^ℰ_α,u(γ)-/2 ,where we replace the value of λ^⋆ of (<ref>) and (<ref>). This shows that η_T→ 0. Observe that, we also have the reverse inequality [ Leb(𝒜)] ≪(log T)^ℰ_α,u(γ)+/2 ,using (<ref>) instead of Proposition <ref>.It remains to show (<ref>).By independence of increments and Fubini's theorem, we have[Leb(𝒜)^2]=∫ _0^1∫ _0^1∏ _m=2^K(𝒥_h(m)∩𝒥_h'(m))h h' .We split the integral into four integrals: I for |h-h'|>(log T)^-1/2K, II for (log T)^-1/K≤ |h-h'|≤(log T)^-1/2K, III for(log T)^-r/K<|h-h'|≤(log T)^-(r-1)/K, r=2,… K, and IV for |h-h'|≤ (log T)^-1. We will showthat I=[Leb(𝒜)]^2( 1+(1) ) and the others ([Leb(𝒜) ]^2).* For II, note that Leb^× 2{(h,h'):(log T)^-1/K≤ |h-h'|≤(log T)^-1/2K}≪(log T)^-1/2K. Moreover, by (<ref>) and Proposition <ref>,we have (𝒥_h(m)∩𝒥_h'(m))≪ (𝒥_h(m))^2. This implies II= ([Leb(𝒜) ]^2). * For IV, note that clearly (𝒥_h(m)∩𝒥_h'(m))≤ (𝒥_h(m)). Thus, IV≪ (log T)^-1[Leb(𝒜)]. Using (<ref>) and the fact that 1+ ℰ_α,u(γ)>0 for γ<γ^⋆, one gets IV= ([Leb(𝒜) ]^2). * For I, note that Leb^× 2{(h,h'): |h-h'|>(log T)^-1/2K}=1+(1).Moreover, by Proposition <ref>,(𝒥_h(m)∩𝒥_h'(m))=(1+(1)) (𝒥_h(m))^2. This implies I= (1+(1))[Leb(𝒜) ]^2). * For III, the integral is a sum over r=2,…,K of integrals of pairs with (log T)^-r/K<|h-h'|≤(log T)^-(r-1)/K. The measure of this set is ≪ (log T)^-(r-1)/K. For fix r, the integrand is∏_m=2^K (𝒥_h(m)∩𝒥_h'(m))≤∏_m=2^r (𝒥_h(m))∏_m=r+1^K (𝒥_h(m)∩𝒥_h'(m))≪∏_m=2^r (𝒥_h(m))∏_m=r+1^K (𝒥_h(m))^2 ,where the last line follows by (<ref>) and Proposition <ref>. Putting all this together and factoring the square of the one-point probabilities, one getsIII≪[Leb(𝒜) ]^2 ∑_r=2^K(log T)^-(r-1)/K∏_m=2^r ((𝒥_h(m)))^-1 .We show ∏_m=2^r ((𝒥_h(m)))^-1<(log T)^(r-1)/K uniformly in T.This finishes the proof since the sum is then the tail of a convergent geometric series. In the case u<0, since λ<λ^⋆, and (1+δ)γ<γ^⋆ for δ small, we have by (<ref>),(𝒥_h(m))^-1≪ (log T)^λ^⋆^2/α^2K(1+u)^2 if m≤α K(log T)^(γ^⋆-λ^⋆)^2/(1-α)^2K if m=α K+1,…,K.By the definition of λ^⋆ and γ^⋆=V^1/2, this implies∏_m=2^r ((𝒥_h(m)))^-1≪ (log T)^(1+u)^2/Vr-1/K if r≤α K(log T)^α(1+u)^2/V+1/Vr-α K/K if r=α K+1,…,K. Since u<0, it is straightforward to check that the exponent is smaller than r-1/K as claimed. The case u≥ 0 is done similarly by splitting into two cases γ_c≤γ<γ^⋆ and 0<γ<γ_c. We omit the proof for conciseness.We now have all the results to finish the proof of Proposition <ref> using Laplace's method.We first prove the limit in probability. 
The convergence in L^1, and in particular the convergence of the expectation, will be a consequence of Lemma <ref> below.For fixed >0 and M∈, consider γ_j=j(1+)/Mγ^⋆ 0≤ j≤ M ,and the eventA=⋂_j=1^M {(log T)^ℰ_α,u(γ_j)-≤ Leb{h: X_h> γ_j loglog T}≤ (log T)^ℰ_α,u(γ_j)+} ⋂{ Leb{h: X_h> γ_Mloglog T}=0} .By Lemma <ref> and Lemma <ref>, we have that (A^c)→ 0 as T→∞. It remains to prove that the free energy is close to the claimed expression on the event A.On one hand, the following upper bound holds on A:∫_0^1 expβX_h h ≤∑_j=1^M ∫_0^1 expβX_h _{(log T)^γ_j-1<e^X_h≤(log T)^γ_j}h +∫_0^1 expβX_h _{e^X_h<1}h≤∑_j=1^M (log T)^βγ_j+ℰ_α,u(γ_j-1)++1 .On the other hand, we have the lower bound∫_0^1 expβX_h h ≥∑_j=1^M ∫_0^1 expβX_h _{(log T)^γ_j-1<e^X_h≤(log T)^γ_j}h≥∑_j=1^M (log T)^βγ_j-1+ℰ_α,u(γ_j)- .Altogether, this impliesmax_1≤ j≤ M{βγ_j-1+ℰ_α,u(γ_j)-}≤log∫_0^1 expβX_h h/loglog T≤max_1≤ j≤ M{βγ_j+ℰ_α,u(γ_j-1)+}+(1) .In particular, by continuity of ℰ_α,u(γ), we can pick M large enough depending onand T large enough so that|log∫_0^1 expβX_h h/loglog T -max_γ∈ [0,γ^⋆]{βγ+ℰ_α,u(γ)}| ≤ 2 .As mentioned above, since (A^c)→ 0 as T→∞, this proves the convergence in probabilitylim_T→∞log∫_0^1 expβX_h h/loglog T= max_γ∈ [0,γ^⋆]{βγ+ℰ_α,u(γ)} .It remains to check that the right side has the desired form. Let V=(1+u)^2 α+(1-α).If u<0, the optimal γ is β V/2 whenever β V/2<γ^⋆, i.e., β<2/V^1/2. If β≥2/V^1/2, then the optimal γ is simply γ^⋆. Therefore, we havemax_γ∈ [0,γ^⋆]{βγ+ℰ_α,u(γ)}=β^2V/4 if β<2/V^1/2 β V^1/2-1 if β≥2/V^1/2 . If u≥ 0, the optimal γ is β V/2 if γ <γ_c, i.e., β <2/(1+u). If γ>γ_c, then the optimal γ is (1+u)α+β(1-α)/2 until it equals γ^⋆. This happens at β≥ 2. Putting all this together, we obtain that max_γ∈ [0,γ^⋆]{βγ+ℰ_α,u(γ)}= β^2((1+u)^2 α+(1-α))/4 if β<2/(1+u) β (1+u)α -α + β^2(1-α)/4 if 2/(1+u)≤β<2 β((1+u)α+(1-α))-1 if β≥2 .This corresponds to the expression in Proposition <ref> expressed in terms of (<ref>). The sequence of random variables (1/loglog Tlog∫_0^1 exp(β (X_h+uX_h(α))h)_T>1is uniformly integrable. In particular, the convergence in probability of the sequence is equivalent to the convergence in L^1. Write for shortf_T =(loglog T)^-1log∫_0^1 expβX_hh .We need to show that for any ε>0, there exists C large enough so that uniformly in T,[|f_T| _{|f_T)|>C}]<ε .It is easy to check that[|f_T| _{|f_T|>C}]=∫_C^∞(f_T>y) y + C(f_T>C) + ∫_-∞^-C(f_T<y) y + C(f_T<-C) .Therefore, it remains to get a good control on the right and left tail of f_T. For the right tail, observe that by Markov's inequality(f_T>y) =(∫expβX_hh>(log T)^y) ≤ (log T)^-y [∫expβX_hh] .Using Proposition <ref> and Fubini's theorem, we get(f_T>y) ≪ (log T)^((1+u)^2α+(1-α))β^2/4-y .This implies∫_C^∞(f_T>y) y + C(f_T>C)≪(log T)^((1+u)^2α+(1-α))β^2/4-C/loglog T+C(log T)^((1+u)^2α+(1-α))β^2/4-C .It suffices to take C>((1+u)^2α+(1-α))β^2/4 for this to be uniformly small in T. The left tail is bounded the same way after noticing that by Markov's and Jensen's inequalities,(f_T<-y) =(∫expβX_hh<(log T)^-y)≤ (log T)^-y [(∫expβX_hh)^-1]≤(log T)^-y [∫exp-βX_hh]≪ (log T)^((1+u)^2α+(1-α))β^2/4-y .These estimates imply that [|f_T| _{|f_T)|>C}] can be made arbitrarily small in (<ref>) by taking C larger than ((1+u)^2α+(1-α))β^2/4. plain
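As a quick arithmetic check (ours, not part of the source argument), the three branches of the piecewise expression for max_{γ∈[0,γ^⋆]}{βγ+ℰ_{α,u}(γ)} in the case u≥0 match continuously at the transition points. At β = 2/(1+u),

β²V/4 = α + (1−α)/(1+u)²  and  β(1+u)α − α + β²(1−α)/4 = 2α − α + (1−α)/(1+u)² = α + (1−α)/(1+u)²,

while at β = 2,

β(1+u)α − α + β²(1−α)/4 = 2(1+u)α + 1 − 2α  and  β((1+u)α+(1−α)) − 1 = 2(1+u)α + 2(1−α) − 1 = 2(1+u)α + 1 − 2α.

Differentiating in β, the first two branches both give slope (1+u)α + (1−α)/(1+u) at β = 2/(1+u), and the last two both give slope (1+u)α + (1−α) at β = 2, so the free energy is in fact continuously differentiable across both transitions.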
[email protected] [email protected]^1Amherst Center for Fundamental Interactions, Department of Physics, University of Massachusetts Amherst, MA 01003, USA ^2Kellogg Radiation Laboratory, California Institute of Technology, Pasadena, CA 91125 USAWe derive present constraints on, and prospective sensitivity to, the electric dipole moment (EDM) of the top quark (d_t) implied by searches for the EDMs of the electron and nucleons. Above the electroweak scale v, the d_t arises from two gauge invariant operators generated at a scale Λ≫ v that also mix with the light fermion EDMs under renormalization group evolution at two-loop order. Bounds on the EDMs of first generation fermion systems thus imply bounds on |d_t|. Working in the leading log-squared approximation, we find that the present upper bound on |d_t| is roughly 10^-19 e cm for Λ = 1 TeV, except in regions of finely tuned cancellations that allow for |d_t| to be up to fifty times larger. Future d_e and d_n probes may yield an order of magnitude increase in d_t sensitivity, while inclusion of a prospective proton EDM search may lead to an additional increase in reach.Top Down Electroweak Dipole Operators Michael Ramsey-Musolf^1,2 December 30, 2023 ===================================== § INTRODUCTION The search for physics beyond the Standard Model (BSM) lies at the forefront of both high- and low-energy physics.The properties of the top quark constitute a particularly interesting meeting ground for the two regimes. Theoretically, top quarks may provide a unique window into BSM physics, given that the top Yukawa coupling is large compared to all other Standard Model (SM) fermions. Experimentally, top quarks can be copiously produced in high energy proton-proton collisions, while their indirect effects – generated via quantum loops – can be pronounced. Indeed, the breaking of custodial SU(2) symmetry by the top quark-bottom quark mass splitting has a significant impact on the interpretation of electroweak precision tests at the loop level. This sensitivity provided an early handle on the value of the top quark mass and, after the discovery of the top quark, an important test of the self-consistency of the SM at the level of quantum corrections.The CP properties of top quark interactions is a topic of on-going interest. In the context of electroweak baryogenesis (EWBG) <cit.>, CP-violating (CPV) interactions of the top quarkwith an extended scalar sector can yield the observed cosmic baryon asymmetry <cit.>.The presence of BSM CPV in the top quark sector may also appear in the guise of a top electric dipole moments (EDM) and chromo-electric dipole moment (CEDM), two of a number of possible higher dimension top quark operators. Since the top (C)EDM is chirality changing, it can be significantly enhanced compared to light fermion (C)EDMs by the large top Yukawa coupling.While direct collider probes of the (C)EDM havebeen studied extensively <cit.>, a complementary way to access the top EDM (d_t) and CEDM (d̃_t) is throughtheir indirect effects, such as the resulting, radiatively-induced light fermion EDMs. This possibility has been explored in several studies <cit.>. The most powerful limit on d_t appears to result from the limit on the EDM of the electron |d_e|<8.7× 10^-29 e  cm (90%  C.L.) <cit.> (see also the recent result using HfF^+, |d_e|<1.3× 10^-28 e  cm (90%  C.L.)<cit.>), implying |d_t|<5.0× 10^-20 e  cm (90%  C.L.) <cit.>.In this study, we focus on d_t. 
If it is generated by BSM physics at a scale Λ that lies well above the electroweak scale v = 246 GeV, then it is likely that two dimension-six CPV dipole operators emerge, coupling respectively to the U(1)_Y and SU(2)_L gauge bosons. We henceforth denote these operators as O_tB and O_tW, respectively. Denoting their coefficients as C_tB(W)/Λ², we note that the presence of CPV implies that the dimensionless Wilson coefficients C_tB(W) are, in general, complex. After electroweak symmetry breaking (EWSB), one linear combination yields d_t at tree-level. The operators O_tB and O_tW will also radiatively generate all other light fermion EDMs at two-loop order. Bounds on d_e as well as on the neutron EDM, d_n, then yield (in principle) complementary constraints on C_tB(W), with corresponding implications for d_t. In what follows, we perform an explicit two-loop computation of the light fermion EDMs induced by O_tB(W), retaining the leading ln²(Λ/v) contributions. After translating the light quark EDMs into d_n, we derive constraints on the C_tB(W)/Λ², along with the corresponding implications for d_t, using the present neutron and electron EDM bounds. We make no a priori assumptions about the relationship between C_tB and C_tW at the scale Λ, endeavoring to be as model-independent as possible. In these respects, our analysis complements the earlier studies in Refs. <cit.>. In this context, we also find that there exist regions where cancellations between these two operators can considerably weaken the generic constraints, albeit with some degree of fine-tuning. Looking ahead, we illustrate the potential reach of next-generation electron and nucleon EDM searches.

§ EFFECTIVE OPERATORS

To set the conventions for our analysis, we start with the CPV effective Lagrangian generated by BSM physics at the scale Λ <cit.>:

L_eff = -(1/Λ²) ∑_{f=e,u,d,t} ( (g_1/√2) C_fB O_fB + (g_2/√2) C_fW O_fW + h.c. )
        + (1/Λ²) ∑_{X=B,W} C_HX̃ O_HX̃
        + (1/Λ²) ∑_{F=L,Q; f=e,d,t} ( C^(i)_FfF'f' O^(i)_FfF'f' + h.c. ),

where the first line indicates the dipole operators

O_eB = L̅ σ^μν e_R H B_μν,  O_eW = L̅ σ^μν e_R τ^A H W^A_μν,
O_tB = Q̅ σ^μν t_R H̃ B_μν,  O_tW = Q̅ σ^μν t_R τ^A H̃ W^A_μν.

The second and third lines represent gauge-Higgs and four-fermion operators

O_HB̃ = g²_1 H^†H B̃_μν B^μν,  O_HW̃ = g²_2 H^†H W̃^A_μν W^{Aμν},  O_HW̃B = g_1 g_2 H^†τ^A H W̃^A_μν B^μν,

and

O^(3)_ℓeqt = (L̅^a σ^μν e_R) ϵ_ab (Q̅^b σ_μν t_R),  O^(1)_qtqd = (Q̅^a t_R) ϵ_ab (Q̅^b d_R),  O^(8)_qtqd = (Q̅^a τ^A t_R) ϵ_ab (Q̅^b τ^A d_R).

Here, L and Q are the lepton and quark doublets; e_R (t_R) is the right-handed electron (top quark); τ^A are the Pauli matrices; and H is the Higgs doublet, with H̃ = iτ²H^*. B_μν and W^A_μν are the U(1)_Y and SU(2)_L field strengths, respectively, and g_1 and g_2 represent their gauge couplings; X̃ is defined as ϵ_μναβ X^αβ; a and b are SU(2)_L indices. The dipole operators for the up (down) quark, O_uB,uW (O_dB,dW), have the same structure as O_tB,tW (O_eB,eW). For a listing of the complete set of dimension-six CPV operators, see, e.g., <cit.>.

After EWSB, the dipole operators in Eq. (<ref>) produce the EDMs

L_eff ∋ -(i/2) ∑_{f=e,u,d,t} d_f f̅ σ^μν γ_5 f F_μν,

with F_μν being the photon field strength tensor. The coupling d_f is related to the Wilson coefficients of the operators by

d_e(d) = (ev/Λ²) { Im(C_e(d)B) − Im(C_e(d)W) },
d_t(u) = (ev/Λ²) { Im(C_t(u)B) + Im(C_t(u)W) }.

The opposite relative sign between C_fB and C_fW for up- and down-type fermions is due to their isospin projection quantum numbers.
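These tree-level relations are simple to evaluate numerically. The following sketch is our own illustration, assuming only the expressions above and ħc ≃ 0.1973 GeV fm; it gives d_t per unit Im(C_tB)+Im(C_tW) for the two benchmark scales used later, reproducing the ≈8×10^-4 e fm prefactor that the next paragraph quotes, with slightly different rounding, as 7.8×10^-4 e fm:

import math

hbarc = 0.1973        # hbar*c in GeV*fm
v = 246.0             # electroweak vev in GeV

def dt_per_ImC(Lam_GeV):
    """Tree-level d_t per unit [Im(C_tB)+Im(C_tW)], in units of e*fm."""
    return (hbarc / v) * (v / Lam_GeV)**2

for Lam in (1e3, 1e4):                     # Lambda = 1 TeV and 10 TeV
    d = dt_per_ImC(Lam)                    # note: 1 fm = 1e-13 cm
    print(f"Lambda = {Lam/1e3:>4.0f} TeV: ev/Lambda^2 = {d:.2e} e fm = {d*1e-13:.2e} e cm")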
To facilitate comparison with the experimental EDM limits, it is useful to express the factor ev/Λ² in units of e fm [Since our definitions of the dipole operators are accompanied by a factor of 1/√2, the coefficient of ev/Λ² is smaller than in <cit.>.]:

ev/Λ² = (e/v)(v/Λ)² ≃ (7.8×10^-4 e fm)(v/Λ)².

In addition to the bounds on |d_e| quoted above [The limit is obtained by assuming that the ThO EDM does not receive a contribution from semileptonic four-fermion interactions.], we consider the constraints implied by the light-quark contributions to d_n [Although the EDM of the strange quark and the chromo-EDMs also contribute to the neutron EDM, we do not include them here.], whose experimental limit is |d_n| < 3.0×10^-26 e cm (90% C.L.) <cit.>. As we discuss below, the d_e contributions from O_tB and O_tW may cancel in some finely-tuned portions of parameter space. Inclusion of the d_n constraints may provide a complementary probe of this "cancellation region". Outside of this region, present EDM limits imply an upper bound |d_t| ≲ 10^-19 e cm, depending on the value of Λ. Looking to the future, next-generation EDM searches may reach the levels of sensitivity |d_e| < 1.0×10^-29 e cm and |d_n| < 3.0×10^-28 e cm <cit.>, implying an order of magnitude increase in the sensitivity to d_t. In addition, efforts are underway to develop a storage ring proton EDM search with sensitivity 10^-29 e cm <cit.>. For the scenario considered here, the constraints from diamagnetic atom EDM searches, such as that of the ^199Hg atom <cit.>, can be comparable to those from d_n. Although the latest ^199Hg result yields an upper bound on |d_n| that is roughly two times stronger than the direct limit, we expect the latter to become considerably more stringent with the next generation of experiments. Consequently, we use the direct d_n bounds in what follows.

§ LOOP CALCULATIONS

The existence of the top quark dipole operators in Eq. (<ref>) at a renormalization scale μ = Λ will lead to non-vanishing electron and light-quark dipole operators through the two-loop Barr-Zee diagrams of Fig. <ref>. This effect corresponds to electroweak operator mixing in the renormalization group evolution (RGE) from Λ to v, thereby relating the Wilson coefficients of the electron and light quark dipole operators at the EW scale to C_tB(Λ) and C_tW(Λ). Below the scale v, we integrate out the heavy SM degrees of freedom (t, W, Z, and h), and the dominant contributions when running to the low-energy scale relevant to experiment involve SU(3)_C interactions. The upper two diagrams induce the up quark EDM; the lower two diagrams yield the electron and down quark EDMs. This assignment can be understood by considering which Higgs field is chosen as an external particle. Each diagram has two opposite fermion flows (corresponding to distinct Wick contractions), as well as topologies involving crossing of the scalar and gauge boson lines. In addition to the overall logarithmic divergence associated with these diagrams, logarithmically divergent one-loop sub-graphs associated with the upper and lower loops in Fig. <ref> correspond to mixing between O_tB,W and O_HB̃,W̃,W̃B and O^(3,1,8)_ℓeqt,qtqd, respectively. Consequently, one must include the counterterms associated with these operators, as shown in Fig. <ref>. We note that the right diagram in Fig. <ref> results only from the subgraph in the lower right diagram of Fig. <ref>, because only this subgraph has a divergence.
We perform the computation using dimensional regularization in d = 4-ϵ dimensions and renormalization in the minimal subtraction (MS) scheme. For the purposes of this analysis, wherein we seek order-of-magnitude constraints on C_tB(W)(Λ), it is useful to observe that the EW running yields an enhancement factor of ln²(Λ/v), as well as sub-dominant ln(Λ/v) terms. The anomalous dimensions associated with the latter are renormalization scheme-dependent and introduce an additional dependence on the Wilson coefficients C_HW̃, etc. Here, we retain only the leading ln² contributions, deferring a treatment of the sub-leading log terms to a future publication <cit.>. (For analogous ln² contributions in other contexts, see, e.g., Refs. <cit.>.)

In the leading ln² approximation, the resulting Wilson coefficients for the light fermion (f = e,u,d) dipole operators at the scale v are

C_fB(v) = -(1/2)(A_f C_tB + B_f C_tW) ln²(Λ/v),
C_fW(v) = -(1/2)(D_f C_tB + E_f C_tW) ln²(Λ/v),

where we assume that C_fB,fW(Λ) = 0. The coefficients A_f, B_f, D_f and E_f for f = e and d are given by

A_f = Y_f[-6(Y_F+Y_f)(Y_Q+Y_t) g²_1 + (3/2) g²_2],
B_f = Y_f · 6(Y_Q+Y_t) g²_2,
D_f = Y_f · 2(Y_F+Y_f) g²_1,
E_f = Y_f[2(Y_F+Y_f)(Y_Q+Y_t) g²_1 − (5/2) g²_2],

where F = L or Q for f = e or d, respectively. Those of the up quark are given by

A_u = -Y_u[4(Y_Q+Y_u)(Y_Q+Y_t) g²_1 + (3/2) g²_2],
B_u = -Y_u · 3(Y_Q+Y_t) g²_2,
D_u = -Y_u (Y_Q+Y_t) g²_1,
E_u = -Y_u[2(Y_Q+Y_u)(Y_Q+Y_t) g²_1 + g²_2],

where Y_f = N_C y_f y_t/(4π)^4 with N_C = 3, and the hypercharges are Y_L = -1/2, Y_e = -1, Y_Q = 1/6, Y_t(u) = 2/3 and Y_d = -1/3. Y_e is roughly an order of magnitude smaller than Y_u,d due to the Yukawa coupling. Using these results, it is straightforward to obtain the light fermion EDMs as defined in Eq. (<ref>):

d_e(d) = -(e/2v)(v/Λ)² ln²(Λ/v) × [(A_e(d) − D_e(d)) Im(C_tB) + (B_e(d) − E_e(d)) Im(C_tW)],
d_u = -(e/2v)(v/Λ)² ln²(Λ/v) × [(A_u + D_u) Im(C_tB) + (B_u + E_u) Im(C_tW)].

In general, the d_f depend more strongly on Im(C_tW) than on Im(C_tB), a feature due in part to the dependence on g_2: the g²_2 dependence of the Im(C_tB) contribution comes only from A_f, while both B_f and E_f contain g²_2 contributions. The dependence on Λ enters through the (v/Λ)² and ln²(Λ/v) factors. When translating the limits on d_e(n) into bounds on |d_t|, the (v/Λ)² dependence drops out, as it is common to all EDMs. To assess the impact of the remaining logarithmic dependence, in our numerical analyses we consider two benchmark choices: Λ = 1 and 10 TeV. The ratio ln²((1 TeV)/v)/ln²((10 TeV)/v) is about 0.14. For the light quark EDMs, we take into account the QCD contributions to their evolution from the EW scale to the low-energy scale <cit.>. As clearly discussed in <cit.>, this effect suppresses the dipole operators at low energy. We choose the low-energy scale Λ_had = 2 GeV in order to match onto the lattice QCD computation of the resulting neutron EDM given in <cit.>. We obtain d_q(Λ_had) = 0.85 d_q(v).

§ RESULTS

It is useful to consider the constraints on (v/Λ)² Im(C_tB(W)), since the EDM definitions absorb the leading (v/Λ)² factor, as noted above. The present and prospective bounds are shown in Figs. <ref>-<ref>. In addition to the two benchmark choices for Λ, we also consider two cases, corresponding to Im(C_tB) and Im(C_tW) having the same (positive) sign or opposite signs.
The latter exhibits the possibility of finely-tuned cancellations.

Figure <ref> shows the present constraints for the same-sign case for the two benchmark choices of Λ. The blue and green shaded regions are excluded by the limits on d_e and d_n, respectively. The black contours represent values of constant top quark EDM. For Λ = 1 (10) TeV, we find that |d_t| ≲ 1.3×10^-19 (1.8×10^-20) e cm in the limit Im(C_tW) = 0. Note that the maximum value for Λ = 10 TeV is roughly 0.14 times smaller than for Λ = 1 TeV, as expected from the ln² dependence. We observe that our upper bound for Λ = 1 TeV is somewhat larger than that obtained by the authors of Ref. <cit.>, who assumed in their numerical analysis that only the one linear combination of Im(C_tB) and Im(C_tW) corresponding to a non-vanishing d_t exists at the scale Λ. Although the computation in Refs. <cit.> was performed in the broken-phase basis, we speculate that the difference in the limits on d_t results primarily from the assumptions about the Im(C_tB(W))(Λ).

The prospective impact of future EDM searches is illustrated in Fig. <ref>, where we assume 90% C.L. limits of |d_n| = 3.0×10^-28 e cm and |d_e| = 1.0×10^-29 e cm. For the same-sign case, we see that the prospective constraint from d_e would still be stronger than that from d_n. Naïvely, one would expect the impact of future experiments with these sensitivities to be comparable, since the light fermion EDMs scale linearly with the fermion masses and the ratio of the light quark and electron masses is roughly a factor of ten. The somewhat stronger d_e sensitivity results from a factor of 3 difference in the future sensitivities and the suppression of the light quark EDMs due to the QCD evolution from the weak to hadronic scales. The resulting prospective bound on d_t for Λ = 1 (10) TeV is |d_t| ≲ 1.5×10^-20 (2.1×10^-21) e cm. We also include the possibility of a future proton EDM search, with sensitivity |d_p| = 1.0×10^-29 e cm, indicated by the orange contour. Should a search with this sensitivity be realized, a top quark EDM of order 10^-20 (10^-21) e cm for Λ = 1 (10) TeV could be probed.

Next, we consider the opposite-sign case, with present and prospective constraints indicated in Figs. <ref> and <ref>, respectively. Here, the situation is more subtle than for the same-sign case, as there exist regions where cancellations between Im(C_tB) and Im(C_tW) can lead to the absence of any constraint from d_e. The present d_n bounds are not yet sufficiently strong to probe this "cancellation region" for d_t ≲ 10^-18 (10^-19) e cm for Λ = 1 (10) TeV. Although the existence of this loophole admittedly requires a degree of fine tuning, a similar possibility of canceling contributions has been noted elsewhere in the case of the minimal supersymmetric SM and proposed as a possible solution to the "SUSY CP problem" <cit.>. Outside of this region, the present upper bound on d_t is the same as for the same-sign case. As seen in Fig. <ref>, the future bound on d_n closes the loophole and yields |d_t| ≲ 1.0×10^-19 (10^-20) e cm for Λ = 1 (10) TeV. On the other hand, the electron EDM at the future sensitivity plays a complementary role, covering the region where |d_n| = 0. The prospective future proton EDM experiment gives a sensitivity to d_t of a similar order of magnitude, perhaps increasing the reach by a factor of two. We summarize the present and future limits on d_t in Tables <ref> and <ref>.
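The headline numbers of this section can be reproduced at the quoted precision from the leading-ln² formulas of the previous section. The following sketch is our own illustration, not the authors' code; the weak-scale couplings g_1 ≈ 0.36, g_2 ≈ 0.65 and y_t ≈ 0.94 are assumed representative values rather than inputs specified in the paper. For Λ = 1 TeV and Im(C_tW) = 0, it returns |d_t| ≈ 1.3×10^-19 e cm, matching the bound quoted above:

import math

g1, g2, yt = 0.36, 0.65, 0.94                   # assumed weak-scale values
v, hbarc_cm = 246.0, 1.973e-14                  # GeV; GeV*cm
YL, Ye, YQ, Yt = -0.5, -1.0, 1.0/6.0, 2.0/3.0   # hypercharges from the text
ye = math.sqrt(2.0) * 0.000511 / v              # electron Yukawa coupling
Yfac = 3.0 * ye * yt / (4.0 * math.pi)**4       # Y_e = N_C y_e y_t/(4 pi)^4

def d_e(ImCtB, ImCtW, Lam):
    """Electron EDM (e cm) from the leading-ln^2 coefficients with F = L."""
    A = Yfac * (-6.0*(YL + Ye)*(YQ + Yt)*g1**2 + 1.5*g2**2)
    B = Yfac * 6.0*(YQ + Yt)*g2**2
    D = Yfac * 2.0*(YL + Ye)*g1**2
    E = Yfac * (2.0*(YL + Ye)*(YQ + Yt)*g1**2 - 2.5*g2**2)
    log2 = math.log(Lam / v)**2
    return -(hbarc_cm/(2.0*v)) * (v/Lam)**2 * log2 * ((A - D)*ImCtB + (B - E)*ImCtW)

Lam = 1000.0                                    # Lambda = 1 TeV
ImC_max = 8.7e-29 / abs(d_e(1.0, 0.0, Lam))     # present d_e limit, Im(C_tW) = 0
dt_max = (hbarc_cm / v) * (v/Lam)**2 * ImC_max  # tree-level d_t relation
print(f"Im(C_tB) < {ImC_max:.3f}  ->  |d_t| < {dt_max:.1e} e cm")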
§ CONCLUSION AND DISCUSSIONS

Due to its sizable Yukawa coupling, the top quark provides one of the most powerful windows into BSM physics. The top quark EDM is particularly interesting because it is sensitive to possible new sources of CPV and because one generally expects it to be enhanced relative to the light fermion EDMs by the ratio of the respective Yukawa couplings. Above the EW scale v, the top EDM originates from two gauge-invariant operators, O_tB and O_tW, that appear at the BSM scale Λ. These operators also induce light fermion EDMs at the two-loop level. Consequently, the stringent bounds on systems involving first generation fermion EDMs, including paramagnetic atoms and polar molecules, neutrons, and diamagnetic atoms, imply strong constraints on O_tB and O_tW. By combining the results from these systems, one obtains tight bounds on d_t. The prospects for obtaining even greater sensitivity with future EDM experiments are promising. The present constraints and prospective sensitivities indicated in Tables <ref> and <ref> imply that |d_t| is smaller than ∼10^-19 e cm, except in the presence of finely tuned cancellations between O_tB and O_tW, which allow for a top EDM up to ∼50 times larger. Next-generation searches for the EDMs of the electron and neutron could yield up to a factor of ten increase in sensitivity, while a storage ring search for the proton EDM with sensitivity |d_p| ∼ 10^-29 e cm could lead to an additional sensitivity increase. To the best of our knowledge, the d_t reach of these experiments will exceed that of direct probes at the LHC.

Given these prospective sensitivities, it is important to bear in mind the opportunities for refined theoretical computations. In this work we have retained only the leading log-squared contribution to the RGE of O_tB and O_tW from Λ to v. The impact of subleading logarithmic contributions will be analyzed in a forthcoming publication <cit.>. From the low-energy perspective, there exists room for refinements of the d_n computations. While the uncertainties associated with the up- and down-quark EDMs enter at the 10% level <cit.>, those associated with the strange quark (not included in our study here) are considerably larger <cit.>. In addition, BSM scenarios that induce O_tB and O_tW may also give rise to the corresponding CPV gluonic operators (CEDMs), a topic for which the phenomenology is considerably richer and the theoretical hadronic and nuclear uncertainties correspondingly more challenging. In that context, the interplay with LHC and future collider probes may be particularly enlightening.

We are grateful to Jordy de Vries and Adrian Signer for useful discussions and comments. We also thank Patrick Draper and Hiren Patel for fruitful discussions. KF thanks Natsumi Nagata and Eibun Senaha for valuable discussions. MJRM thanks Haipeng An, Mark B. Wise, and Yue Zhang for several helpful conversations. This work was supported in part under U.S. Department of Energy contract DE-SC0011095.

§ REFERENCES

[ewbg] V. A. Kuzmin, V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett. B 155 (1985) 36. For reviews on electroweak baryogenesis, see A. G. Cohen, D. B. Kaplan and A. E. Nelson, Ann. Rev. Nucl. Part. Sci. 43 (1993) 27; M. Quiros, Helv. Phys. Acta 67 (1994) 451; V. A. Rubakov and M. E. Shaposhnikov, Usp. Fiz. Nauk 166 (1996) 493; K. Funakubo, Prog. Theor. Phys. 96 (1996) 475; M. Trodden, Rev. Mod. Phys. 71 (1999) 1463; W. Bernreuther, Lect. Notes Phys. 591 (2002) 237; J. M. Cline, arXiv:hep-ph/0609145; D. E. Morrissey and M. J. Ramsey-Musolf, New J. Phys. 14, 125003 (2012); T. Konstandin, Phys. Usp. 56 (2013) 747 [Usp. Fiz. Nauk 183 (2013) 785].
[Tulin:2011wi] S. Tulin and P. Winslow, Phys. Rev. D 84, 034013 (2011).
[Cline:2011mm] J. M. Cline, K. Kainulainen and M. Trott, JHEP 1111, 089 (2011).
[Jiang:2015cwa] M. Jiang, L. Bian, W. Huang and J. Shu, Phys. Rev. D 93, no. 6, 065032 (2016).
[Huang:2015izx] F. P. Huang, P. H. Gu, P. F. Yin, Z. H. Yu and X. Zhang, Phys. Rev. D 93, no. 10, 103515 (2016).
[Kobakhidze:2015xlz] A. Kobakhidze, L. Wu and J. Yue, JHEP 1604, 011 (2016).
[Fuyuto:2017ewj] K. Fuyuto, W. S. Hou and E. Senaha, arXiv:1705.05034 [hep-ph].
[Gupta:2009wu] S. K. Gupta, A. S. Mete and G. Valencia, Phys. Rev. D 80, 034013 (2009).
[Hayreter:2015ryk] A. Hayreter and G. Valencia, Phys. Rev. D 93, no. 1, 014020 (2016).
[Bernreuther:2015yna] W. Bernreuther, D. Heisler and Z. G. Si, JHEP 1512, 026 (2015).
[Rindani:2015vya] S. D. Rindani, P. Sharma and A. W. Thomas, JHEP 1510, 180 (2015).
[Hayreter:2014hha] A. Hayreter and G. Valencia, Nucl. Part. Phys. Proc. 273-275, 775 (2016).
[Hioki:2013hva] Z. Hioki and K. Ohkuma, Phys. Rev. D 88, 017503 (2013).
[Bernreuther:2013aga] W. Bernreuther and Z. G. Si, Phys. Lett. B 725, 115 (2013); Erratum: Phys. Lett. B 744, 413 (2015).
[Hayreter:2013kba] A. Hayreter and G. Valencia, Phys. Rev. D 88, 034033 (2013).
[Baumgart:2012ay] M. Baumgart and B. Tweedie, JHEP 1303, 117 (2013).
[Biswal:2012dr] S. S. Biswal, S. D. Rindani and P. Sharma, Phys. Rev. D 88, 074018 (2013).
[Hioki:2012vn] Z. Hioki and K. Ohkuma, Phys. Lett. B 716, 310 (2012).
[Choudhury:2012np] D. Choudhury and P. Saha, JHEP 1208, 144 (2012).
[Aguilar-Saavedra:2014iga] J. A. Aguilar-Saavedra, B. Fuks and M. L. Mangano, Phys. Rev. D 91, 094021 (2015).
[Chien:2015xha] Y. T. Chien, V. Cirigliano, W. Dekens, J. de Vries and E. Mereghetti, JHEP 1602, 011 (2016).
[Baur:2004uw] U. Baur, A. Juste, L. H. Orr and D. Rainwater, Phys. Rev. D 71, 054013 (2005).
[Baur:2006ck] U. Baur, A. Juste, L. H. Orr and D. Rainwater, Nucl. Phys. Proc. Suppl. 160, 17 (2006).
[Bouzas:2012av] A. O. Bouzas and F. Larios, Phys. Rev. D 87, no. 7, 074015 (2013).
[Fael:2013ira] M. Fael and T. Gehrmann, Phys. Rev. D 88, 033003 (2013).
[Fayazbakhsh:2015xba] S. Fayazbakhsh, S. T. Monfared and M. Mohammadi Najafabadi, Phys. Rev. D 92, no. 1, 014006 (2015).
[Etesami:2016rwu] S. M. Etesami, S. Khatibi and M. Mohammadi Najafabadi, Eur. Phys. J. C 76, no. 10, 533 (2016).
[CorderoCid:2007uc] A. Cordero-Cid, J. M. Hernandez, G. Tavares-Velasco and J. J. Toscano, J. Phys. G 35, 025004 (2008).
[Kamenik:2011dk] J. F. Kamenik, M. Papucci and A. Weiler, Phys. Rev. D 85, 071501 (2012); Erratum: Phys. Rev. D 88, no. 3, 039903 (2013).
[Cirigliano:2016njn] V. Cirigliano, W. Dekens, J. de Vries and E. Mereghetti, Phys. Rev. D 94, no. 1, 016002 (2016).
[Cirigliano:2016nyn] V. Cirigliano, W. Dekens, J. de Vries and E. Mereghetti, Phys. Rev. D 94, no. 3, 034031 (2016).
[Baron:2013eja] J. Baron et al. [ACME Collaboration], Science 343, 269 (2014).
[Cairncross:2017fip] W. B. Cairncross et al., arXiv:1704.07928 [physics.atom-ph].
[Buchmuller:1985jz] W. Buchmuller and D. Wyler, Nucl. Phys. B 268, 621 (1986).
[Grzadkowski:2010es] B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek, JHEP 1010, 085 (2010).
[Engel:2013lsa] J. Engel, M. J. Ramsey-Musolf and U. van Kolck, Prog. Part. Nucl. Phys. 71, 21 (2013).
[Afach:2015sja] J. M. Pendlebury et al., Phys. Rev. D 92, no. 9, 092003 (2015).
[NSAC2015] 2015 Nuclear Science Advisory Committee Long Range Plan, "Reaching for the Horizon", https://science.energy.gov/ /media/np/nsac/pdf/2015LRP/2015_LRPNS_091815.pdf
[Kumar:2013qya] K. Kumar, Z. T. Lu and M. J. Ramsey-Musolf, arXiv:1312.5416 [hep-ph].
[Graner:2016ses] B. Graner, Y. Chen, E. G. Lindahl and B. R. Heckel, Phys. Rev. Lett. 116, no. 16, 161601 (2016).
[KF_MJRM] K. Fuyuto and M. J. Ramsey-Musolf, in preparation.
[Knecht:2001qg] M. Knecht, A. Nyffeler, M. Perrottet and E. de Rafael, Phys. Rev. Lett. 88, 071802 (2002).
[RamseyMusolf:2002cy] M. J. Ramsey-Musolf and M. B. Wise, Phys. Rev. Lett. 89, 041601 (2002).
[Hisano:2012cc] J. Hisano, K. Tsumura and M. J. S. Yang, Phys. Lett. B 713, 473 (2012).
[RamseyMusolf:1999nk] M. J. Ramsey-Musolf, Phys. Rev. Lett. 83, 3997 (1999); Erratum: Phys. Rev. Lett. 84, 5681 (2000).
[Shifman:1976de] M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Phys. Rev. D 18, 2583 (1978); Erratum: Phys. Rev. D 19, 2815 (1979).
[Ciuchini:1993fk] M. Ciuchini, E. Franco, L. Reina and L. Silvestrini, Nucl. Phys. B 421, 41 (1994).
[Degrassi:2005zd] G. Degrassi, E. Franco, S. Marchetti and L. Silvestrini, JHEP 0511, 044 (2005).
[Dekens:2013zca] W. Dekens and J. de Vries, JHEP 1305, 149 (2013).
[Fuyuto:2013gla] K. Fuyuto, J. Hisano, N. Nagata and K. Tsumura, JHEP 1312, 010 (2013).
[Bhattacharya:2015esa] T. Bhattacharya, V. Cirigliano, R. Gupta, H. W. Lin and B. Yoon, Phys. Rev. Lett. 115, no. 21, 212002 (2015).
[Bhattacharya:2015wna] T. Bhattacharya et al. [PNDME Collaboration], Phys. Rev. D 92, no. 9, 094511 (2015).
[Ibrahim:1997gj] T. Ibrahim and P. Nath, Phys. Rev. D 57, 478 (1998); Errata: Phys. Rev. D 58, 019901 (1998); Phys. Rev. D 60, 079903 (1999); Phys. Rev. D 60, 119901 (1999).
[Ibrahim:1998je] T. Ibrahim and P. Nath, Phys. Rev. D 58, 111301 (1998); Erratum: Phys. Rev. D 60, 099902 (1999).
[Falk:1998pu] T. Falk and K. A. Olive, Phys. Lett. B 439, 71 (1998).
[Brhlik:1998zn] M. Brhlik, G. J. Good and G. L. Kane, Phys. Rev. D 59, 115004 (1999).
Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany
Physik-Department, Technische Universität München, 85748 Garching, Germany
Scottish Universities Physics Alliance, School of Physics and Astronomy, University of St. Andrews, St. Andrews, Fife KY16 9SS, UK
Correspondence should be addressed to [email protected] and [email protected]

71.27.+a, 71.18.+y, 71.15.Mb

We report on a combined study of the de Haas-van Alphen effect and angle-resolved photoemission spectroscopy on single crystals of the metallic delafossite PdRhO_2, rounded off by ab initio band structure calculations. A high-sensitivity torque magnetometry setup with SQUID readout and synchrotron-based photoemission with a light spot size of 50 μm enabled high-resolution data to be obtained from samples as small as 150×100×20 (μm)^3. The Fermi surface shape is nearly cylindrical, with a rounded hexagonal cross section enclosing a Luttinger volume of 1.00(1) electrons per formula unit.

Quasi two-dimensional Fermi surface topography of the delafossite PdRhO_2
E. Hassinger

In recent years delafossite layered metallic oxides <cit.> have attracted considerable attention because of their extremely high electrical conductivity and the simplicity of their electronic structure <cit.>. The delafossite structure of general formula ABO_2 features alternating triangularly co-ordinated A metal layers separated by BO_2 layers, in which B is a transition metal in a trigonally distorted octahedral co-ordination with oxygen <cit.>. The layer stacking sequence results in there being three formula units per hexagonal unit cell, with the space group R3m. Many delafossites are semiconducting or insulating, but those with A-site metals Pd or Pt are highly anisotropic metals in which the conductivity in the layers is hundreds of times larger than that perpendicular to them. Even at room temperature, the in-plane resistivities of non-magnetic PtCoO_2 and PdCoO_2 are just over 2 μΩcm <cit.>, lower than that of any elemental metal except Ag and Cu. Taking into account the factor of three lower carrier density in the delafossites, they have a room-temperature mean free path at least a factor of two longer than even that of pure Ag.
The resistivity falls rapidly with temperature, and resistive mean free paths of over 20 μm have been observed in PdCoO_2 <cit.>. The Fermi surface of the known delafossite metals is extremely simple. In non-magnetic PdCoO_2 and PtCoO_2, it is a single, weakly corrugated cylinder with nearly hexagonal cross-section <cit.>. In PdCrO_2, a similar cylinder is observed above 40 K, but at low temperatures very small gapping is detected, due to coupling between the spin ordering in the CrO_2 layers and the states in the broad conduction band whose dominant character is Pd 4d/5s-like <cit.>. Electron counting in PdCrO_2 highlights the role of correlations in the transition metal layer of the delafossites: the CrO_2 layer is Mott insulating <cit.>.

The knowledge to date of the delafossite metals therefore points to an interesting and very unusual situation, in which there is a close interplay between an extremely broad conduction band, with a Fermi velocity of order 8×10^5 ms^-1 (close to the free-electron value), and 3d transition metal states for which correlations are known to be strong. The situation is made even richer by the fact that the weakly- and strongly-correlated states arise from different layers in the crystal structure. Delafossites are a naturally occurring example of the kind of heterostructures that many groups world-wide are trying to synthesize artificially, and a natural structural class on which to base future layer-by-layer synthesis.

The unique combination of properties highlighted above has already led to the observation of fascinating physics, notably huge c-axis magnetoresistance oscillations <cit.>, the unconventional Hall effect <cit.>, and hydrodynamic electron flow <cit.>, and it seems likely that new regimes of mesoscopic transport will be attainable via focused ion beam microstructuring of single crystals. All of these phenomena are expected to be strongly sensitive to the details of the Fermi surface shape, i.e. the curvature of the in-plane hexagon, as well as the out-of-plane warping. To unlock the full potential of the delafossite oxides and to yield new physics, it is crucial to have access to slightly different Fermi surface topographies and different levels of correlation in the ABO_2 layers, while preserving the overall simplicity of the electronic structure. There is a pressing need, therefore, to have as many such metals available for precision study as possible.

So far, the only monovalent delafossite metals for which single crystals exist are PdCoO_2, PdCrO_2 and PtCoO_2, in which the B-site cations are 3d transition metals <cit.>. Based on preliminary studies on powders and polycrystalline thin films, as well as electronic structure calculations <cit.>, PdRhO_2 is thought to be metallic and also to have a single conduction band. Hence this material offers the opportunity to study the effect of varying Pd-Pd overlap integrals, as well as the effects of changing on-site correlation and spin-orbit coupling strengths, by moving to a 4d B-site transition metal.

Recently, we have succeeded in crystallizing PdRhO_2 <cit.>. Here, we report a comprehensive study of de Haas-van Alphen (dHvA) measurements on this new material, and combine the dHvA data with information from angle-resolved photoemission spectroscopy (ARPES) to determine the Fermi surface with high precision. We also highlight the potential of PdRhO_2 to test and refine the accuracy of modern many-body electronic structure calculations.
Crystal growth and characterization of single crystals of PdRhO_2 are described in <cit.>. De Haas-van Alphen oscillations of two PdRhO_2 crystals from the same growth batch were observed at temperatures between 100 mK and 4 K in magnetic fields up to 15 T. The respective sample sizes were approximately 200×300×50 (μm)^3 and 150×100×20 (μm)^3. Experiments were performed using an ultra-low-noise SQUID torque magnetometer, installed on an MX400 Oxford Instruments dilution refrigerator with a 15/17 T superconducting magnet and a 270^∘ Swedish rotator with an angular accuracy of Δθ = ±0.2^∘. The magnetometer utilizes piezoresistive PRC400 micro-cantilevers and a two-stage dc-SQUID as a highly sensitive read-out, offering an unprecedented torque resolution of Δτ = 2×10^-13 Nm at the lowest temperatures <cit.>. Data were taken at constant temperatures whilst the magnetic field was swept from 15 to 7.5 T at a rate of 30 mT/min. ARPES was performed using the I05 beamline of Diamond Light Source, UK. Samples were cleaved in-situ at the measurement temperature of 13 K, and probed using linear horizontal polarisation light with a photon energy of 110 eV and a spot size of ≈50 μm. As well as the bulk Fermi surface extracted here, surface states indicative of a RhO_2 termination were also observed in the experiment <cit.>.

Relativistic density functional (DFT) electronic structure calculations including spin-orbit coupling were performed using the full-potential FPLO code <cit.>, version fplo14.00-47, within the generalized gradient approximation (GGA). Coulomb repulsion in the Rh 4d shell was simulated in a mean-field way by applying the GGA+U approximation in the atomic-limit flavor <cit.>.

The calculated and ARPES-measured Fermi surfaces of PdRhO_2 are compared in Fig. <ref>. The ARPES measurements yield a Luttinger count of 0.94(4) electrons per formula unit. Similar to ARPES measurements of other metallic delafossites <cit.>, this is slightly smaller than the half-filled band expected from electron counting, which is likely due to a small shift of the Fermi level arising from a polar surface charge. Nonetheless, apart from some small distortions related to details of the experiment <cit.>, the measured Fermi surface is in good agreement with the projection of that calculated from density-functional theory onto the two-dimensional Brillouin zone, if the two are scaled to the same total area. The calculations indicate a highly two-dimensional Fermi surface, entirely consistent with the sharp spectral line widths observed in ARPES, which rule out significant k_z dispersion. These results therefore show that the interplane dispersion in PdRhO_2 is extremely small; the de Haas-van Alphen effect is one of the few experimental probes capable of resolving the resulting k_z-dependent features in the Fermi surface <cit.>.

In Figure <ref> we show background-subtracted magnetic torque data for a selection of magnetic field angles θ with respect to the crystallographic c-axis within the ZΓL plane. Strong quantum oscillations are visible for all magnetic field angles (see Fig. <ref>a and b). The 1/cos(θ) angular dependence of the two quantum oscillation frequencies (dashed line in Fig. <ref>c) evidences the quasi-two-dimensional Fermi surface topography, while the beating of the envelope function is the first indication of out-of-plane dispersion. The lower and higher frequencies, labeled F_β and F_α, correspond to minimal and maximal extremal orbits, respectively. These are also evident in the Fourier transforms of Fig.
<ref>c), which were taken over a magnetic field interval from 7.5 to 15 T. For better accuracy, the frequency splitting close to the Yamaji angles was derived from the beating envelopes. For B ∥ c, the mean quantum oscillation frequency F_0 = (F_α(0)+F_β(0))/2 = 26.25 kT is equivalent to a Fermi surface cross section of A = 2.505 Å^-2. Considering the room-temperature lattice constants <cit.>, this corresponds to 50.3% filling of the first Brillouin zone (4.9814 Å^-2) and a Luttinger count of 1.006(10), where the error estimate is dominated by the likely effects of thermal contraction. Thus, in agreement with ab initio band structure calculations, the electronic structure of PdRhO_2 is described by a single half-filled band with 1.00 charge carriers per formula unit.

The effective cyclotron mass, Dingle temperature and mean free path were determined for fields close to the c-axis. Details of the analysis are given in <cit.>. The key results are the masses m_α = 1.43(5) m_0 and m_β = 1.63(5) m_0, and a mean free path of 225(30) nm.

In order to analyze the Fermi surface topography further, we now turn to the angular dependence of the observed frequency splitting. The quantum oscillation frequencies for magnetic fields within the crystallographic ZΓK and ZΓL planes, corrected by cos(θ), are shown in Fig. <ref>. Only the frequency splitting around the mean frequency F is shown, as the angular inaccuracy of our rotator leads to sizable frequency offsets, especially at larger angles. For the raw data and a detailed analysis of the angular uncertainty see <cit.>. The Fermi surface warping, i.e. the azimuthal and height dependence of k_F, can be parametrised in cylindrical harmonics:

k_F = ∑_{μ,ν≥0} k_{μ,ν} cos(νκ) cos(μϕ),

where κ = c^* k_z is the reduced z-coordinate and ϕ the azimuthal angle <cit.>. Note that c^* = 6.034 Å is the interlayer spacing, which is a third of the c-axis lattice constant. Due to the hexagonal lattice symmetry and the R3m (D^5_3d) space group, the k_{μ,ν} are limited to (μ,ν) ∈ {(0,0); (0,1); (0,2); (0,3); (3,1); (6,0); (12,0)} and higher order terms. By fitting to the frequencies shown in Fig. <ref>, as described in detail in <cit.>, we are able to determine k_{0,0} and all relevant k_{μ,ν} with ν ≥ 1. The in-plane parameters k_{6,0} and k_{12,0} were obtained from the Fermi surface shape of Fig. <ref> <cit.>. The respective parameters and the Fermi surface topology are summarized in Tab. <ref>.

Knowledge of the warping parameters of PdRhO_2 and a comparison with those previously deduced for its sister compound PdCoO_2 <cit.> yields considerable insight into interplane hopping and coherence in the metallic delafossites. In both materials the dominant interplane terms are k_{0,1}, qualitatively corresponding to direct Pd-Pd hopping along the c-axis, and k_{3,1}, which results from hopping via the Co or Rh layers. In going from Co to Rh, several effects are expected to compete. Rh is larger, with more extended 4d orbitals, so its presence increases the in-plane a and interplane c lattice parameters, by approximately 7% and 2%, respectively. This lattice expansion would be expected to lead to less effective c-axis Pd-Pd hopping, consistent with the observation that k_{0,1} is a factor of 2.7 smaller in PdRhO_2 than in PdCoO_2. For hopping via the Co/Rh layer the situation is more subtle. If correlations in that layer are ignored, an LDA calculation predicts a much larger k_{3,1} term in PdCoO_2 than is actually observed. However, if some account is taken of that correlation by assuming a realistic on-site repulsion energy U of several eV <cit.>, the hybridization with the conduction band is strongly suppressed, reducing the calculated value to close to the experimental one of k_{3,1} = 0.001. Qualitatively, the lattice parameter expansion caused by moving from Co to Rh, which naively would be expected to reduce k_{3,1}, is more than offset by the reduction in U for the 4d states of Rh and an increase in Pd-Rh overlap. The result is a slightly larger value of k_{3,1} = 0.006. Overall, the Fermi surface of PdRhO_2 is extremely anisotropic, and the most two-dimensional of any metallic delafossite.

Under the assumption of a single scattering time τ, the k_{μ,ν} harmonics can be used to estimate the resistive anisotropy. For a single-band metallic delafossite with an assumed circular Fermi surface (the hexagonal cross section of Fig. <ref> only alters this estimate by a few per cent), the relevant expression is

ρ_ab/ρ_c = (d²/2) ∑_{μ≥0,ν>0} ν² k_{μ,ν}² (1+δ_{μ0}),

where d is the interlayer spacing and δ the Kronecker delta function. Since k_{0,1} contributes more strongly to this sum than k_{3,1}, PdRhO_2 is predicted to have a larger anisotropy (≈1300) than PdCoO_2. Preliminary transport data <cit.> are consistent with this prediction, though a more careful transport study with a range of sample sizes is desirable. The larger size of Rh also affects the in-plane Pd-Pd overlaps and reduces k_{6,0}, k_{12,0} and the Fermi velocity v_F. Using k_{0,0} and the measured masses leads to a Brillouin-zone-averaged Fermi velocity v_F = ħ k_{0,0}/m^* = 6.8×10^5 ms^-1. This is smaller than that of PdCoO_2 by approximately 10%, consistent with the a lattice parameter being 7% larger in PdRhO_2.

Although it is possible to qualitatively account for the trends of the warping harmonics and Fermi velocity on going from PdCoO_2 to PdRhO_2, the resolution of the data that we have presented provides a considerable opportunity to refine the quality of electronic structure calculations. Despite the lower correlation energies for 4d Rh and Pd than for 3d transition metals, correlation still plays an important role in determining the details of the observed Fermi surface, and in tuning the degree of interlayer hopping. Knowing the experimental warpings at 0.1% resolution presents a considerable challenge to "ab initio plus correlation" theoretical approaches. It will be intriguing to see if any are capable of accounting for the values that we report for k_{6,0}, k_{12,0}, k_{0,1}, k_{0,2}, k_{3,1} and v_F. Although this seems a difficult task, PdRhO_2 will be an ideal material on which to benchmark the progress of the field. Preliminary attempts to add a single U on the Rh site were not successful in matching all the parameters simultaneously; refinement at the level of individual Wannier functions is likely to be necessary.

A further property of note is the extremely high overall anisotropy of the measured Fermi surface. If sufficiently high anisotropies can be obtained in very clean materials like the metallic delafossites, it is possible that at high magnetic fields a limit could be reached in which all electrons are restricted to a single Landau level of very high index. Hence the physics of singly occupied Landau levels, long thought to be restricted to low-density electron gases, might be observable at full metallic electron densities (several of the scales entering this estimate are cross-checked in the numerical sketch below).
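As a numerical cross-check (ours, not part of the original analysis), the following sketch verifies the internal consistency of the numbers quoted above via the textbook Onsager relation F = (ħ/2πe)A: it recovers A = 2.505 Å^-2 and the 50.3% band filling from F_0 = 26.25 kT, evaluates v_F from k_{0,0} ≈ √(A/π) and the mean measured cyclotron mass, and estimates the field at which ħω_c would exceed the 40 meV c-axis bandwidth quoted in the following paragraph:

import math

hbar, e, m0 = 1.0546e-34, 1.6022e-19, 9.109e-31   # SI units

F0 = 26.25e3                          # mean dHvA frequency in T
A = 2 * math.pi * e * F0 / hbar       # Onsager relation: extremal area in m^-2
A_ang = A * 1e-20                     # convert to Angstrom^-2
filling = A_ang / 4.9814              # Brillouin-zone area taken from the text
print(f"A = {A_ang:.3f} A^-2, filling = {100*filling:.1f} %")

k00 = math.sqrt(A / math.pi)          # mean Fermi wavevector (cylinder approximation)
m_star = 0.5 * (1.43 + 1.63) * m0     # average of m_alpha and m_beta
vF = hbar * k00 / m_star
print(f"k00 = {k00*1e-10:.2f} A^-1, vF = {vF:.2e} m/s")

# field at which the cyclotron energy hbar*e*B/m* exceeds the 40 meV
# k_z bandwidth (the single-Landau-level limit discussed in the text)
W = 40e-3 * e
B_limit = W * m_star / (hbar * e)
print(f"B(single Landau level) ~ {B_limit:.0f} T")

Running this returns A = 2.506 Å^-2, 50.3% filling, v_F ≈ 6.8×10^5 m/s and B ≈ 530 T, in agreement with the values quoted in the text.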
Although the total bandwidth along k_z in as-grown PdRhO_2 is very small, it is still 40 meV, implying that a field of nearly 500 T would be required to reach this limit. However, this observation provides motivation to try to produce a still more anisotropic material, perhaps using uniaxial pressure in PdRhO_2 or by growing crystals of the next compound in the series, PdIrO_2. This latter material is also of considerable interest as a candidate triangular lattice superconductor.In summary, we have successfully established the Fermi surface topography of the metallic delafossite PdRhO_2, using a combination of angle-resolved photoemission spectroscopy and high resolution torque magnetometry studies of the de Haas-van Alphen effect. Our results establish it as a benchmark material for the study of high purity quasi-two dimensional metals, and for the development of high precision electronic structure calculations.§ ACKNOWLEDGMENTS The authors would like to thank the Diamond Light Source for access to Beamline I05 via Proposal No. SI14927 as well as L. Bawden, T.K. Kim, and M. Hoesch for their technical support. In addition we would like to acknowledge the financial support from the European Research Council (through the QUESTDO project), the Engineering and Physical Sciences Research Council UK (Grant No. EP/I031014/1 and EP/L015110/1), the Royal Society and the Max-Planck Society.31 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Shannon et al.(1971)Shannon, Rogers, and Prewitt]Shannon71 author author R. D. Shannon, author D. B. Rogers,and author C. T. Prewitt,10.1021/ic50098a011 journal journal Inorg. Chem. volume 10, pages 713 (year 1971)NoStop [Mackenzie(2017)]Mackenzie17 author author A. P. Mackenzie, 10.1088/1361-6633/aa50e5 journal journal Rep. Prog. Phys. volume 80, pages 032501 (year 2017)NoStop [Prewitt et al.(1971)Prewitt, Shannon, and Rogers]Prewitt71 author author C. T. Prewitt, author R. D. Shannon,and author D. B. Rogers, 10.1021/ic50098a012 journal journal Inorg. Chem. volume 10,pages 719 (year 1971)NoStop [Hicks et al.(2012)Hicks, Gibbs, Mackenzie, Takatsu, Maeno, and Yelland]Hicks12 author author C. W. Hicks, author A. S. Gibbs, author A. P. Mackenzie, author H. Takatsu, author Y. Maeno,and author E. A. Yelland, 10.1103/PhysRevLett.109.116401 journal journal Phys. Rev. Lett. volume 109, pages 116401 (year 2012)NoStop [Kushwaha et al.(2015)Kushwaha, Sunko, Moll, Bawden, Riley, Nandi, Rosner, Schmidt, Arnold, Hassinger, Kim, Hoesch, Mackenzie, and King]Kushwaha15 author author P. Kushwaha, author V. Sunko, author P. J. W. Moll, author L. Bawden, author J. M. Riley, author N. Nandi, author H. Rosner, author M. P.Schmidt, author F. Arnold, author E. Hassinger, author T. K. Kim, author M. Hoesch, author A. P. Mackenzie,and author P. D. C. King, 10.1126/sciadv.1500692 journal journal Sci. Adv. volume 1, pages e1500692 (year 2015)NoStop [Eyert et al.(2008)Eyert, Frésard, and Maignan]Eyert08 author author V. Eyert, author R. Frésard, and author A. Maignan, 10.1021/cm703404e journal journal Chem. Mater. volume 20, pages 2370 (year 2008)NoStop [Kim et al.(2009)Kim, Choi, and Min]Kim09 author author K. Kim, author H. C. Choi, and author B. I. Min, 10.1103/PhysRevB.80.035116 journal journal Phys. Rev. 
B volume 80, pages 035116 (year 2009)NoStop [Ong et al.(2010)Ong, Singh, and Wu]Ong10 author author K. P. Ong, author D. J. Singh, and author P. Wu, 10.1103/PhysRevLett.104.176601 journal journal Phys. Rev. Lett. volume 104, pages 176601 (year 2010)NoStop [Noh et al.(2009)Noh, Jeong, Jeong, Cho, Kim, Kim, Min, andKim]Noh09 author author H.-J. Noh, author J. Jeong, author J. Jeong, author E.-J. Cho, author S. B. Kim, author K. Kim, author B. I.Min,and author H.-D.Kim, 10.1103/PhysRevLett.102.256404 journal journal Phys. Rev. Lett. volume 102, pages 256404 (year 2009)NoStop [Sobota et al.(2013)Sobota, Kim, Takatsu, Hashimoto, Mo, Hussain, Oguchi, Shishidou, Maeno, Min, andShen]Sobota13 author author J. A. Sobota, author K. Kim, author H. Takatsu, author M. Hashimoto, author S.-K. Mo, author Z. Hussain, author T. Oguchi, author T. Shishidou, author Y. Maeno, author B. I.Min,and author Z.-X.Shen, 10.1103/PhysRevB.88.125109 journal journal Phys. Rev. B volume 88, pages 125109 (year 2013)NoStop [Ok et al.(2013)Ok, Jo, Kim, Shishidou, Choi, Noh, Oguchi, Min, and Kim]Ok13 author author J. M. Ok, author Y. J. Jo, author K. Kim, author T. Shishidou, author E. S. Choi, author H.-J. Noh, author T. Oguchi, author B. I.Min,and author J. S.Kim, 10.1103/PhysRevLett.111.176405 journal journal Phys. Rev. Lett. volume 111, pages 176405 (year 2013)NoStop [Noh et al.(2014)Noh, Jeong, Chang, Jeong, Moon, Cho, Ok, Kim, Kim, Min, Lee, Kim, Park, Kim, andLee]Noh14 author author H.-J. Noh, author J. Jeong, author B. Chang, author D. Jeong, author H. S. Moon, author E.-J.Cho, author J. M. Ok, author J. S. Kim, author K. Kim, author B. I. Min, author H.-K. Lee, author J.-Y.Kim, author B.-G. Park, author H.-D. Kim, and author S. Lee, 10.1038/srep03680 journal journal Sci. Rep. volume 4, pages 3680 (year 2014)NoStop [Hicks et al.(2015)Hicks, Gibbs, Zhao, P.Kushwaha, Borrmann, Mackenzie, Takatsu, Yonezawa, Maeno, and Yelland]Hicks15 author author C. W. Hicks, author A. S. Gibbs, author L. Zhao, author P.Kushwaha, author H. Borrmann, author A. P.Mackenzie, author H. Takatsu, author S. Yonezawa, author Y. Maeno,and author E. A. Yelland, 10.1103/PhysRevB.92.014425 journal journal Phys. Rev. B volume 92, pages 014425 (year 2015)NoStop [Takatsu et al.(2013)Takatsu, Ishikawa, Yonezawa, Yoshino, Shishidou, Oguchi, Murata, and Maeno]Takatsu13 author author H. Takatsu, author J. J. Ishikawa, author S. Yonezawa, author H. Yoshino, author T. Shishidou, author T. Oguchi, author K. Murata,and author Y. Maeno, 10.1103/PhysRevLett.111.056601 journal journal Phys. Rev. Lett. volume 111, pages 056601 (year 2013)NoStop [Kikugawa et al.(2016)Kikugawa, Goswami, Kiswandhi, Choi, Graf, Baumbach, Brooks, Sugii, Iida, Nishio, Uji, Terashima, Rourke, Hussey, Takatsu, Yonezawa, Maeno, and Balicas]Kikugawa16 author author N. Kikugawa, author P. Goswami, author A. Kiswandhi, author E. S. Choi, author D. Graf, author R. E. Baumbach, author J. S.Brooks, author K. Sugii, author Y. Iida, author M. Nishio, author S. Uji, author T. Terashima, author P. M. C. Rourke, author N. E. Hussey, author H. Takatsu, author S. Yonezawa, author Y. Maeno,and author L. Balicas, 10.1038/ncomms10903 journal journal Nature Commun. volume 7, pages 10903 (year 2016)NoStop [Takatsu et al.(2010)Takatsu, Yonezawa, Fujimoto, andMaeno]Takatsu10 author author H. Takatsu, author S. Yonezawa, author S. Fujimoto,andauthor Y. Maeno, 10.1103/PhysRevLett.105.137201 journal journal Phys. Rev. Lett. 
[17] P. J. W. Moll, P. Kushwaha, N. Nandi, B. Schmidt, and A. P. Mackenzie, Science 351, 1061 (2016).
[18] AgNiO_2 is also metallic, but, as the result of charge order in the B-site Ni layer, is distinct from the monovalent metals discussed here. See A. I. Coldea, A. Carrington, R. Coldea, L. Malone, A. F. Bangura, M. D. Johannes, I. I. Mazin, E. A. Yelland, J. G. Analytis, J. A. A. J. Perenboom, C. Jaudet, D. Viognolles, T. Sorgel, and M. Jansen, arXiv:0908.4169 (2009).
[19] R. J. Baird, G. W. Graham, and W. H. Weber, Oxid. Met. 29, 435 (1988).
[20] P. F. Carcia, R. D. Shannon, P. E. Bierstedt, and R. B. Flippen, J. Electrochem. Soc. 127, 1974 (1980).
[21] K. Kim, J. Kim, and B. I. Min, J. Phys. Soc. Jpn. 83, 124708 (2014).
[22] P. Kushwaha, H. Borrmann, S. Khim, H. Rosner, P. J. W. Moll, D. A. Sokolov, V. Sunko, Y. Grin, and A. P. Mackenzie, arXiv:1706.07614.
[23] Supplemental Material at http:// ...
[24] F. Arnold, M. Naumann, T. Lühmann, A. P. Mackenzie, and E. Hassinger, arXiv:1706.08350.
[25] C. Rossel, P. Bauer, D. Zech, J. Hofer, M. Willemin, and H. Keller, J. Appl. Phys. 79, 8166 (1996).
[26] V. Sunko, H. Rosner, A. P. Mackenzie, P. D. C. King, et al., to be published.
[27] K. Koepernik and H. Eschrig, Phys. Rev. B 59, 1743 (1999).
[28] I. Opahle, K. Koepernik, and H. Eschrig, Phys. Rev. B 60, 14035 (1999).
[29] www.fplo.de
[30] C. Bergemann, A. P. Mackenzie, S. R. Julian, D. Foresythe, and E. Ohmichi, Adv. Phys. 52, 639 (2003).
[31] C. Bergemann, S. R. Julian, A. P. Mackenzie, S. NishiZaki, and Y. Maeno, Phys. Rev. Lett. 84, 2662 (2000).
§ SUPPLEMENTARY ONLINE MATERIAL - QUASI TWO-DIMENSIONAL FERMI SURFACE TOPOGRAPHY OF THE DELAFOSSITE PDRHO_2

§.§ 1. Sample Preparation and Characterization

§.§.§ 1.1 Samples Figure <ref> shows the two samples whose quantum oscillations were studied in this article. Both samples were mounted on PRC400 piezo-electric micro-cantilevers <cit.> with Apiezon N-grease. Using grease allows us to reorient the sample on the cantilever, whilst it forms a solid bond at low temperatures.

§.§.§ 1.2 Laue Diffraction Laue x-ray diffractograms (Fig. <ref>) of the large PdRhO_2 single crystal (200 × 300 × 50 μm) in the orientation shown in Fig. <ref>b) were taken to confirm its single crystallinity and orientation. For this diffraction measurement the sample was still mounted on the micro-cantilever and silver sample holder used in the rotation study of the dHvA oscillations. A collimator of 0.5 mm diameter was used, probing the entire sample at once. The diffractogram (Fig. <ref>) shows only higher-order Bragg peaks of PdRhO_2. The [100] and [110] reflections (note that these are the hexagonal representation of the rhombohedral unit cell) are masked by the silver sample holder to the left and right of the image. Nevertheless, the Bragg peaks confirm the single crystallinity and orientation of the sample. The rotational axes for the angular dependence of the dHvA frequencies are shown in the Laue pattern (Fig. <ref>) and sample photos (Fig. <ref>). Thus, in the orientation shown in Fig. <ref>b), the sample is rotated around the [110] direction, corresponding to magnetic fields within the ZΓL plane.

§.§ 2. Angle-Resolved Photoemission Spectroscopy (ARPES) As stated in the main article, the in-plane warping parameters k_6,0 and k_12,0 of the PdRhO_2 Fermi surface were extracted from a slightly distorted Fermi-surface contour. The measured ARPES Fermi surface displayed in Fig. 1 of the main text exhibits slight distortions from the expected 3-fold rotational symmetry within the surface Brillouin zone, which we attribute to the presence of slight surface inhomogeneity, small positioning errors from the center of rotation of the sample manipulator, or possibly small residual fields or local work-function variations introducing distortions on the outgoing electron trajectories. The data shown in Fig. 1 of the main text were used to establish the quoted k_6,0 and k_12,0, but numerically correcting the distortions makes only a tiny quantitative change to the extracted values. Fig. <ref> shows the Fermi surface as determined by ARPES and the corresponding azimuthal dependence of the Fermi wave vector. As can be seen, the in-plane warping parameters stay constant within the error bars when the distortion is numerically corrected.

§.§ 3. de Haas-van Alphen (dHvA)

§.§.§ 3.1 Effective Mass Analysis Cyclotron masses of both extremal orbits were determined from the temperature dependence of the quantum-oscillation amplitude at a magnetic-field angle of +5^∘ within the ZΓL plane. Figure <ref> shows the measured quantum-oscillation spectra for temperatures between 100 mK and 4.5 K. Due to the poor thermal conductance of the micro-cantilevers at millikelvin temperatures, sample temperatures below 200 mK were calculated according to <cit.> using the stabilized rotator temperature and an excitation current of 10 μA. As can be seen, the two dHvA frequencies are strongly suppressed with increasing temperature.
Both temperature dependences are well described by the Lifshitz-Kosevich temperature reduction term R_T = x/sinh(x), with x = π^2 m^* k_B T/(μ_B B) (see inset of Fig. <ref> of the main text). The corresponding cyclotron masses are m_β = (1.63±0.05) m_0 (26.06 kT) and m_α = (1.43±0.05) m_0 (26.53 kT).

§.§.§ 3.2 Dingle Analysis Information about the mean free path and charge-carrier scattering times can be drawn from the magnetic-field dependence of the quantum-oscillation amplitude. The magnetic-field dependence is described by the Dingle term R_D = exp{−π^2 m^* k_B T_D/(μ_B B)}, where T_D is the Dingle temperature, which is inversely proportional to the scattering time, τ = ħ/(2π k_B T_D). As strong beating occurs in PdRhO_2, a direct fit of the Dingle term to the quantum-oscillation envelope is subject to large errors. Thus, we pursue an alternative approach and determine the Dingle temperature from the line width of the quantum-oscillation spectrum (Fig. <ref>b). For this, the torque data are corrected by √(B) to account for the intrinsic magnetic-field dependence of the Lifshitz-Kosevich equation. The exponentially decaying envelope then transforms into a Lorentzian line shape, whose full width at half maximum is ΔF = T_D π m^* k_B/μ_B <cit.>. From the Dingle spectrum (Fig. <ref>), we extract Dingle temperatures of T^β_D = 3.6 K and T^α_D = 3.8 K, and scattering times of τ^β = 3.4×10^-13 s and τ^α = 3.2×10^-13 s, respectively. Taking into account the extremal Fermi-surface cross sections and cyclotron masses, we obtain Fermi velocities of v^β_F = 633 km/s and v^α_F = 725 km/s (v_F = ħ k_F/m^*) and an electron mean free path of l = τ × v_F ≈ (225±10) nm.
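As a quick numerical cross-check of the quantities quoted in Secs. 3.1 and 3.2, the following sketch (our own illustration in Python, not the analysis code used for the measurements) evaluates the Lifshitz-Kosevich temperature factor and the Dingle-derived scattering time, Fermi velocity, and mean free path from the quoted inputs:

```python
import numpy as np

# Numerical cross-check (ours) of the quantities quoted in Secs. 3.1-3.2.
hbar, kB, muB, e, m0 = 1.0546e-34, 1.3807e-23, 9.274e-24, 1.6022e-19, 9.109e-31

def lk_temperature_factor(T, B, m_star):
    """Lifshitz-Kosevich factor R_T = x/sinh(x), x = pi^2 m* kB T/(muB B)."""
    x = np.pi**2 * m_star * kB * T / (muB * B)
    return x / np.sinh(x)

def dingle_quantities(F, m_star, T_D):
    """Scattering time, Fermi velocity, and mean free path from T_D."""
    tau = hbar / (2.0 * np.pi * kB * T_D)   # tau = hbar / (2 pi kB T_D)
    k_F = np.sqrt(2.0 * e * F / hbar)       # Onsager: F = hbar k_F^2 / (2 e)
    v_F = hbar * k_F / m_star
    return tau, v_F, tau * v_F

tau, v_F, ell = dingle_quantities(F=26.06e3, m_star=1.63 * m0, T_D=3.6)
print(f"tau = {tau:.1e} s, v_F = {v_F / 1e3:.0f} km/s, l = {ell * 1e9:.0f} nm")
# -> tau ~ 3.4e-13 s, v_F ~ 633 km/s, l ~ 215 nm, consistent with the text.
```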
§.§.§ 3.3 Angular Dependence To determine the angular dependence of the quantum-oscillation frequencies, magnetic-torque measurements were performed between 7.5 T and 15 T, in angular steps of 2.5^∘, for magnetic fields applied in the ZΓK and ZΓL planes, respectively. After subtracting a B^2 background and filtering the magnetic-torque data, fast Fourier transforms (FFTs) were taken as a function of 1/B. The resulting quantum-oscillation spectra are shown in Fig. <ref>. Quantum-oscillation frequencies were determined from the peak positions of the FFTs and are given in Fig. <ref>. As shown in the main article, their main angular dependence, F(θ) = F_0/cos(θ), is given by the quasi-two-dimensional shape of the Fermi surface. Here, θ is the polar angle formed by the crystal c axis and the direction of the magnetic field. As we are interested in variations of the almost perfectly cylindrical shape, we correct for this angular dependence when determining the warping parameters.

§.§.§ 3.4 Angular uncertainty The mounting procedure for the sample, micro-cantilever, and sample holder to the rotator induces an angular uncertainty of up to ±2.5^∘ of the sample with respect to the magnetic field. Parallel alignment to the c-axis, i.e., θ_0, was determined from the most symmetric 1/cos(θ) angular dependence. Fig. <ref> depicts how a small variation of θ_0 moves the quantum-oscillation frequencies away from the cylindrical behavior, especially at large angles. In addition to that systematic angular shift, we discuss the angular uncertainty of our Swedish rotator as a possible source of the scatter of the mean quantum-oscillation frequencies. Figures <ref>a) and b) show the raw quantum-oscillation frequencies corrected by cos(θ). As can be seen, the means of the individual frequency pairs scatter around a common value of approximately 26.25 kT. In order to quantify possible origins of this scatter, we calculate the actual angle of each frequency pair from its mean frequency and the theoretical F(0)/cos(θ) angular dependence, assuming F(0) = 26.25 kT. A histogram of the discrepancy between the nominal and actual angles is shown in Figure <ref>. We find that the angular discrepancy follows a normal distribution with a standard deviation of approximately 0.2^∘, which is in agreement with the technical specifications of our Swedish rotator. Scatter originating from an angular uncertainty of the order of 0.2^∘ results in an error of the cos(θ) scaling factor of approximately 0.2% at θ = 30^∘ and up to 0.6% at θ = 60^∘. This effect can be seen in Fig. <ref>, where the scatter of the mean frequency is most severe at large angles. The quoted error of up to 0.6% induces a shift of the mean frequency of ≤150 T. The induced change of the frequency splitting, however, is only ≤3 T, which is far beneath our experimental frequency resolution. Thus the change of the frequency splitting is negligible, and in Fig. 3 of the main article the frequency splitting ΔF × cos(θ) is given as measured, whereas the mean frequency is corrected to the expected frequency F_0 at this angle. This introduces an artificial symmetrization of the frequency splitting, by which some information about the angular dependence of the average frequency, and therefore the component k_0,2 of the harmonic expansion, is lost (see below). At larger angles, only the strongest quantum-oscillation frequency is visible due to a poorer signal-to-noise ratio. Hence there is no information about the frequency splitting at these angles in Fig. 3 of the main article.

§.§.§ 3.5 Cylindrical harmonics expansion The angular dependence of the quantum-oscillation frequencies due to the warping described in Eq. 1 of the main text can be described by a Bessel-function expansion of the extremal cross section A: A(κ) = π k_0,0^2/cos(θ) + (2π k_0,0/cos(θ)) ∑_μ,ν k_μ,ν J_μ(ν κ_F tan θ) cos(νκ) cos(μϕ), where μ and ν are the azimuthal and axial indices of the cylindrical harmonics, respectively. The corresponding azimuthal and polar angles are ϕ and θ. κ_F = c k_0,0/3 is the reduced planar Fermi wave vector and J_μ are the Bessel functions <cit.>. These cross sections are used to calculate the oscillatory part of the magnetization, i.e., the de Haas-van Alphen effect: M̃ = ∫_0^2π sin(ħ A(κ)/(eB)) dκ. Fourier transforming the oscillatory magnetization in 1/B leads to a theoretical angular dependence of the quantum-oscillation frequencies depending on the k_μ,ν. Here k_0,0 is uniquely determined by the mean quantum-oscillation frequency, k_0,0 = √(2eF_0/ħ), for B ∥ c. By tuning the harmonic parameters k_0,1, k_0,2, and k_3,1, we achieved a good fit to the experimental angular dependence (dashed lines in Fig. 3 of the main text). Here, k_0,1 is mostly responsible for the frequency splitting around the c-axis, whereas k_3,1 determines the asymmetry between positive and negative field angles within the ZΓL plane (Fig. 3b of the main text). The parameter k_0,2 results in an asymmetry of the angular dependence of the upper versus the lower frequency branch. By a comparison of the raw data in Fig. <ref> with the simulation, we estimate that k_0,2 < 0.0002.
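To illustrate the role of the dominant k_0,1 warping term in the expansion above, the sketch below evaluates the two theoretical frequency branches versus polar angle for the μ = 0, ν = 1 term alone (the μ = 3 term responsible for the in-plane asymmetry is dropped). The numerical values of F_0, the fractional warping k_0,1/k_0,0, and the interlayer period c are illustrative placeholders rather than fitted values from this work:

```python
import numpy as np
from scipy.special import j0

# Illustrative branch frequencies F+/-(theta) from the mu = 0, nu = 1 term
# of the Bessel expansion above; parameter values are placeholders.
hbar, e = 1.0546e-34, 1.6022e-19

def branch_frequencies(theta, F0=26.25e3, k01_rel=4.5e-3, c=17.9e-10):
    """Upper/lower dHvA frequencies (in T) versus polar angle theta (rad)."""
    k00 = np.sqrt(2 * e * F0 / hbar)       # planar Fermi wave vector (1/m)
    kappaF = c * k00 / 3.0                 # reduced wave vector, c k00 / 3
    dF = 2 * F0 * k01_rel * j0(kappaF * np.tan(theta))
    return (F0 + dF) / np.cos(theta), (F0 - dF) / np.cos(theta)

theta = np.radians(np.linspace(0.0, 60.0, 121))
F_up, F_dn = branch_frequencies(theta)
# The splitting (F_up - F_dn) * cos(theta) vanishes where
# J_0(kappaF * tan(theta)) = 0, i.e., at the Yamaji angles (~25 deg here).
```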
Note that we had to allow for a ≈2^∘ azimuthal misalignment of the ZΓK rotation plane to account for the observed asymmetry between positive and negative polar angles in Fig. 3a) of the main text. Otherwise (for perfect alignment), the quantum-oscillation frequencies within the ZΓK plane are independent of k_3,1 and symmetric about θ = 0.

§.§.§ 3.6 Torque interaction Close to the Yamaji angles (±25^∘ for B ∈ ZΓK), we found a sudden halving of the quantum-oscillation frequency and a doubling of the quantum-oscillation amplitude for our large sample (see Fig. <ref>). The observed critical fields are highly hysteretic and depend only weakly on the magnetic-field sweep rate. However, they diverge quickly away from the Yamaji angle. A closer study of this feature revealed that it is caused by magnetic torque interaction <cit.>. Here, a large oscillatory magnetization, such as that induced by the de Haas-van Alphen oscillations in our samples, causes a non-negligible deflection of the magnetic-torque lever. This deflection leads to a reduction of the effective applied magnetic field in quasi-two-dimensional materials and a consequent extension of the quantum-oscillation period. Due to the simultaneous crossing of multiple Landau levels through the Fermi edge in these systems, the resulting quantum oscillations are highly non-sinusoidal and appear at fractions of the original frequency. In our case, we observe sawtooth-like quantum oscillations at the highest fields and close to the Yamaji angle in the bigger sample, but sinusoidal quantum oscillations in the smaller sample; the former shows a halving of the quantum-oscillation frequency and a doubling of the amplitude, whereas the latter does not. This corroborates torque interaction, which is larger in the larger sample, as the origin of the frequency change, and excludes an intrinsic origin for this observation.

§.§.§ 3.7 Spin Zeros and g-factor Figure <ref> shows the angular dependence of the quantum-oscillation amplitude, as determined by discrete summation of the Fourier transforms (Fig. <ref>) above 25 kT. Generally, the quantum-oscillation amplitude is described by a superposition of the sin(2θ) magnetic-torque angular dependence of a 2D electron gas, the Lifshitz-Kosevich reduction terms (temperature, Dingle, and spin reduction terms) <cit.>, and, in the case of multiple extremal orbits, their interference <cit.>. Besides the strong peaking of the quantum-oscillation amplitude at the Yamaji angles (B ∈ ZΓK: ±25^∘, ±46^∘; B ∈ ZΓL: +21^∘, +49^∘, −53^∘) (see also the main article) and the spin reduction term, all other components follow a smooth angular dependence. Thus it is possible to evaluate the spin splitting, i.e., the mean effective charge-carrier moment or g-factor of a Fermi-surface orbit, from the observation of spin zeros. In Fig. <ref> we observe an otherwise unexplained suppression of the quantum-oscillation amplitude at 13.5^∘±1.0^∘. Due to the warping of the Fermi surface and the presence of two extremal orbits, the amplitude is not fully suppressed. Following the spin reduction term, R_S = cos{π g m^*(θ)/2} ≡ 0, with m^*(θ) = m^*/cos(θ), and taking into account the cyclotron masses of m^*_β = (1.63±0.03) m_0 and m^*_α = (1.43±0.03) m_0 (see also the main article), this angle corresponds to a g-factor of g = (1.91±0.12). However, due to the periodicity of the spin reduction term, the determined g-factor is not unique, and other solutions g ∈ {3.18; 4.45; 5.71; ...} might be possible [Due to the large magnetic torque interaction experienced in our experiments, large-amplitude oscillations are strongly damped, leading to a flattening of the angular dependence.
In addition, at large angles, which are necessary to distinguish between the higher-order g-factors, the signal-to-noise ratio is rather poor and the integral FFT amplitude suffers from a dominating noise floor.]

§.§ 4. Density Functional Theory Calculations Relativistic density functional theory (DFT) electronic structure calculations including spin-orbit coupling were performed on a 60 × 60 × 60 k-mesh, with 18941 points in the irreducible wedge of the Brillouin zone. The spin-orbit (SO) coupling was treated non-perturbatively by solving the four-component Kohn-Sham-Dirac equation <cit.>. For the exchange-correlation potential, within the general gradient approximation (GGA), the parametrization of Perdew-Burke-Ernzerhof <cit.> was chosen. To obtain the rather small deviations from a purely 2D Fermi surface accurately, a self-adjusting k-mesh was used to calculate the Fermi vectors. Interpolating the potential from the dense k-mesh of the self-consistent calculation (216,000 k-points), the k-mesh was refined around the Fermi level iteratively to 1/32 of the original spacing, thus effectively covering approximately 2000 k-points in each direction of the Brillouin zone. The dHvA frequencies were evaluated on an angular mesh of 1^∘ and 200 "slices" of the Brillouin zone along the respective field direction. The calculated cross sections, compared with the experimental data, are shown in Fig. <ref>. The calculated averaged frequency F_0 was slightly adjusted (by 0.3 kT) to match the experimental value of 26.25 kT; the deviation is likely caused by the difference in lattice parameters due to thermal expansion. Note that the DFT calculations are based on the room-temperature lattice parameters presented in <cit.>. Quantum-oscillation and ARPES data, however, were taken at 100 mK and 13 K, respectively. For the GGA calculation, we obtain good qualitative agreement with the experimental data with respect to the shape and asymmetry of the Fermi surface. The dispersion along the z-direction, however, exceeds the experimental value by approximately a factor of two. The calculated Fermi velocities are slightly underestimated, and the corresponding bare band masses are somewhat overestimated. Simulating the Coulomb correlation in the Rh-4d shell in a mean-field way by applying the GGA+U scheme (U = 2.7 eV, J = 0.5 eV), the overall agreement with the experimental data is improved (see Fig. <ref>). For the applied value U = 2.7 eV, the dispersion along z agrees well with experiment. In contrast, the asymmetry of the calculated FS (see Fig. <ref>, right panel, ZΓL) is underestimated by the GGA+U scheme. Compared with the pure GGA calculations, however, the calculated Fermi velocities and the corresponding bare band masses are significantly improved with respect to the experimental data.

§ REFERENCES
[1] Hitachi High-Technologies Europe GmbH, Europark Fichtenhain A 12, 47807 Krefeld, Germany, http://www.hht-eu.com
[2] D. Shoenberg, Magnetic Oscillations in Metals (Cambridge University Press, 2009).
[3] I. Lifshitz and A. Kosevich, JETP 2, 636 (1956).
[4] F. Arnold, M. Naumann, T. Lühmann, A. P. Mackenzie, and E. Hassinger, arXiv:1706.08350.
[5] R. B. Dingle, Proc. Roy. Soc. A 211, 517 (1952).
[6] F. Arnold, A. Isidori, E. Kampert, B. Yager, M. Eschrig, and J. Saunders, arXiv:1411.3323 (2017).
[7] K. Yamaji, J. Phys. Soc. Jpn. 58, 1520 (1989).
[8] Due to the large magnetic torque interaction experienced in our experiments, large-amplitude oscillations are strongly damped, leading to a flattening of the angular dependence. In addition, at large angles, which are necessary to distinguish between the higher-order g-factors, the signal-to-noise ratio is rather poor and the integral FFT amplitude suffers from a dominating noise floor.
[9] H. Eschrig, M. Richter, and I. Opahle, in Relativistic Electronic Structure Theory, Part II: Applications, Theoretical and Computational Chemistry Vol. 13 (Elsevier, 2004), p. 723.
[10] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[11] P. Kushwaha, H. Borrmann, S. Khim, H. Rosner, P. J. W. Moll, D. A. Sokolov, V. Sunko, Y. Grin, and A. P. Mackenzie, arXiv:1706.07614.
{ "authors": [ "Frank Arnold", "Marcel Naumann", "Seunghyun Khim", "Helge Rosner", "Veronika Sunko", "Federico Mazzola", "Philip D. C. King", "Andrew P. Mackenzie", "Elena Hassinger" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170627140357", "title": "Quasi two-dimensional Fermi surface topography of the delafossite PdRhO$_2$" }
Courant Institute of Mathematical Sciences, New York University, New York, New York, USA [email protected] Center for Atmosphere Ocean Science, Courant Institute of Mathematical Sciences, New York University, New York, New York, USA [email protected]

The Koopman operator induced by a dynamical system is inherently linear and provides an alternate method of studying many properties of the system, including attractor reconstruction and forecasting. Koopman eigenfunctions represent the non-mixing component of the dynamics. They factor the dynamics, which can be chaotic, into quasiperiodic rotations on tori. Here, we describe a method through which these eigenfunctions can be obtained from a kernel integral operator, which also annihilates the continuous spectrum. We show that incorporating a large number of delay coordinates in constructing the kernel of that operator results, in the limit of infinitely many delays, in the creation of a map into the discrete spectrum subspace of the Koopman operator. This enables efficient approximation of Koopman eigenfunctions from high-dimensional data in systems with pure point or mixed spectra.

§ INTRODUCTION The tasks of dimension reduction and forecasting of time series are very common in the physical and engineering sciences, where the time series studied are often partial observations of a nonlinear dynamical system. A classical example of such time series is data collected from the Earth's climate system, where many of the active degrees of freedom are difficult to access via direct observations (e.g., subsurface ocean circulation). Moreover, the available observations typically mix together different physical processes operating on a wide range of spatial and temporal scales. For instance, in the climate system, the seasonal cycle and the El Niño Southern Oscillation (the latter evolving on interannual timescales) both have strong associated signals in sea surface temperature <cit.>. In such applications, identifying dynamically important, coherent patterns of variability from the data can enhance our scientific understanding and predictive capabilities of complex phenomena.

Ergodic theory, and in particular its operator-theoretic formulation <cit.>, provides a natural framework to address these objectives. In this framework, the focus is on the action of the dynamical system on spaces of observables (functions of the state), as opposed to the dynamical flow itself. The advantage of this approach, first realized in the seminal work of Koopman <cit.>, is that the action of a general dynamical system on spaces of observables is always linear. As a result, with appropriate regularity assumptions, the problem of identification and prediction of dynamically intrinsic coherent patterns can be formulated as an estimation problem for the spectrum of a linear evolution operator. In addition, for systems exhibiting ergodic behavior, spectral quantities such as eigenvalues and eigenfunctions can be statistically estimated from time-ordered data without prior knowledge of the state space geometry or the equations of motion. At the same time, spaces of observables are also infinite dimensional, so the issue of finite-dimensional approximation of (potentially unbounded) operators becomes relevant. Starting from the techniques proposed in <cit.>, the operator-theoretic approach to ergodic theory has stimulated the development of a broad range of techniques for data-driven modeling of dynamical systems.
These methods employ either the Koopman <cit.> or the Perron-Frobenius (transfer) operators <cit.>, which are duals to one another in appropriate function spaces. The goal common to these techniques is to approximate spectral quantities for the operator in question, such as eigenvalues, eigenfunctions, and spectral projections, from measured values of observables along orbits of the dynamics. To that end, a diverse range of approaches has been employed, including state space partitions <cit.>, harmonic averaging <cit.>, iterative methods <cit.>, dictionary/basis representations <cit.>, delay-coordinate embeddings <cit.>, and spectral-moment estimation <cit.>. Compared to observables identified by eigendecomposition techniques based on kernel integral operators that do not depend on the dynamics (e.g., covariance <cit.> or heat operators <cit.>, the latter of which have been popular in manifold learning applications), eigenfunctions of evolution operators are likely to offer higher physical interpretability and predictability, as they are determined from an operator intrinsic to the dynamical system. In particular, one of the key properties of Koopman or Perron-Frobenius eigenfunctions for ergodic dynamical systems is that they evolve periodically and with a single frequency (even if the underlying dynamical system is aperiodic), and thus have high predictability. This and a number of other attractive properties motivate the identification of such eigenfunctions from data. Yet, for systems of sufficient complexity, Koopman and Perron-Frobenius operators have significantly more complicated spectral behavior than kernel integral operators, generally exhibiting a continuous spectral component and/or non-isolated eigenvalues, which presents challenges to the construction of data-driven approximation techniques with spectral convergence guarantees. Indeed, to our knowledge, spectral convergence results for the data-driven approximation of Koopman eigenvalues and eigenfunctions have been limited to special cases such as quasiperiodic rotations on tori <cit.>, or systems observed through measurement functions lying in finite-dimensional invariant subspaces <cit.>.

The main contribution of our work is the construction of a data-driven approximation scheme for Koopman eigenvalues and eigenfunctions that provably converges for a broad class of ergodic dynamical systems and observation maps, encompassing many of the applications encountered in the physical and engineering sciences. Our approach will be based on a combination of ideas from delay-coordinate maps of dynamical systems <cit.>, kernel integral operators for machine learning <cit.>, and Galerkin approximation techniques for variational eigenvalue problems <cit.>. Using these tools, we will construct a compact kernel integral operator that commutes with the Koopman operator in an asymptotic limit of infinitely many delays, and employ the finite-dimensional common eigenspaces of these operators as Galerkin approximation spaces for the Koopman eigenvalue problem. We will show that orthonormal bases of these spaces can be stably and efficiently approximated from finitely many measurements taken near the attractor, and the resulting data-driven Galerkin schemes converge in the asymptotic limit of large data.

§ ASSUMPTIONS AND STATEMENT OF MAIN RESULTS A common underlying assumption in the statistical modeling of dynamical systems is ergodicity.
This assumption encapsulates the working principle that the global properties (with respect to μ) of an observable F can be obtained from a time series for F, namely, F(x_0),…,F(x_N-1), where x_0,…,x_N-1 is an unobserved trajectory on the state space of the dynamical system. Moreover, ergodicity implies that L^2 inner products between observables can be approximated by time correlations. Also, our methods rely on integral operators, and these can be approximated as matrices under the ergodic hypothesis. We now make our assumptions more precise.

Let M be a topological manifold, equipped with its Borel σ-algebra. Φ^t : M → M, t ∈ ℝ, is a continuous flow on M with an ergodic, Borel probability measure μ with a compact support X. F : M → ℝ^d is a continuous measurement function through which we collect a time-ordered data set consisting of N samples F(x_0), F(x_1), …, F(x_N-1), each F(x_n) lying in the d-dimensional data space. Here, x_n = Φ^nΔt(x_0), and Δt is a fixed sampling interval such that the map Φ^Δt is ergodic for the invariant measure μ.

The Koopman operator. Central to all our following discussions will be the concept of the Koopman operator. Koopman operators <cit.> act on observables by composition with the flow map, i.e., by time shifts. The space L^2(X,μ) of square-integrable, complex-valued functions on X will be our space of observables. Given an observable f ∈ L^2(X,μ) and time t ∈ ℝ, U^t : L^2(X,μ) → L^2(X,μ) is the operator defined as (U^t f) : x ↦ f(Φ^t(x)), for μ-a.e. x ∈ X. U^t is called the Koopman operator at time t associated with the flow. For measure-preserving systems, U^t is unitary, and has a well-defined spectral expansion consisting in general of both point and continuous parts lying in the unit circle <cit.>. The problems of mode decomposition and non-parametric prediction can both be stated in terms of the Koopman operator <cit.>. We will now describe an important tool for studying Koopman operators, namely their eigenfunctions.

Koopman eigenfunctions. Every eigenfunction z of U^t satisfies the following equation for some ω ∈ ℝ: U^t z = exp(iωt) z. Koopman eigenfunctions are particularly useful for prediction and dimension reduction in dynamical systems. This is because, as seen in (<ref>), the knowledge of an eigenfunction z at time t = 0 enables accurate predictions of z up to any time t, since U^t operates on z as a multiplication operator by a time-periodic, single-frequency multiplication factor. Moreover, it is possible to construct a dimension reduction map, sending the high-dimensional data F(x) ∈ ℝ^d to the vector (z_1(x), …, z_l(x)) ∈ ℂ^l, where l ≪ d, and the z_1, …, z_l are Koopman eigenfunctions corresponding to rationally independent frequencies ω_1, …, ω_l <cit.>. In this representation, the z_j can be thought of as "coordinates" corresponding to distinct periodic processes operating at the timescales 2π/ω_j. Also of interest (and in some cases easier to compute) are the projections of the observation map F onto the Koopman eigenfunctions, called Koopman modes <cit.>. Data-driven techniques for computing Koopman eigenvalues, eigenfunctions, and modes that have been explored in the past include methods based on generalized Laplace analysis <cit.>, dynamic mode decomposition (DMD) <cit.>, extended DMD (EDMD) <cit.>, Hankel matrix analysis <cit.>, and data-driven Galerkin methods <cit.>.
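Concretely, on a sampled trajectory the action of U^{qΔt} reduces to a shift of the time series, and ergodicity allows L^2 inner products to be estimated by time averages. The following is a minimal illustration of this principle (the function names are ours, not from the paper):

```python
import numpy as np

# Minimal illustration: on a time series f[n] = f(x_n) sampled along one
# orbit, U^{q dt} acts as a shift, and ergodicity lets us estimate L^2
# inner products by time averages.
def koopman_shift(f, q):
    """Samples of U^{q dt} f along the orbit; the record shortens by q."""
    return f[q:]

def ergodic_inner(g, f, q=0):
    """Time-average estimate of <g, U^{q dt} f> = int g* (f o Phi^{q dt}) dmu."""
    n = len(f) - q
    return np.mean(np.conj(g[:n]) * f[q : q + n])
```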
The last of these approaches (data-driven Galerkin methods), as well as the related work in <cit.>, additionally addresses the problem of nonparametric prediction of observables and probability densities.

Let 𝒟 be the closed subspace of L^2(X,μ) spanned by the eigenfunctions of U^t, and 𝒟^⊥ its orthogonal complement. As is well known <cit.>, and will be discussed in more detail in Section <ref>, the subspaces 𝒟 and 𝒟^⊥ represent the quasiperiodic and mixing (chaotic) components of the dynamics, respectively. Moreover, they are both invariant under U^t for every time t ∈ ℝ, thus inducing an invariant splitting <cit.> L^2(X,μ) = 𝒟 ⊕ 𝒟^⊥. Systems for which 𝒟 contains non-constant functions and 𝒟^⊥ is nonzero are called mixed-spectrum systems.

Kernel integral operators. The method that we will describe in this paper relies heavily on kernel integral operators. A kernel is a function k : M × M → ℝ, measuring the similarity between pairs of points on M. Kernel functions can be of various designs, and are meant to capture the nonlinear geometric structures of data; see for example <cit.>. One advantage of using kernels is that they can be defined so as to operate directly on the data space, e.g., k(x, y) = κ(F(x), F(y)) for some function κ : ℝ^d × ℝ^d → ℝ of appropriate regularity. Defined in this manner, k can be evaluated using measured quantities F(x) without explicit knowledge of the underlying state x. Associated with a square-integrable kernel k ∈ L^2(X × X, μ×μ) is a compact integral operator K : L^2(X,μ) → L^2(X,μ) such that Kf(x) := ∫_X k(x,y) f(y) dμ(y). In some cases, we will make the following assumptions on kernels. The kernel k : M × M → ℝ is (i) symmetric and continuous; (ii) strictly positive-valued.

Overview of approach. We will address the eigenvalue problem for U^t by solving an eigenvalue problem for a kernel integral operator P_Q, which is accessible from data, and in the limit Q → ∞ commutes with U^t. Since commuting operators have common eigenspaces, this will allow us to compute eigenfunctions of U^t through expansions in eigenbases obtained from P_Q. These operators have Markovian kernels p_Q : M × M → ℝ (i.e., p_Q ≥ 0 and ∫_X p_Q(x, ·) dμ = 1, for μ-a.e. x ∈ M), whose construction begins from a family of distance-like functions d_Q : M × M → ℝ, defined by d^2_Q(x,y) = 1/Q ∑_q=0^Q-1 ‖F(Φ^qΔt(x)) − F(Φ^qΔt(y))‖^2. Here, Q is a positive integer parameter, and ‖·‖ the canonical 2-norm on ℝ^d. Intuitively, d_Q(x,y) assigns a distance-like quantity between points x and y equal to the root-mean-square distance between Q consecutive "snapshots" of the observable F, measured along dynamical trajectories starting from x and y. In other words, d_Q corresponds to a distance between data in delay-coordinate space with Q delays. Several of our results will depend on the asymptotic behavior of d_Q as Q → ∞, which we will study in detail.

Composing d_Q with a continuous shape function h : ℝ → ℝ leads to a kernel k_Q : M × M → ℝ, k_Q = h ∘ d_Q, assigning a pairwise measure of similarity between points in M. In this paper, we will nominally work with Gaussian shape functions, h(s) = e^-s^2/ϵ, parameterized by a bandwidth parameter ϵ > 0, so that k_Q(x,y) = e^-d^2_Q(x,y)/ϵ. Such kernels satisfy Assumption <ref>(i), (ii). They are popular in manifold learning applications <cit.> due to their localizing behavior as ϵ → 0 and their ability to approximate heat kernels, but our results also hold for many other kernel choices; e.g., <cit.>.
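The construction of d_Q and k_Q translates directly into a numerical procedure on the measured snapshots F(x_0), …, F(x_N-1). The sketch below (our own illustration; array names and the bandwidth value are placeholders) forms the pairwise matrix of squared delay distances and the corresponding Gaussian kernel matrix:

```python
import numpy as np

# Sketch of the data-driven analogues of d_Q and k_Q: given snapshots
# X[n] = F(x_n) in R^d, form delay vectors with Q lags and the Gaussian
# kernel matrix; eps is an illustrative bandwidth.
def delay_embed(X, Q):
    """Rows: (F(x_n), F(x_{n+1}), ..., F(x_{n+Q-1})) / sqrt(Q)."""
    N = X.shape[0] - Q + 1
    return np.stack([X[n : n + Q].ravel() for n in range(N)]) / np.sqrt(Q)

def gaussian_kernel(X, Q, eps=1.0):
    """K[i, j] = exp(-d_Q(x_i, x_j)^2 / eps)."""
    Y = delay_embed(X, Q)
    sq = np.sum(Y**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T   # pairwise d_Q^2
    return np.exp(-np.maximum(d2, 0.0) / eps)
```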
Having constructed k_Q, the kernel p_Q associated with the integral operator P_Q is obtained via a Markov normalization procedure <cit.>, described in Section <ref>. With these definitions, we are ready to state our main results.

Under Assumption <ref>, there exists a real, self-adjoint, ergodic, compact Markov operator P : L^2(X,μ) → L^2(X,μ), which commutes with U^t, and is a limit of operators P_1, P_2, … (also real, self-adjoint, ergodic, compact, and Markov) in the L^2(X,μ) operator-norm topology. The operators P_Q have Markov kernels p_Q : M × M → ℝ satisfying the conditions in Assumption <ref>, and determined from delay-coordinate mapped observations F(x), F(Φ^Δt(x)), …, F(Φ^(Q-1)Δt(x)) with Q delays. Moreover, the kernel p : M × M → ℝ of P lies in L^∞(X × X, μ×μ), and p_Q converges to p in L^p(X × X, μ×μ) norm with 1 ≤ p < ∞.

The norm convergence of the compact operators P_Q to P leads to the following spectral convergence result (e.g., Section 7 in <cit.> and <cit.>). Under the assumptions of Theorem <ref>, the following hold:
* For every nonzero eigenvalue λ of P with multiplicity α and every neighborhood S ⊂ ℂ of λ such that spec(P) ∩ S = {λ}, there exists Q_0 ∈ ℕ such that for all Q > Q_0, spec(P_Q) ∩ S contains α elements converging as Q → ∞ to λ.
* Let Π be any projector onto the eigenspace W_λ of P at eigenvalue λ. Let also Π_Q be any projector onto the union of the eigenspaces of P_Q corresponding to the eigenvalues in spec(P_Q) ∩ S. Then, as Q → ∞, Π_Q converges strongly to Π. Moreover, the gap (distance) between W_λ and the range of Π_Q, defined as in <cit.>, converges to zero.

Theorem <ref> below is a continuation of Theorem <ref>, and can be used to conclude some useful properties of the operator P. Let Φ^t be a measurable flow on a compact set X supporting an invariant ergodic probability measure μ, and T be a kernel integral operator with a real-valued, symmetric kernel τ ∈ L^2(X × X, μ×μ) such that T commutes with U^t (e.g., T = P from Theorem <ref>). Then:
* τ lies in the tensor product subspace 𝒟 ⊗ 𝒟, and is invariant under the flow Φ^t × Φ^t.
* 𝒟 and 𝒟^⊥ are invariant under T. Moreover, the range of T is a subspace of 𝒟, 𝒟^⊥ is a subspace of the null space of T, and both the range and the null space of T are invariant under U^t.
Moreover, if the range of T contains non-constant functions:
* There exists a measurable map π : X → 𝕋^D for some D ∈ ℕ, whose components consist of joint eigenfunctions of T and U^t, such that π factors Φ^t into a rotation on the torus by a vector ω⃗ ∈ ℝ^D, i.e., π(Φ^t(x)) = π(x) + ω⃗t mod 1 for μ-a.e. x ∈ X.
* There exists a choice of dimension D from (iii) and a symmetric kernel τ̂ ∈ L^2(𝕋^D × 𝕋^D) on the D-torus, such that τ(x,y) = τ̂(π(x), π(y)) for μ×μ-a.e. (x,y) ∈ X × X.

Note that Theorems <ref> and <ref> hold for operators acting on L^2 spaces only. To be able to say more about the behavior of these operators on spaces of continuous functions, an additional assumption on the Koopman eigenfunctions and the observation map will be needed. In what follows, F_𝒟 will denote the orthogonal projection of F onto the quasiperiodic subspace 𝒟 from (<ref>).

All Koopman eigenfunctions, as well as the quasiperiodic component F_𝒟 of the observation map, are continuous.

Although we will explicitly assume that F_𝒟 is continuous, we are not aware of a counter-example where the observation map F is continuous (in accordance with Assumption <ref>), the Koopman eigenfunctions z are continuous, but F_𝒟 is not continuous.
On the other hand, smooth dynamical systems on smooth manifolds with discontinuous Koopman eigenfunctions (and in fact, pure point spectra) are known to exist, in both discrete- <cit.> and continuous-time settings <cit.>. This indicates that the continuity requirement on Koopman eigenfunctions in Assumption <ref> is complementary to the assumed continuity of the dynamical flow in Assumption <ref>. The following theorem establishes a number of properties of P under these additional continuity assumptions.

Let Assumptions <ref> and <ref> hold. Then, the kernel p of the operator P from Theorem <ref> is uniformly continuous on a full-measure, dense subset of X × X. As a result:
* P maps L^2(X,μ) into the space of μ-a.e. continuous functions on X.
* P compactly maps C^0(X) into itself.
* The norms of the operators P in (i) and (ii) are bounded above by ‖p‖_L^∞(X × X).
* For every f ∈ C^0(X), P_Q f is a sequence of continuous functions converging μ-a.e. to P f.

The class of integral operators P_Q studied in this work has previously been used for dimension reduction and mode decomposition of high-dimensional time series (e.g., <cit.>). In these works, a phenomenon called "timescale separation" in <cit.> was observed; namely, at increasingly large Q the eigenfunctions of P_Q capture increasingly distinct timescales of a multiscale input signal. Theorems <ref> and <ref> provide an interpretation of this observation from the point of view of spectral properties of Koopman operators; in particular, from the fact that P_Q has, in the limit Q → ∞, common eigenfunctions with U^t, and the latter capture distinct timescales associated with the eigenfrequencies ω.

Even though in this work we focus on the class of Markov operators P_Q, analogous results also hold for other classes of integral operators for data analysis that employ delays, including the covariance operators used in singular spectrum analysis (SSA) <cit.> and the related Hankel matrix analysis <cit.>. Collectively, these results establish a connection between two major branches of data analysis techniques for dynamical systems, namely those based on Koopman operators, and those based on kernel integral operators.

Theorems <ref>–<ref> are proved in Section <ref>. A result analogous to Theorem <ref>, but restricted to smooth manifolds, smooth observation maps, and Koopman operators with pure point spectrum and smooth eigenfunctions, was presented in <cit.>. Theorem <ref> generalizes this result to non-smooth state spaces and Koopman operators with mixed spectra. With this result, the eigenvalues and eigenfunctions of P_Q consistently approximate those of P, and the latter can be used in turn to construct orthonormal bases of Koopman eigenspaces. The availability of such bases is useful in many applications, including approximation techniques for the eigenvalues and eigenfunctions of U^t or its generator (defined in Section <ref> ahead). One such technique will be presented in Section <ref>, utilizing the eigenvalues and eigenfunctions of P to perform diffusion regularization of the generator, and then solving the eigenvalue problem for the generator via a Petrov-Galerkin method. Note that the Markov property of P is not trivial; for instance, it does not hold for covariance kernels. The commutativity between U^t and P, in conjunction with the Markov property, leads to the well-posedness of these schemes despite the presence of a continuous spectrum of V.
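For orientation, the following sketch assembles a data-driven analogue of P from a kernel matrix K (e.g., from the previous sketch) and computes its leading eigenpairs. We assume here a diffusion-maps-type normalization consistent with the functions ρ_Q = K_Q 1_X and σ_Q = K_Q(1/ρ_Q) introduced in Section <ref>; the paper's precise normalization procedure is specified there, so this block should be read as a plausible stand-in rather than the authors' exact construction:

```python
import numpy as np

# Sketch of a data-driven Markov operator built from a symmetric kernel
# matrix K, assuming a diffusion-maps-type normalization (a stand-in for
# the procedure of Section 3, which is not shown in this excerpt).
def markov_eig(K, n_eig=20):
    rho = K.sum(axis=1)                        # discrete analogue of K_Q 1_X
    Kt = K / np.outer(rho, rho)                # k(x,y) / (rho(x) rho(y))
    q = Kt.sum(axis=1)                         # equals sigma/rho pointwise
    P = Kt / q[:, None]                        # Markov matrix: P @ 1 = 1
    S = Kt / np.sqrt(np.outer(q, q))           # symmetric conjugate of P
    lam, u = np.linalg.eigh(S)                 # real spectrum, lam <= 1
    order = np.argsort(lam)[::-1][:n_eig]
    phi = u[:, order] / np.sqrt(q)[:, None]    # eigenvectors of P on samples
    return lam[order], phi
```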
Physical measures. A point x ∈ M is said to be in the basin of the measure μ with respect to the discrete-time map Φ^Δt if lim_N→∞ 1/N ∑_n=0^N-1 f(Φ^nΔt(x)) = ∫_X f(y) dμ(y), ∀ f ∈ C^0(M). The basin ℬ_μ of an invariant ergodic measure μ always includes μ-a.e. point in the support of μ (in this case, X), and is a forward-invariant set. An important property that we need the invariant measure μ to have is that it is physical <cit.>. Moreover, we will require that the dynamics has a suitable absorbing ball property. These assumptions can be summarized as follows:

The set ℬ_μ of points satisfying (<ref>) has positive Lebesgue measure, i.e., the measure μ is physical. Moreover, there exists a subset 𝒱 ⊆ ℬ_μ, also of positive Lebesgue measure, such that for every x_0 ∈ 𝒱 there exists a compact set 𝒰 (which may depend on x_0, and necessarily includes X), such that the orbit x_n = Φ^nΔt(x_0) enters 𝒰 and never leaves it.

Examples where Assumption <ref> is satisfied include: (i) ergodic flows on compact manifolds with Lebesgue absolutely continuous, fully supported, invariant measures, in which case 𝒰 = 𝒱 = ℬ_μ = M = X; (ii) certain classes of dissipative flows on potentially noncompact manifolds (e.g., the Lorenz 63 (L63) system on M = ℝ^3 <cit.> studied in Section <ref> ahead); and (iii) certain classes of dissipative partial differential equations possessing inertial manifolds and physical measures <cit.>.

The following result shows that under Assumptions <ref>–<ref>, the nonzero eigenvalues of P_Q and the corresponding (continuous) eigenfunctions can be approximated to any degree of accuracy by data-driven operators P_Q,N, acting on the finite-dimensional Hilbert space L^2(𝒰, μ_N) associated with the sampling probability measure μ_N = ∑_n=0^N-1 δ_x_n / N. These operators are constructed from time-ordered measurements F(x_0),…,F(x_N-1) of the observable F analogously to (<ref>)–(<ref>), replacing throughout integrals with respect to the invariant measure μ by integrals with respect to the sampling measure μ_N. Moreover, because P_Q and P_Q,N act on different Hilbert spaces, we will approach the problem of comparing their eigenvalues and eigenfunctions through integral operators P''_Q : C^0(𝒰) → C^0(𝒰) and P''_Q,N : C^0(𝒰) → C^0(𝒰), defined analogously to P_Q and P_Q,N, respectively, but acting on the same Banach space of continuous functions on 𝒰. A complete description of these constructions will be made in Section <ref>.

Let Assumptions 1–4 hold. Then, for any initial point x_0 ∈ 𝒱,
* Every eigenfunction of P_Q (P_Q,N) at nonzero eigenvalue extends to a continuous eigenfunction of P''_Q (P''_Q,N), corresponding to the same eigenvalue.
* As N → ∞, P''_Q,N converges in spectrum to P''_Q in the sense of Corollary <ref>.

Theorem <ref> will be proved in Section <ref>. Figure <ref> shows numerical eigenfunctions of P_Q,N obtained from data generated by two mixed-spectrum dynamical systems, described in (<ref>) and (<ref>), respectively. In both examples, we start with a C^∞ vector field V⃗ on a smooth manifold M. In the first example, M = X = 𝕋^4, so 𝒰 = X = M; in the second example, M = ℝ^3 × S^1 and X = X_Lor × S^1 ⊂ M, where X_Lor is the Lorenz 63 attractor embedded in ℝ^3. Eigenfunctions of the operator P_Q,N are then computed using a large number of delays, Q = 2000.
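Part (i) of the theorem is constructive: an eigenvector of the N × N Markov matrix at a nonzero eigenvalue extends off the samples through the integral formula defining P''_Q,N. A minimal sketch of this Nyström-type extension (assuming the data-driven Markov kernel p can be evaluated at out-of-sample states with the same kernel and normalization used on the training samples) is:

```python
import numpy as np

# Nystrom-type extension behind Theorem 5(i):
#   varphi(x) = (1 / lam) * (1 / N) * sum_n p(x, x_n) * phi[n],
# with phi an eigenvector of the N x N Markov matrix at eigenvalue lam != 0.
def nystrom_extend(p_row, phi, lam):
    """p_row[n] = p(x, x_n); returns the extended eigenfunction at x."""
    return p_row @ phi / (lam * len(phi))
```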
Using the eigenvalues and eigenfunctions of P_Q,N, we will also construct data-driven Galerkin schemes for the eigenvalue problem of the generator, which are structurally identical to their counterparts formulated in terms of the eigenvalues and eigenfunctions of P. Because we do not assume a priori knowledge of the vector field of the dynamics and/or closed-form expressions for the eigenfunctions of P_Q,N, these schemes will estimate the action of the generator on eigenfunctions through finite-difference approximations at the sampling interval Δt. In effect, Δt will play the role of an additional asymptotic approximation parameter, such that the data-driven solutions converge in a suitable joint limit of vanishing sampling interval (Δt → 0), large data (N → ∞), infinitely many delays (Q → ∞), and infinite Galerkin approximation space dimension. This convergence result, along with minimal regularity requirements on the dynamical flow and the kernel, will be stated in a precise manner in Proposition <ref> and Assumption <ref>, respectively. Note that, intuitively, our data-driven Galerkin framework for the generator V requires Δt as an additional approximation parameter over methods that approximate the Koopman subgroup generated by U^Δt at a fixed time step Δt, since V encodes the information of the entire Koopman group, parameterized by the real time parameter t.

Outline of the paper. In Section <ref>, we review some important concepts from the spectral theory of dynamical systems. In Section <ref>, we construct the integral operator P_Q, which is the key tool of our methods and is also the operator described in Theorems <ref>–<ref>. Next, we prove these theorems and Corollary <ref> in Section <ref>. In Section <ref>, we present a Galerkin method for the eigenvalue problem for the Koopman generator, with a small amount of diffusion added for regularization, formulated in the eigenbasis of P. In Section <ref>, we introduce the data-driven realization of P_Q, and establish the spectral convergence properties stated in Theorem <ref>, along with the convergence properties of the associated data-driven Galerkin scheme for the generator. In Section <ref>, the methods are applied to two mixed-spectrum flows, followed by a discussion of the results.

§ OVERVIEW OF SPECTRAL METHODS FOR DYNAMICAL SYSTEMS In this section, we review some concepts from the spectral theory of dynamical systems and establish some facts about Koopman eigenfunctions. Henceforth, we use the notations ⟨f, g⟩ = ∫_X f^* g dμ and ‖f‖ = ⟨f, f⟩^1/2 to represent the inner product and norm of L^2(X,μ), respectively.

Generator of a flow. By continuity of the flow Φ^t, the family of operators U^t is a strongly continuous, 1-parameter group of unitary transformations of the Hilbert space L^2(X,μ). By Stone's theorem <cit.>, any such family has a generator V, which is a skew-adjoint operator with a dense domain D(V) ⊂ L^2(X,μ), defined as V f := lim_t→0 1/t (U^t f − f), f ∈ D(V). The operators U^t and V share the same eigenfunctions; in particular, z ∈ D(V) with U^t z = e^iωt z satisfies V z = iω z. In light of (<ref>) and the above relation, we can interpret the quantity ω ∈ ℝ as a frequency intrinsic to the dynamical system (which we sometimes refer to as an "eigenfrequency").
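In the data-driven setting anticipated above, the action of V on a time series f_n = f(x_n) can be approximated by finite differences at the sampling interval Δt; a minimal illustration (ours) is:

```python
import numpy as np

# Finite-difference estimate of the generator on a time series f[n] = f(x_n):
# since (U^{dt} f)(x_n) = f[n + 1], a central difference gives
# (V f)(x_n) ~ (f[n+1] - f[n-1]) / (2 dt), with O(dt^2) error for smooth f.
# Applied to samples of an eigenfunction z, it returns ~ i * omega * z.
def generator_fd(f, dt):
    return (f[2:] - f[:-2]) / (2.0 * dt)       # values at x_1, ..., x_{N-2}
```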
Vector fields as generators. If we start with a vector field V⃗ on a C^1 manifold M, then under appropriate regularity conditions (for example, V⃗ is locally Lipschitz continuous and satisfies suitable growth bounds at infinity), this vector field induces a C^1 flow Φ^t : M → M defined for all t ∈ ℝ. Suppose that there is a compact invariant set X ⊆ M with an ergodic invariant measure μ. This set X is not necessarily a submanifold, and may not even have any differentiability properties. Nevertheless, (X,Φ^t,μ) is an ergodic dynamical system with an associated strongly-continuous, unitary group of Koopman operators U^t. Acting on C^1(M) functions restricted to X, the generator V of this group coincides with the vector field V⃗, the latter viewed as an operator V⃗ : C^1(M) → C^0(M). For example, in quasiperiodic systems, X = M = 𝕋^m, V⃗ generates a rotation, and μ is equivalent to the Lebesgue volume measure. On the other hand, for the Lorenz attractor (see (<ref>)), M = ℝ^3, V⃗ is smooth and dissipative, X is a compact subset with non-integer fractal dimension <cit.>, and μ is supported on X.

Eigenfunctions as factor maps. We state the following properties of a Koopman eigenfunction z of an ergodic dynamical system.
* If z corresponds to a nonzero eigenfrequency ω, then it has zero mean with respect to the invariant measure μ. This can be concisely expressed as ⟨1, z⟩ = 0.
* The flow Φ^t is semi-conjugate to the irrational rotation by ωt on the unit circle, with z acting as a semiconjugacy map. This follows directly from (<ref>). Since the eigenfunctions are L^2 equivalence classes, the semiconjugacy is measure-theoretic (holds μ-a.e.), but would be C^r if the eigenfunctions have a C^r representation.
* Normalized eigenfunctions with ‖z‖ = 1 have |z(x)| = 1 for μ-a.e. x ∈ X, by (<ref>). As a result, the map z can now be viewed as a projection onto a circle in a measure-theoretic sense, z(x) ∈ S^1 for μ-a.e. x ∈ X.

Eigenfunctions form a group. Another important property of Koopman eigenfunctions for ergodic dynamical systems is that they form a group under multiplication. That is, the product of two eigenfunctions of U^t is again an eigenfunction, because of the following relation: U^t z_i = exp(itω_i) z_i, i ∈ {1,2}, U^t(z_1 z_2) = (U^t z_1)(U^t z_2) = exp(it(ω_1+ω_2)) z_1 z_2. Moreover, an analogous relation holds for the eigenfunctions and eigenvalues of V. The fact that products of Koopman eigenfunctions are Koopman eigenfunctions leads to the following result about products of elements of 𝒟 with elements of 𝒟^⊥.

Let Φ^t be an ergodic flow on a probability space (X,μ) such that U^t has a mixed spectrum. Then, for every f ∈ 𝒟 and g ∈ 𝒟^⊥ for which fg ∈ L^2(X,μ), fg lies in 𝒟^⊥.

The eigenvalues of V are closed under integer linear combinations. Moreover, if all corresponding eigenfunctions are continuous, they are generated by a finite set of rationally independent eigenvalues iω_1,…,iω_m. That is, every eigenvalue of V is simple, and has the form iω_a⃗ with ω_a⃗ = ∑_j=1^m a_j ω_j for some a⃗ = (a_1,…,a_m) ∈ ℤ^m. Moreover, the corresponding eigenfunction is given by z_a⃗ = z_1^a_1 ⋯ z_m^a_m, where z_j is the eigenfunction at eigenvalue iω_j. By virtue of (<ref>), the evolution of every observable f ∈ 𝒟 under U^t has the closed-form expression U^t f = ∑_a⃗∈ℤ^m f̂_a⃗ e^iω_a⃗ t z_a⃗, f̂_a⃗ = ⟨z_a⃗, f⟩, which can be evaluated given knowledge of finitely many generating eigenfunctions and eigenfrequencies.

The following is a generalization of Property 2 of Koopman eigenfunctions listed above. Given an arbitrary collection {z_a⃗_1, z_a⃗_2, …, z_a⃗_l} of l Koopman eigenfunctions, there exists a map π : X → ℂ^l with π(x) = (z_a⃗_1(x), …, z_a⃗_l(x)), for μ-a.e. x ∈ X, such that:
* The image π(X) is a torus of dimension D ≤ min{m, l}, with D = l if ω_a⃗_1, …, ω_a⃗_l are rationally independent.
* The flow (Φ^t, μ) on X is semi-conjugate to an ergodic rotation (Ω^t, ν) on 𝕋^D, with ν the Haar measure (i.e., π ∘ Φ^t = Ω^t ∘ π, μ-a.e.), associated with a frequency vector whose components are a subset of {ω_a⃗_1, …, ω_a⃗_l}.
* Every Koopman eigenfunction z whose corresponding eigenfrequency is a linear combination of the ω_a⃗_1, …, ω_a⃗_l satisfies z(x) = ζ(π(x)) for μ-a.e. x ∈ X, where ζ ∈ C^∞(𝕋^D) is a smooth Koopman eigenfunction of the ergodic rotation on the D-torus corresponding to the same eigenfrequency.
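The closed-form evolution on 𝒟 above suggests a simple forecasting recipe: estimate the expansion coefficients f̂_a⃗ by ergodic time averages, and propagate each eigenfunction by its phase factor. A hypothetical sketch (Z holds samples of a few eigenfunctions along the orbit, normalized to unit norm in the time-average inner product):

```python
import numpy as np

# Hypothetical forecasting sketch based on the closed-form evolution on D:
# Z[:, a] holds samples z_a(x_n) of a few eigenfunctions, omega[a] the
# matching eigenfrequencies; f holds samples of an observable in D.
def koopman_forecast(Z, omega, f, t):
    """Forecast U^t f on the sampled states for f (approximately) in D."""
    f_hat = np.conj(Z).T @ f / len(f)          # <z_a, f> by ergodic average
    return Z @ (np.exp(1j * omega * t) * f_hat)
```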
If m > 1, the set of eigenvalues {iω_a⃗}_a⃗∈ℤ^m is dense on the imaginary axis. This property adversely affects the stability of numerical approximations of Koopman eigenvalues and eigenfunctions even in systems with pure point spectrum, necessitating the use of regularization <cit.>. We will return to this point in Section <ref>.

Let Δt > 0 be as in Assumption <ref>. Then, the orthogonal projection π_ω f of an observable f ∈ L^2(X,μ) onto the eigenspace of U^Δt corresponding to the eigenvalue e^iωΔt of U^Δt is given by π_ω f = lim_N→∞ 1/N ∑_n=0^N-1 e^-iωnΔt U^nΔt f. Moreover, π_ω ≡ 0 if iω is not an eigenvalue of the generator. Otherwise, U^Δt π_ω f = e^iωΔt π_ω f.

Mixing and weak mixing. An observable f ∈ L^2(X,μ) is said to be mixing if for all g ∈ L^2(X,μ), lim_t→∞ ⟨g, U^t f⟩ = 0; it is said to be weak-mixing if lim_T→∞ 1/T ∫_0^T |⟨g, U^s f⟩| ds = 0. The latter is equivalent to the requirement that for Lebesgue-almost every Δt ∈ ℝ, lim_N→∞ 1/N ∑_n=0^N-1 |⟨g, U^nΔt f⟩| = 0. The flow Φ^t is said to be (weak-) mixing if f is (weak-) mixing for all f ∈ L^2(X,μ). It is known that every f ∈ 𝒟^⊥ is weak-mixing (see, e.g., the Mixing Theorem, p. 45 in <cit.>), whereas no observable in 𝒟 is weak-mixing. Thus, the component 𝒟, often called the quasiperiodic subspace, shows no decay of correlation, unlike its complement 𝒟^⊥, which represents the chaotic component of the dynamics. In addition, weak-mixing observables in 𝒟^⊥ and observables in 𝒟 have a useful pointwise decorrelation property:

Let f ∈ 𝒟^⊥ and g ∈ 𝒟. Then, for μ-a.e. x, y ∈ X, lim_N→∞ 1/N ∑_n=0^N-1 g^*(Φ^nΔt(x)) f(Φ^nΔt(y)) = 0.

Without loss of generality, we may assume that g is an eigenfunction of U^Δt with eigenvalue e^iωΔt. Then, lim_N→∞ 1/N ∑_n=0^N-1 g^*(Φ^nΔt(x)) f(Φ^nΔt(y)) = g^*(x) lim_N→∞ 1/N ∑_n=0^N-1 e^-inωΔt f(Φ^nΔt(y)), which is equal to g^*(x) π_ω f(y) by Lemma <ref>. The latter is equal to zero since f ∈ 𝒟^⊥.
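Numerically, the harmonic average defining π_ω can be approximated by a finite sum along one trajectory; evaluated on samples f_n = f(x_n), it estimates (π_ω f)(x_0), which is small unless iω is an eigenvalue of the generator. A minimal sketch (ours):

```python
import numpy as np

# Finite-sum version of the harmonic average in the Lemma above; along the
# orbit of x_0 it estimates (pi_omega f)(x_0), which vanishes unless
# i * omega is an eigenvalue of the generator.
def harmonic_average(f, omega, dt):
    n = np.arange(len(f))
    return np.mean(np.exp(-1j * omega * n * dt) * f)
```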
§ KERNEL INTEGRAL OPERATORS FROM DELAY-COORDINATE MAPPED DATA

§.§ Choice of kernel

Consider a kernel integral operator of the class (<ref>) associated with an L^2 kernel k : M×M → ℝ. Then, the following properties hold (e.g., <cit.>):
* K is a Hilbert-Schmidt, and therefore compact, operator on L^2(X,μ), with operator norm bounded by ‖ k ‖_L^2(X×X).
* If k is symmetric, then K is self-adjoint.
* If k is C^0, then Kf is also C^0 for every f ∈ L^2(X,μ).
* If M is a C^r manifold and k is C^r, then Kf is also C^r for every f ∈ L^2(X,μ).

As stated in Section <ref>, we will work with kernels of the form
k_Q(x,y) = h(d_Q(x,y)),
where h is a continuous shape function on ℝ, and d_Q : M×M → [0,∞) is the distance-like function on M from (<ref>), parameterized by the number of delays Q. Kernels of this class are sometimes referred to as stationary kernels <cit.>, as they depend only on distances between data points. For example, in (<ref>), we used a Gaussian shape function, which is popular in manifold learning and other related geometrical data analysis techniques. Note that d_Q is symmetric, non-negative, and satisfies the triangle inequality, but depending on the properties of F and the number of delays it may vanish on distinct points. That is, d_Q is a pseudo-distance on M, induced from delay-coordinate mapped data with Q delays.

The kernels in (<ref>) satisfy Assumption <ref>(i), and the associated kernel integral operators K_Q have all four properties listed above. In addition, if h is strictly positive, k_Q satisfies Assumption <ref>(ii). The behavior of integral operators associated with other classes of kernels, e.g., the covariance operators employed in SSA and Hankel matrix analysis induced by inner products in data space, can be studied via techniques similar to those presented below. However, it should be kept in mind that the Markov normalization procedure described in Section <ref> (which will be important for the well-posedness of the Galerkin schemes in Sections <ref> and <ref>) requires that the kernel be sign-definite. Another consideration to keep in mind is that the ability to approximate Koopman eigenfunctions with our techniques depends on the "richness" of the range of K_Q. As can be readily verified, the operator K_Q constructed from covariance kernels in d-dimensional data space (as in Assumption <ref>) has at most a dQ-dimensional range, whereas the corresponding operators associated with Gaussian kernels, as well as other non-polynomial kernels, typically have infinite-dimensional range for any Q. Our approach should also be applicable with little modification to families of kernels of the form
k̃_Q(x,y) = (1/Q) ∑_q=0^Q-1 h(d_1(Φ^qΔ t(x), Φ^qΔ t(y))),
where the averaging takes place after application of the shape function.

Lemma <ref> below states some useful properties of K_Q associated with strictly positive kernels. In what follows, 1_S will denote the constant function equal to 1 on a set S.

Under Assumptions <ref> and <ref>(ii), for any Q ∈ ℕ, the functions ρ_Q = K_Q 1_X and σ_Q = K_Q(1/ρ_Q) are continuous and positive. Moreover, restricted to X, they are bounded away from zero.

The claims follow directly from the compactness of X and the fact that k_Q|_X×X is a continuous function, bounded away from zero.

Intuitively, ρ_Q can be thought of as a "sampling density" on X. For instance, if X were a manifold embedded in ℝ^Qd by a delay-coordinate map constructed from F, then, up to an ϵ-dependent scaling, ρ_Q would approximate the density of the invariant measure μ relative to the volume measure associated with that embedding. In a number of applications, such as statistical learning on manifolds <cit.>, one-parameter families of integral operators such as K_Q and P_Q are studied in the limit ϵ → 0, where under certain conditions they can be used to approximate generators of Markov semigroups, one of the primary examples being the Laplace-Beltrami operator on Riemannian manifolds. Here, the fact that the state space X may not (and in general will not) be smooth precludes us from taking such limits unconditionally. However, according to Theorem <ref>(ii), passing first to the limit Q → ∞ allows one to view K and P as operators on functions on a smooth manifold, namely a D-dimensional torus, and to study the small-ϵ behavior of these operators in that setting.

§.§ Asymptotic behavior in the infinite-delay limit

To study the behavior of K_Q in the limit of infinitely many delays, Q → ∞, we first consider the properties of the pseudometric d_Q in the same limit. The latter can be studied in turn through a useful (nonlinear) map Ψ : C^0(X) → L^∞(X×X, μ×μ), which maps a given observation function F into a (squared) pseudo-distance on X, namely,
Ψ(F)(x,y) := lim_Q→∞ Ψ_Q(F)(x,y), Ψ_Q(F)(x,y) := (1/Q) ∑_q=0^Q-1 ‖ F(Φ^qΔ t(x)) − F(Φ^qΔ t(y)) ‖^2.
In what follows, d_X : X×X → ℝ will denote the metric X inherits from M.
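For concreteness, the following Python/SciPy sketch assembles the pseudo-distance d_Q and a Gaussian kernel matrix of the stationary form above from a time-ordered data matrix. The function name, the bandwidth parameter eps, and the data layout are our own illustrative choices, not part of the formal development.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def delay_kernel_matrix(F, Q, eps):
    """Gaussian kernel k_Q(x_i, x_j) = exp(-d_Q(x_i, x_j)^2 / eps) from a
    time-ordered data matrix F of shape (N, d). Here d_Q^2 is the average
    of squared snapshot distances over Q delays."""
    N, d = F.shape
    M = N - Q + 1
    # Delay-coordinate map: each row concatenates Q consecutive snapshots.
    X = np.stack([F[i:i + Q].ravel() for i in range(M)])
    # ||X_i - X_j||^2 accumulates the Q squared snapshot distances;
    # dividing by Q matches the normalization of d_Q^2.
    D2 = cdist(X, X, metric="sqeuclidean") / Q
    return np.exp(-D2 / eps)
\end{verbatim}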
Let Assumption <ref> hold, and let F = F_𝒟 + F_𝒟^⊥ be the L^2 decomposition of F from (<ref>). Then, Ψ(F) in (<ref>) is well-defined as a function in L^∞(X×X,μ×μ), and Ψ_Q(F) converges to Ψ(F) in L^p(X×X,μ×μ) norm for 1 ≤ p < ∞. Moreover:
* For every t ∈ ℝ and μ-a.e. x, y ∈ X, Ψ(F)(Φ^t(x),Φ^t(y)) = Ψ(F)(x,y).
* For μ-a.e. x, y ∈ X, Ψ(F)(x,y) = Ψ(F_𝒟^⊥)(x,y) + Ψ(F_𝒟)(x,y).
* Ψ(F_𝒟^⊥) is constant almost everywhere and equals 2‖ F_𝒟^⊥ ‖_L^2^2. Therefore, Ψ(F) = Ψ(F_𝒟) + 2‖ F_𝒟^⊥ ‖_L^2^2. In particular, Ψ(F) ∈ 𝒟⊗𝒟.
If, moreover, Assumption <ref> holds:
* Ψ(F_𝒟) ∈ C^0(X×X), and Ψ_Q(F_𝒟) converges to Ψ(F_𝒟) uniformly on X×X.
* Ψ(F) is uniformly continuous on a full-measure, dense subset of X×X.
* Ψ(F) has a unique continuous extension Ψ̅(F) ∈ C^0(X×X), and Ψ_Q(F) converges to Ψ̅(F) μ-almost uniformly.

To prove that Ψ is well-defined, note that Ψ(F) exists μ-a.e. since it is the pointwise limit of the Birkhoff averages Ψ_Q(F) of the continuous function d_1^2, d_1 : (x,y) ↦ ‖ F(x) − F(y) ‖, with respect to the product flow Φ^t×Φ^t on X×X. By compactness of X×X, each of the functions Ψ_Q(F) is bounded above by ‖ d_1 ‖_C^0(X×X)^2. Therefore, Ψ(F) lies in L^∞(X×X,μ×μ), and thus in L^p(X×X,μ×μ), 1 ≤ p < ∞, since μ×μ is a probability measure. The Ψ_Q(F) → Ψ(F) convergence in L^p(X×X,μ×μ), 1 ≤ p < ∞, then follows from the L^p von Neumann ergodic theorem.

By the invariance of infinite Birkhoff averages, Ψ(F) is invariant under the flow Φ^Δ t×Φ^Δ t. Thus Ψ(F) must lie in the kernel of V⊗V, and is therefore invariant under the flow Φ^t×Φ^t for all t ∈ ℝ, proving Claim (i).

To prove Claim (ii), let x_q and y_q denote Φ^qΔ t(x) and Φ^qΔ t(y), respectively. Let G_𝒟 : X×X → ℝ^d be the map G_𝒟(x,y) = F_𝒟(x) − F_𝒟(y), and similarly define G_𝒟^⊥ : X×X → ℝ^d. Expanding the right-hand side of (<ref>) gives
Ψ(F)(x,y) = lim_Q→∞ (1/Q) ∑_q=0^Q-1 ( ‖ G_𝒟(x_q,y_q) ‖^2 + ‖ G_𝒟^⊥(x_q,y_q) ‖^2 ) + 2 lim_Q→∞ (1/Q) ∑_q=0^Q-1 G_𝒟(x_q,y_q) · G_𝒟^⊥(x_q,y_q),
and the first two terms in the equation above are Ψ(F_𝒟)(x,y) and Ψ(F_𝒟^⊥)(x,y), respectively. Therefore, to prove Claim (ii), it suffices to show that the third term vanishes. This is equivalent to showing that for μ-a.e. x, y ∈ X,
lim_Q→∞ (1/Q) ∑_q=0^Q-1 (F_𝒟^⊥(x_q) − F_𝒟^⊥(y_q)) · (F_𝒟(x_q) − F_𝒟(y_q)) = 0,
which follows from Lemma <ref>. This completes the proof of Claim (ii).

To prove Claim (iii), let x_n and y_n denote Φ^nΔ t(x) and Φ^nΔ t(y), respectively. Then, (<ref>) can be rewritten for F_𝒟^⊥ as
Ψ(F_𝒟^⊥)(x,y) = lim_N→∞ (1/N) ∑_n=0^N-1 ‖ F_𝒟^⊥(x_n) ‖^2 + lim_N→∞ (1/N) ∑_n=0^N-1 ‖ F_𝒟^⊥(y_n) ‖^2 − 2 lim_N→∞ (1/N) ∑_n=0^N-1 F_𝒟^⊥(x_n) · F_𝒟^⊥(y_n).
The first two terms converge to the constant ‖ F_𝒟^⊥ ‖_L^2^2. It is therefore sufficient to show that the last term vanishes. Indeed, since the function J : (x,y) ↦ F_𝒟^⊥(x) · F_𝒟^⊥(y) lies in the continuous-spectrum subspace of the product system (X×X, Φ^t×Φ^t, μ×μ), we have
lim_N→∞ (1/N) ∑_n=0^N-1 F_𝒟^⊥(x_n) · F_𝒟^⊥(y_n) = ⟨ J, 1_X×X ⟩ = 0.

Since F_𝒟 is continuous, Ψ(F_𝒟) is continuous by a classic result of Krengel (<cit.>, Theorem 1.2.7). This proves Claim (iv).

Turning to Claim (v), it follows directly from Claims (iii) and (iv) that there exists a full-measure subset S ⊆ X×X on which Ψ(F) is uniformly continuous. Suppose that S were not dense in X×X. Then, there would exist an open set B ⊂ X×X disjoint from S with positive measure (since X×X is the support of μ×μ, and every open subset of the support of a Borel measure has positive measure), which would in turn imply that (μ×μ)(S) < 1, leading to a contradiction.
Therefore, S is a full-measure, dense subset of X×X, completing the proof of the claim. Finally, the existence of Ψ̅(F) in Claim (vi) follows from the fact that Ψ(F) is uniformly continuous on the dense subset S of the compact metric space X×X, and the almost uniform convergence of Ψ_Q(F) to Ψ̅(F) is a consequence of Egorov's theorem.

Although the measure μ×μ is invariant under Φ^t×Φ^t, it is not ergodic. In fact, it is ergodic iff (Φ^t,μ) is weak-mixing (equivalently, iff U^t has purely continuous spectrum apart from a simple eigenvalue at 1), in which case the metric d_∞ would be constant almost everywhere, in accordance with (<ref>).

Theorem <ref> establishes that the function d_∞ : D(d_∞) → ℝ, such that
d_∞(x,y) := lim_Q→∞ d_Q(x,y), (x,y) ∈ D(d_∞) ⊆ X×X,
is well-defined as a function in L^p(X×X,μ×μ), 1 ≤ p ≤ ∞, with sup d_∞ ≤ ‖ d_1 ‖_C^0(X×X). It can also be verified that d_∞ satisfies the triangle inequality and is non-negative. However, depending on the properties of the dynamical system and observation map, it may be a degenerate metric, as d_∞(x,y) may vanish for some x ≠ y even if d_Q(x,y) is non-vanishing. In fact, it is easy to check that if y lies in the stable manifold of x, then d_∞(x,y) = 0. Analogously to the finite-delay case in (<ref>), we employ d_∞ and the shape function h to define a corresponding kernel k_∞ : M×M → ℝ, where
k_∞(x,y) = h(d_∞(x,y)), (x,y) ∈ D(d_∞),
and k_∞(x,y) = 0 otherwise. We also let K be the kernel integral operator from (<ref>) associated with k_∞. Proposition <ref> shows that the operator K depends only on the quasiperiodic component of F, and is a direct consequence of Theorem <ref> and (<ref>).

Let (X,Φ^t,μ) and F be as in Theorem <ref>. Then, the integral operator K is a constant scaling operator iff its kernel k_∞ is constant μ-a.e., which occurs iff F_𝒟 is constant.

In general, k_∞ may not be continuous. Nevertheless, it has a number of other useful properties, which follow directly from Theorem <ref> in conjunction with the boundedness and continuity of the Gaussian shape function.

Under Assumption <ref>, the following hold:
* k_∞ is the L^p(X×X,μ×μ)-norm limit, 1 ≤ p < ∞, of the sequence of continuous kernels k_1, k_2, ….
* k_∞ is invariant under U^t×U^t for all t ∈ ℝ.
* k_∞ lies in L^∞(X×X,μ×μ), and under Assumption <ref>(ii), 1/k_∞ also lies in that space.
Moreover, if Assumption <ref> additionally holds:
* k_∞ is uniformly continuous on a dense, full-measure subset of X×X.
* k_∞ has a unique continuous representative k̅_∞ ∈ C^0(X×X), and as Q → ∞, k_Q converges to k̅_∞ almost uniformly.

The stronger regularity properties of k_∞ under Assumption <ref> have the following important implications for the behavior of the corresponding integral operator.

Under Assumptions <ref> and <ref>, the kernel integral operator K associated with k_∞ has the following properties:
* For every f ∈ L^2(X,μ), Kf has a unique continuous representative.
* For every f ∈ C^0(X), Kf is continuous.
* ‖ K ‖ ≤ ‖ k_∞ ‖_L^∞(X×X) in either the L^2 or the C^0 operator norm.
* As an operator on C^0(X), K is compact.
* For every f ∈ C^0(X), K_Q f is a sequence of continuous functions converging μ-a.e. to Kf.

(i) Since k_∞ is uniformly continuous on a set S ⊆ X×X of full μ×μ measure, there exists a full μ-measure set X' ⊆ X such that for every x ∈ X', k_∞(x,·) is continuous μ-a.e. on X. Moreover, proceeding analogously to the proof of Theorem <ref>(v), it can be shown that X' is dense in X. Let now f ∈ L^2(X,μ) with ‖ f ‖_L^2 = 1.
Then, for every x_1, x_2 ∈ X',
| Kf(x_1) − Kf(x_2) | = | ∫_X' [ k_∞(x_1,y) − k_∞(x_2,y) ] f(y) dμ(y) | ≤ ‖ k_∞(x_1,·) − k_∞(x_2,·) ‖_L^2 ‖ f ‖_L^2 ≤ ‖ k_∞(x_1,·) − k_∞(x_2,·) ‖_L^∞.
Since k_∞ is uniformly continuous on S, for every ϵ>0 there exists δ>0 such that if d_X(x_1,x_2) < δ, then ‖ k_∞(x_1,·) − k_∞(x_2,·) ‖_L^∞ < ϵ. Thus, for all such x_1 and x_2, we have | Kf(x_1) − Kf(x_2) | < ϵ, which implies that Kf, restricted to X', is uniformly continuous. As a result, since X' is dense in the compact metric space X, Kf|_X' has a unique continuous extension g ∈ C^0(X). Moreover, since X' has full measure, g lies in the same L^2 equivalence class as Kf, proving the claim.

(ii) Since k_∞ is uniformly continuous on a dense set of full measure, for any f ∈ C^0(X), the function g : X×X → ℂ with g(x,y) = k_∞(x,y) f(y) has a unique continuous representative g̅ ∈ C^0(X×X). Therefore, for every x ∈ X, the function k_∞(x,·) f is μ-a.e. equal to g̅(x,·) by μ-a.e. continuity of k_∞(x,·), and
Kf(x) = ∫_X k_∞(x,y) f(y) dμ(y) = ∫_X g̅(x,y) dμ(y).
It then follows that Kf is continuous by continuity of integrals of X-sections of continuous functions on X×X.

(iii) To verify the claim on the L^2 and C^0 operator norms, observe that for every f ∈ L^2(X,μ) and x ∈ X', where X' is as in the proof of Claim (i),
| Kf(x) | ≤ | ∫_X' k_∞(x,y) f(y) dμ(y) | ≤ ‖ k_∞(x,·) ‖_L^2 ‖ f ‖_L^2 ≤ ‖ k_∞(x,·) ‖_L^∞ ‖ f ‖_L^2 ≤ ‖ k_∞ ‖_L^∞(X×X) ‖ f ‖_L^2,
and therefore ‖ Kf ‖_L^∞ ≤ ‖ k_∞ ‖_L^∞(X×X) ‖ f ‖_L^2. The bound on the L^2 operator norm follows by setting ‖ f ‖_L^2 = 1 in (<ref>), together with the fact that ‖ Kf ‖_L^2 ≤ ‖ Kf ‖_L^∞. The bound on the C^0 operator norm follows from (<ref>) with f ∈ C^0(X), in conjunction with the facts that ‖ f ‖_L^2 ≤ ‖ f ‖_C^0 and ‖ Kf ‖_L^∞ = ‖ Kf ‖_C^0.

(iv) Since, by the Arzelà-Ascoli theorem, every equicontinuous sequence of functions on a compact metric space has a limit point, it suffices to show that for every sequence f_n ∈ C^0(X) with ‖ f_n ‖_C^0 ≤ 1, the sequence g_n = Kf_n has a limit point with respect to the C^0 norm. Let k̅_∞ ∈ C^0(X×X) be the unique continuous representative of k_∞. For every x_1, x_2 ∈ X, we have
| g_n(x_1) − g_n(x_2) | ≤ ‖ k̅_∞(x_1,·) − k̅_∞(x_2,·) ‖_C^0,
and by uniform continuity of k̅_∞, for any ϵ>0 there exists δ>0, independent of n, such that for every x_1, x_2 ∈ X with d_X(x_1,x_2) < δ, | g_n(x_1) − g_n(x_2) | < ϵ. This establishes equicontinuity of the g_n, and thus compactness of K on C^0(X).

(v) The continuity of K_Q f and Kf follows from Claim (ii). The μ-a.e. convergence follows from Lemma <ref>(v).

We end this section with two important corollaries of Theorem <ref> and Lemmas <ref>, <ref>, which are central to both Theorems <ref> and <ref>.

The operators U^t and K commute.

Since μ is an invariant measure, for every x ∈ X and t ∈ ℝ we have
Kf(x) = ∫_X k_∞(x,y) f(y) dμ(y) = ∫_X k_∞(x, Φ^t(y)) f(Φ^t(y)) dμ(y).
It therefore follows from Lemma <ref>(ii) that
Kf(x) = ∫_X k_∞(Φ^-t(x), y) f(Φ^t(y)) dμ(y) = U^t* K U^t f(x),
and the claim of the corollary follows.

Under Assumptions <ref> and <ref>(ii), the function ρ = K 1_X is μ-a.e. equal to a constant bounded away from zero (i.e., 1/ρ lies in L^∞(X,μ)). Further, if Assumption <ref> holds, then ρ|_X and 1/ρ|_X are continuous.

Corollary <ref> and the fact that U^t 1_X = 1_X imply that U^t ρ = ρ, and it then follows by ergodicity that ρ is constant μ-a.e. That ‖ 1/ρ ‖_L^∞ is finite follows from Lemma <ref>(iii). Finally, the continuity of ρ under Assumption <ref> is a direct consequence of Lemma <ref>.
§.§ Markov normalization

Next, we construct the Markov operators P_Q and P appearing in Theorems <ref> and <ref> by normalization of K_Q and K. Throughout this section, we consider that Assumptions <ref> and <ref> hold. Under these assumptions, we employ a normalization procedure introduced in the diffusion maps algorithm <cit.> and further developed in <cit.>, although there are also other approaches with the same asymptotic behavior. Specifically, using the normalizing functions ρ_Q and σ_Q from Lemma <ref> and ρ from Corollary <ref>, we introduce the kernels p_Q : M×M → ℝ and p : M×M → ℝ, given by
p_Q(x,y) = k_Q(x,y) / (σ_Q(x) ρ_Q(y)),
p(x,y) = k_∞(x,y)/ρ(x) if ρ(x) > 0, and p(x,y) = 0 otherwise.
By Lemma <ref>, p_Q satisfies the boundedness and continuity properties in Assumption <ref>. On the other hand, p is neither guaranteed to be continuous nor bounded on arbitrary compact sets, but it nevertheless follows from Lemma <ref> and Corollary <ref> that both p and 1/p lie in L^∞(X×X). Based on these facts, we can therefore define the kernel integral operators P_Q : L^2(X,μ) → L^2(X,μ) and P : L^2(X,μ) → L^2(X,μ) from (<ref>) associated with the kernels p_Q and p, respectively, and these operators are both Hilbert-Schmidt (see Section <ref>). Note that p and P have properties analogous to those stated for k_∞ and K in Lemmas <ref>, <ref> and Corollary <ref>. In particular, p is invariant under U^t×U^t, and P commutes with U^t.

The operators P_Q and P can also be obtained directly from K_Q and K, respectively, through the sequence of operations
K̃_Q f := K_Q(f / K_Q 1_X), P_Q f = K̃_Q f / K̃_Q 1_X, P f = K f / K 1_X.
In <cit.>, the steps leading to K̃_Q from K_Q and to P_Q from K̃_Q are called right and left normalization, respectively. In the case of P, the effects of right normalization cancel, since K 1_X is μ-a.e. constant by Corollary <ref>, so it is sufficient to construct this operator directly via left normalization of K.

As is evident from (<ref>), P_Q and P are both Markov operators preserving constant functions. Moreover, for all x ∈ M we have ∫_X p_Q(x,·) dμ = 1, and for μ-a.e. x ∈ M, ∫_X p(x,·) dμ = 1; i.e., both p_Q and p are transition probability kernels. In particular, since X is compact and p_Q and p are essentially bounded below, P_Q and P are both ergodic Markov operators; that is, their eigenspaces at eigenvalue 1 are one-dimensional. The Markov kernel p is μ-a.e. symmetric by the symmetry of k_∞ and the fact that ρ is μ-a.e. constant. As a result, P is self-adjoint, its eigenvalues admit the ordering 1 = λ_0 > λ_1 ≥ λ_2 ≥ ⋯, and there exists a real orthonormal basis of L^2(X,μ) consisting of corresponding eigenfunctions ϕ_j, with ϕ_0 constant. On the other hand, because p_Q is not symmetric, the operator P_Q is not self-adjoint, but it is nevertheless related to a self-adjoint operator via a similarity transformation by a bounded multiplication operator with a bounded inverse. To verify this, define
σ̃_Q = σ_Q/ρ_Q, σ̂_Q = √(σ_Q ρ_Q),
where ρ_Q and σ_Q are as in Lemma <ref>.
Let also D_Q be the multiplication operator which multiplies by σ̃_Q, and P̂_Q the kernel integral operator with kernel p̂_Q : M×M → ℝ,
p̂_Q(x,y) = k_Q(x,y) / (σ̂_Q(x) σ̂_Q(y)).
Observe now that P̂_Q is a symmetric operator, and P_Q is related to P̂_Q via the similarity transformation
P̂_Q = D_Q^1/2 P_Q D_Q^-1/2;
that is, for every f ∈ L^2(X,μ),
D_Q^1/2 P_Q D_Q^-1/2 f(x) = ∫_X √(σ_Q(x)/ρ_Q(x)) [ k_Q(x,y) / (σ_Q(x) ρ_Q(y)) ] f(y) √(ρ_Q(y)/σ_Q(y)) dμ(y) = ∫_X [ k_Q(x,y) / (σ̂_Q(x) σ̂_Q(y)) ] f(y) dμ(y) = P̂_Q f(x).
The following are useful properties of P̂_Q that follow from its relation to P_Q.
* P̂_Q has the same discrete spectrum as P_Q, consisting of eigenvalues λ_j,Q with 1 = λ_0,Q > λ_1,Q ≥ λ_2,Q ≥ ⋯.
* Let ϕ_j,Q denote the eigenfunctions of P̂_Q corresponding to the nonzero eigenvalues λ_j,Q. These form an orthonormal basis for the closure of the range of P̂_Q, i.e., the orthogonal complement of its null space. Moreover, the ϕ_j,Q can be chosen to be real-valued.
* The eigenfunction ϕ_0,Q of P̂_Q is equal, up to a proportionality constant, to σ̃_Q^1/2 = (σ_Q/ρ_Q)^1/2.

In applications, it may be the case that ρ_Q and 1/ρ_Q take a large range of values. In such situations, it may be warranted to replace (<ref>) by a variable-bandwidth kernel of the form k_Q(x,y) = exp(−d_Q^2(x,y) / (ϵ r_Q(x) r_Q(y))), with a bandwidth function r_Q introduced so as to control the decay of the kernel away from the diagonal x = y. Various types of bandwidth functions have been proposed in the literature, including functions based on neighborhood distances <cit.>, state space velocities <cit.>, and local density estimates <cit.>. While we do not study variable-bandwidth techniques in this work, our approach should be applicable in that setting too, so long as Corollary <ref> holds.
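In matrix terms, the normalization and symmetrization steps above can be sketched as follows (Python/NumPy/SciPy). Integrals against the sampling measure are approximated by empirical means, anticipating the data-driven setting of Section <ref>; the function names are illustrative, and a sparse symmetric solver could replace the dense one for large kernel matrices.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def markov_normalize(K):
    """Right/left normalization of a positive kernel matrix K, as in the
    diffusion maps algorithm. Integrals against mu_N are empirical means,
    so the operator action of a matrix A on f is (A @ f) / N."""
    rho = K.mean(axis=1)                # rho_Q = K_Q 1
    K_tilde = K / rho[np.newaxis, :]    # right: K~ f = K_Q(f / rho_Q)
    sigma = K_tilde.mean(axis=1)        # sigma_Q = K~ 1
    P = K_tilde / sigma[:, np.newaxis]  # left normalization
    # Markov property w.r.t. mu_N: (1/N) * P.sum(axis=1) == 1.
    return P, rho, sigma

def symmetric_basis(K, n_eig):
    """Eigenpairs of the symmetric operator P_hat with kernel
    k_Q(x_i,x_j) / (sigma_hat(x_i) sigma_hat(x_j)),
    sigma_hat = sqrt(sigma_Q * rho_Q). The eigenvectors of the Markov
    matrix are recovered through P_hat = D^{1/2} P D^{-1/2}."""
    N = len(K)
    _, rho, sigma = markov_normalize(K)
    sigma_hat = np.sqrt(sigma * rho)
    P_hat = K / np.outer(sigma_hat, sigma_hat)
    evals, vecs = eigh(P_hat / N)       # operator eigenvalues
    idx = np.argsort(evals)[::-1][:n_eig]
    lam = evals[idx]
    phi = vecs[:, idx] * np.sqrt(N)     # <phi_j, phi_j>_{mu_N} = 1
    return lam, phi
\end{verbatim}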
§ PROOF OF THEOREMS <REF>–<REF> AND COROLLARY <REF>

Proof of Theorem <ref>. That P and U^t commute follows from the invariance of p under U^t×U^t and a calculation analogous to that in the proof of Corollary <ref>. Next, as Q→∞, p_Q converges to p in any L^p(X×X,μ×μ) norm with 1 ≤ p < ∞, by the analog of Lemma <ref>(i) that holds for these kernels (see Section <ref>). In particular, the fact that p_Q converges to p in L^2(X×X,μ×μ) norm implies that P_Q converges to P in L^2(X,μ) operator norm, since P_Q − P is Hilbert-Schmidt and thus bounded in operator norm by ‖ p_Q − p ‖_L^2(X×X).

Proof of Theorem <ref>. We first establish that τ is a.e. invariant under Φ^t×Φ^t. Since the integral operator T commutes with U^t, for μ-a.e. x ∈ X,
∫_X τ(Φ^t(x),Φ^t(y')) f(Φ^t(y')) dμ(y') = ∫_X τ(Φ^t(x),y) f(y) dμ(y) = U^t T f(x) = T(U^t f)(x) = ∫_X τ(x,y') f(Φ^t(y')) dμ(y'),
where the second equality was obtained by the change of variables y = Φ^t(y') and utilizes the invariance of the measure μ under Φ^t. The only way the terms at the two ends of the equation can be equal for μ-a.e. x ∈ X is if τ(Φ^t(x),Φ^t(y')) = τ(x,y') μ-a.e.

Next, observe that, by (<ref>), the space L^2(X×X,μ×μ) splits as the U^t×U^t-invariant orthogonal sum of 𝒟⊗𝒟, 𝒟^⊥⊗𝒟^⊥, 𝒟^⊥⊗𝒟, and 𝒟⊗𝒟^⊥. Since τ is an L^2 kernel, it has orthogonal projections onto each of these subspaces, all of which are U^t×U^t-invariant by the invariance of τ just established. By symmetry of τ, the projections onto 𝒟^⊥⊗𝒟 and 𝒟⊗𝒟^⊥ vanish. Moreover, the projection τ_𝒟^⊥⊗𝒟^⊥ ∈ 𝒟^⊥⊗𝒟^⊥ is orthogonal to constant functions, and it follows by the Birkhoff ergodic theorem that for μ×μ-a.e. (x,y) ∈ X×X,
0 = ⟨ 1_X×X, τ_𝒟^⊥⊗𝒟^⊥ ⟩ = lim_N→∞ (1/N) ∑_n=0^N-1 τ_𝒟^⊥⊗𝒟^⊥(Φ^nΔ t(x), Φ^nΔ t(y)) = lim_N→∞ (1/N) ∑_n=0^N-1 τ_𝒟^⊥⊗𝒟^⊥(x,y) = τ_𝒟^⊥⊗𝒟^⊥(x,y).
This completes the proof of Claim (i).

The statements in Claim (ii), namely that 𝒟^⊥ lies in the kernel of T and that 𝒟 and 𝒟^⊥ are invariant under T, are direct consequences of Claim (i). The remaining two claims of the theorem, which require that both 𝒟 and the range of T contain non-constant functions, can be proved by means of the following, slightly stronger, result.

For any nonzero eigenvalue λ of T, the corresponding eigenspace W_λ is invariant under the action of the Koopman generator V, and V|_W_λ is diagonalizable. Moreover, the constant function 1_X is an eigenfunction of T. If W_λ does not contain 1_X, its dimension is an even number.

Since T is compact, every nonzero eigenvalue λ has finite multiplicity, and its corresponding eigenspace W_λ has finite dimension l = dim W_λ. Since U^t commutes with T, U^t and hence V leave W_λ invariant. Similarly, since the constant function is an eigenfunction of V, it is an eigenfunction of T. Let λ_0 be the eigenvalue of T corresponding to the constant eigenfunction, and let λ ≠ λ_0 be any other eigenvalue of T. Then, V|_W_λ is a skew-symmetric operator on a finite-dimensional space, and thus can be diagonalized with respect to a basis of simultaneous eigenfunctions of T and V. Fix any element ζ of this basis. By our choice of λ, ζ is a non-constant eigenfunction of V, hence ⟨ζ,1⟩ = 0. Therefore, by ergodicity of (Φ^t,μ), Vζ = iωζ for some ω ≠ 0. This implies that ζ has nonzero real and imaginary parts. Hence, the conjugate ζ^* is linearly independent from ζ and corresponds to the eigenvalue −iω of V. However, since T is a real operator, ζ^* lies in W_λ. We therefore conclude that W_λ splits into disjoint two-dimensional subspaces, each spanned by a conjugate pair of eigenfunctions ζ and ζ^*. Therefore, dim W_λ is an even number.

The representation of V restricted to ran P in the basis {ϕ_0, ϕ_1, …} has a block-diagonal structure, consisting of even-sized blocks associated with the eigenspaces W_λ, λ ≠ 1, and a 1×1 block containing the element 0, associated with W_1. Moreover, the range of P lies in the domain of V, and V|_ran P and P|_ran P are simultaneously diagonalizable.

By Proposition <ref>, U^t and T have joint eigenfunctions, each of which factors the dynamics onto a rotation on the circle in accordance with (<ref>). According to Proposition <ref>, any collection of D such eigenfunctions factors the dynamics onto a rotation on 𝕋^D. This proves Claim (iii).

To prove Claim (iv), we use (<ref>) to expand the kernel as
τ = ∑_a⃗,b⃗∈ℤ^m τ̃_a⃗b⃗ z_a⃗ ⊗ z_b⃗.
In this expansion, there is a minimal number D ≤ m of generating eigenfunctions z_j from (<ref>), arranged without loss of generality as z_1, …, z_D, such that the expansion coefficients τ̃_a⃗b⃗ corresponding to a⃗ = (a_1,…,a_m) and b⃗ = (b_1,…,b_m) with nonzero a_D+1,…,a_m and b_D+1,…,b_m, respectively, vanish (in other words, the kernel τ does not project onto the subspaces generated by z_D+1,…,z_m and their powers). By Proposition <ref>, the Koopman eigenfunctions corresponding to non-vanishing τ̃_a⃗b⃗ can be expressed as z_a⃗ = ζ_a⃗∘π, where the ζ_a⃗ are smooth Koopman eigenfunctions on 𝕋^D associated with an ergodic rotation. Thus, denoting the index set of the nonzero τ̃_a⃗b⃗ coefficients by I ⊂ ℤ^m×ℤ^m, we have τ(x,y) = τ̂(π(x), π(y)) for μ×μ-a.e. (x,y) ∈ X×X, where τ̂ is the L^2 kernel on 𝕋^D given by
τ̂ = ∑_(a⃗,b⃗)∈I τ̃_a⃗b⃗ ζ_a⃗ ⊗ ζ_b⃗.
This completes the proof of Claims (iv) and (v), and of Theorem <ref>.

Proof of Theorem <ref>. That p is uniformly continuous on a full-measure, dense subset of X×X follows from the analog of Lemma <ref>(iv) that holds for p (see Section <ref>).
Claims (i)–(iv) of the theorem follow analogously to Lemma <ref>.

Rates of convergence in the continuous case. As an auxiliary result, we state a lemma that establishes rates of convergence, with respect to the number of delays Q, of the kernel integral operators studied in this work.

Let the assumptions of Theorem <ref> hold, and let the shape function h from (<ref>) be continuously differentiable. Then, the following commutators converge to 0 in C^0(X) operator norm as Q→∞, at the rates given below:
* U^Δ t K_Q − K_Q U^Δ t = O(Q^-1),
* U^Δ t K̃_Q − K̃_Q U^Δ t = O(Q^-1),
* U^Δ t P_Q − P_Q U^Δ t = O(Q^-1).

Let F̃_Q,Δ t(x,y) := ‖ F(Φ^QΔ t(x)) − F(Φ^QΔ t(y)) ‖^2 − ‖ F(x) − F(y) ‖^2, and notice that by continuity of F and compactness of X this quantity is bounded on X×X. Note that (i) d_Q^2(Φ^Δ t(x), Φ^Δ t(y)) = d_Q^2(x,y) + Q^-1 F̃_Q,Δ t(x,y); and (ii) h(u + Δu) = h(u) + Δu h'(u) + o(Δu) as Δu → 0. Thus, writing the kernel as a C^1 function of the squared pseudo-distance (as for the Gaussian kernels in (<ref>)),
k_Q(Φ^Δ t(x), Φ^Δ t(y)) = k_Q(x,y) + O(Q^-1),
where the estimate holds uniformly with respect to x, y ∈ X. Therefore, for every f ∈ L^2(X,μ) and x ∈ X we have
U^Δ t K_Q f(x) = ∫_X k_Q(Φ^Δ t(x),y) f(y) dμ(y) = ∫_X k_Q(Φ^Δ t(x),Φ^Δ t(y)) f(Φ^Δ t(y)) dμ(y) = ∫_X [ k_Q(x,y) + O(Q^-1) ] (U^Δ t f)(y) dμ(y),
where we have used the fact that μ is an invariant measure in the second equality. Since k_Q is continuous, it follows from the Cauchy-Schwarz inequality that ‖ K_Q f ‖_C^0 ≤ ‖ k_Q ‖_C^0(X×X) ‖ f ‖_L^2. Substituting this result into the right-hand side and taking the supremum over x ∈ X yields
‖ (U^Δ t K_Q − K_Q U^Δ t) f ‖_C^0 = O(Q^-1) ‖ f ‖_L^2.
Claim (i) then follows from the fact that ‖·‖_L^2 ≤ ‖·‖_C^0. Claims (ii) and (iii) can be proved in a similar manner.

§ GALERKIN APPROXIMATION OF KOOPMAN EIGENVALUE PROBLEMS

In this section, we formulate a Galerkin method for the eigenvalue problem of the Koopman generator V in the eigenbasis of P, under the implicit assumption that the latter operator is available to us through P_Q with a large number of delays Q. The task of finding the eigenvalues of V poses two challenges: (i) V is an unbounded operator defined on a proper subspace D(V) ⊂ L^2(X,μ) which is not known a priori; (ii) the spectrum of V could be dense in iℝ (even for a pure point spectrum system such as an ergodic rotation on 𝕋^D with D ≥ 2; e.g., <cit.>, Remark 8), in which case solving for its eigenvalues is a numerically ill-posed problem. Following <cit.>, we will address these issues by employing a Galerkin scheme for the eigenvalue problem of V, with a small amount of judiciously constructed diffusion added for regularization. Throughout this section, we consider that Assumptions <ref>, <ref>, and <ref> hold. Further, we assume the following.

The kernels k_Q, and thus k_∞, are symmetric and positive-definite. That is, (i) k_Q(x,y) = k_Q(y,x) for every x,y ∈ M; (ii) for every x_0, x_1, …, x_n ∈ M and c_0, c_1, …, c_n ∈ ℂ, ∑_i,j=0^n c_i^* k_Q(x_i,x_j) c_j ≥ 0; and (iii) the analogous conditions hold for k_∞.

Our approach has the following steps.

Step 1. Sobolev spaces. We first construct the subspaces of L^2 in which we search for eigenfunctions. These spaces will be shown to be dense in H, defined as the closed subspace of ran P orthogonal to constant functions (that is, H consists only of zero-mean functions). Note that {ϕ_j}_j∈J, where J is an index set for the nonzero eigenvalues λ_j of P strictly less than 1, is an orthonormal basis of H. For any p ≥ 0, we define
H^p = { ∑_j∈J c_j ϕ_j ∈ H : ∑_j∈J | c_j |^2 |η_j|^p < ∞ }, η_j = (λ_j^-1 − 1) / (λ_1^-1 − 1).
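In computations, the η_j and the induced measure of roughness (the Dirichlet energy introduced in Step 3 below) are obtained directly from the kernel eigenvalues; a minimal Python/NumPy sketch, with illustrative function names and the convention that the input eigenvalues are sorted in decreasing order with λ_0 = 1:
\begin{verbatim}
import numpy as np

def eta_from_lambda(lam):
    # eta_j = (1/lambda_j - 1) / (1/lambda_1 - 1); lam[0] = 1 corresponds
    # to the constant eigenfunction and is excluded from the index set J.
    return (1.0 / lam[1:] - 1.0) / (1.0 / lam[1] - 1.0)

def dirichlet_energy(c, eta):
    # E(f) = sum_j eta_j |c_j|^2 for f = sum_j c_j phi_j (zero-mean part).
    return np.sum(eta * np.abs(c) ** 2)
\end{verbatim}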
The spaces H^p are analogous to the usual Sobolev spaces associated with self-adjoint, positive-semidefinite, unbounded operators with compact resolvents and discrete spectra (here, {η_j}_j∈J). In particular, when (X,g) is a smooth Riemannian manifold with a metric tensor g whose Riemannian measure satisfies vol_g = μ, and the (η_j, ϕ_j) are the eigenvalues and orthonormal eigenfunctions of the corresponding Laplace-Beltrami operator, then H^p becomes the canonical Sobolev space H^p(X,g), restricted to be orthogonal to constant functions.

H^p from (<ref>) is a Hilbert space with the inner product
⟨ f, g ⟩_H^p := ∑_q=0^p ∑_j∈J c_j^* d_j |η_j|^q,
where f = ∑_j∈J c_j ϕ_j and g = ∑_j∈J d_j ϕ_j. Moreover, {ϕ_j^(p)}_j∈J with ϕ_j^(p) = ϕ_j / ‖ϕ_j‖_H^p, ‖ϕ_j‖_H^p^2 = ∑_q=0^p η_j^q, forms an orthonormal basis of H^p.

For every p>0, the space H^p is dense in H, and moreover the inclusion map H^p → H, and thus H^p → L^2(X,μ), is compact.

To see that H^p is dense, note that H^p includes all finite linear combinations of the ϕ_j. Since the ϕ_j are an orthonormal basis of H, these finite linear combinations are dense in H. Next, the embedding of H^p in H can be represented by a diagonal operator G : H^p → H such that G_jj := ⟨ϕ_j, Gϕ_j^(p)⟩ = η_j^-p/2. This operator is compact iff G_jj converges to 0 as j → ∞. This is true by (<ref>) below. The compactness of the inclusion H^p → L^2(X,μ) follows immediately.

Step 2. Regularized generator. For every θ>0, we define the unbounded operators Δ : D(Δ) → H and L_θ : D(L_θ) → H, where D(Δ) = D(L_θ) ⊂ D(V), and
Δ := f ↦ ∑_j∈J η_j ⟨ϕ_j, f⟩ ϕ_j, L_θ := V|_D(Δ) − θΔ.
As we will see in Step 3 below, the role of the diffusion term θΔ is to penalize eigenfunctions of V with large values of a Dirichlet energy functional. Theorem <ref> below identifies a domain on which the operators in (<ref>) are bounded, and establishes that the eigensolutions of L_θ converge to eigensolutions of V as θ → 0.

Viewed as operators from H^2 to H, the generator V, as well as the operators L_θ and Δ from (<ref>), are bounded. In particular, we can set D(Δ) = D(L_θ) = H^2. Finally, for every eigenvalue iω of V whose corresponding eigenspace lies in H^2, there exists an eigenvalue η of Δ such that the smooth curve θ ↦ γ_θ := iω − θη consists of eigenvalues γ_θ of L_θ, converging to iω as θ → 0^+.

First, by Corollary <ref>, we may assume without loss of generality that the basis {ϕ_j}_j∈J of H consists of simultaneous eigenfunctions of V and P (and thus of Δ). To verify that V is a bounded operator on H^2, first observe that ω_j = O(j), which follows from the fact that the eigenvalues of V are integer linear combinations of finitely many rationally independent frequencies (by Assumption <ref>; see also Section <ref>). By Theorem <ref>, the kernel p associated with P is L^2-integrable, and thus, by a result of Ferreira and Menegatto on integral operators (<cit.>, Corollary 2.5), λ_j = o(j^-1). Combining these estimates, we obtain
j = o(η_j), ω_j = o(η_j), η_j^-1 = o(j^-1),
and therefore deduce that there exists a constant C>0 such that |ω_j| ≤ Cη_j for all j∈J. Hence, for f = ∑_j∈J c_j ϕ_j ∈ H^2,
‖ Vf ‖^2 = ‖ ∑_j∈J c_j Vϕ_j ‖^2 = ‖ ∑_j∈J i c_j ω_j ϕ_j ‖^2 ≤ C^2 ∑_j∈J | c_j |^2 |η_j|^2 ≤ C^2 ‖ f ‖_H^2^2,
proving that V is a bounded operator from H^2 to H. The same reasoning applies to L_θ and Δ. Finally, to establish convergence of the eigenvalues of L_θ to those of V|_H^2, let iω_j be the eigenvalue of V corresponding to ϕ_j.
Then, by the definition of L_θ and of the basis {ϕ_j}_j∈J, L_θ ϕ_j = Vϕ_j − θΔϕ_j = (iω_j − θη_j) ϕ_j, and the claim follows immediately. This completes the proof of Theorem <ref>.

Theorem <ref> establishes that H^2 is a domain on which V is a bounded operator, but if X had a smooth manifold structure, it would be possible to show that the standard H^1 Sobolev space associated with a Riemannian metric on X is also a suitable domain. In this work, X has no smooth structure, and we can state Theorem <ref> above only for V|_H^2. In separate calculations, we have observed that an analog of the weak eigenvalue problem for L_θ formulated in H^1 × H^1 actually performs well numerically.

Step 3. Galerkin method. By virtue of Theorem <ref>, the eigenvalues of L_θ can be considered approximations of the eigenvalues of V. We take a Galerkin approach to finding the eigenvalues of L_θ, solving for z ∈ H^2 and γ ∈ ℂ in the following variational (weak) eigenvalue problem:

Find γ ∈ ℂ and z ∈ H^2 such that for all f ∈ H, A(f,z) = γ⟨ f, z ⟩, where A : H × H^2 → ℂ is the sesquilinear form defined by
A(g,f) = ⟨ g, L_θ f ⟩ = ⟨ g, Vf ⟩ − θE(g,f), E(g,f) = ⟨ g, Δf ⟩.

In the above, the form E : H × H^2 → ℂ induces a Dirichlet energy functional E(f) = E(f,f), f ∈ H^2, providing a measure of roughness of functions in H^2. In particular, if X were a smooth Riemannian manifold, and the (η_j, ϕ_j) were set to Laplace-Beltrami eigenvalues and eigenfunctions, respectively, we would have E(f) = ∫_X ‖ ∇f ‖^2 dμ. While the lack of smoothness of X in our setting precludes us from defining E by means of a gradient operator, its definition in terms of the η_j from (<ref>) still provides a meaningful measure of roughness. For instance, it follows from results in spectral graph theory that the variance of estimates η_j^(N) of the η_j computed from finite data sets (e.g., as described in Section <ref> ahead) increases with j <cit.>, which is consistent with the intuitive expectation that rough (highly oscillatory) functions require larger numbers of samples for accurate approximation.

Following <cit.>, we order all solutions (γ_j, z_j) of the problem in Definition <ref> in order of increasing Dirichlet energy E(z_j). Since Re A(f,f) = −θE(f,f) by the skew-symmetry of V, we can compute the Dirichlet energy of a normalized eigenfunction z_j directly from the corresponding eigenvalue, viz. E(z_j) = −Re γ_j / θ. Similarly, the corresponding eigenfrequency is given by ω_j = Im γ_j. By (<ref>), there exist constants C_1, C_2 > 0 such that
C_2 ≤ |iω_j − θη_j| / |η_j| ≤ C_1, ∀ j ∈ J.
To justify the well-posedness of the eigenvalue problem in Definition <ref>, we state three important properties of A (possibly after decreasing the constant C_2), namely,
|A(u,v)| ≤ C_1 ‖u‖_H ‖v‖_H^2, ∀ u ∈ H, ∀ v ∈ H^2,
sup_{f∈H, ‖f‖_H=1} |A(f,v)| ≥ C_2 ‖v‖_H^2, ∀ v ∈ H^2,
sup_{g∈H^2, ‖g‖_H^2=1} |A(u,g)| ≥ C_2 ‖u‖_H, ∀ u ∈ H.
We now give brief proofs of these results. In what follows, v = ∑_j∈J d_j ϕ_j and u = ∑_j∈J c_j ϕ_j will be arbitrary functions in H^2 and H, respectively. Moreover, as in the proof of Theorem <ref>, we will assume that the basis {ϕ_j}_j∈J consists of simultaneous eigenfunctions of V and Δ. First, note that
| A(u,v) | = | ∑_j∈J (iω_j − θη_j) c_j^* d_j | ≤ ∑_j∈J |iω_j − θη_j| |c_j^* d_j|.
By the Cauchy-Schwarz inequality on ℓ^2 and (<ref>),
| A(u,v) | ≤ C_1 ∑_j∈J |η_j| |c_j^* d_j| ≤ C_1 ‖u‖_H ‖v‖_H^2,
proving (<ref>). To prove (<ref>), let f = ∑_j∈J a_j ϕ_j ∈ H. Then, the left-hand side of that equation becomes the supremum of | ∑_j∈J (iω_j/η_j − θ) η_j a_j^* d_j |.
Let R_j := iω_j/η_j − θ, where |R_j| ≥ C_2 by (<ref>). By the Cauchy-Schwarz inequality, under the constraint ∑_j∈J |a_j|^2 = 1, the sum | ∑_j∈J a_j^* R_j η_j d_j | attains the maximum value (∑_j∈J |R_j η_j d_j|^2)^1/2. Therefore,
sup_{f∈H, ‖f‖_H=1} | A(f,v) | = sup_{∑_j |a_j|^2=1} | ∑_j∈J a_j^* d_j R_j η_j | = (∑_j∈J |R_j η_j d_j|^2)^1/2 ≥ C_2 (∑_j∈J |η_j d_j|^2)^1/2,
and since η_j ≥ 1 for all j ∈ J, we have ∑_j∈J |η_j d_j|^2 ≥ ‖v‖_H^2^2 / 3, so that (<ref>) holds with C_2 replaced by C_2/√3. This proves (<ref>). The proof of (<ref>) is similar to that of (<ref>), with f replaced by a trial function g = ∑_j∈J b_j ϕ_j^(2) ∈ H^2 and the constraint ‖g‖_H^2^2 = ∑_j∈J |b_j|^2 = 1. A direct consequence of (<ref>) and (<ref>) is
inf_{v∈H^2, ‖v‖_H^2=1} sup_{u∈H, ‖u‖_H=1} |A(u,v)| ≥ C_2, inf_{u∈H, ‖u‖_H=1} sup_{v∈H^2, ‖v‖_H^2=1} |A(u,v)| ≥ C_2.

Equations (<ref>), (<ref>), (<ref>), and the compact embedding of H^2 in H by Proposition <ref> together guarantee that the eigenvalues of A restricted to the finite-dimensional subspaces of H × H^2 spanned by the leading m eigenfunctions ϕ_1, …, ϕ_m converge, as m → ∞, to the weak eigenvalues of L_θ. See <cit.>, Section 8, for an exposition of this classical result. The resulting finite-dimensional Galerkin approximations of the weak eigenvalue problem for L_θ can be summarized as follows:

Set H̃_m = span{ϕ_1, …, ϕ_m} and H̃_m^2 = span{ϕ_1^(2), …, ϕ_m^(2)}, m ≥ 1. Then, find γ ∈ ℂ and z ∈ H̃_m^2 such that for all f ∈ H̃_m, A(f,z) = γ⟨ f, z ⟩, where the sesquilinear form A : H × H^2 → ℂ is as in Definition <ref>.

This problem is equivalent to solving the matrix generalized eigenvalue problem
A c⃗ = λ B c⃗,
where A and B are m × m matrices with elements
A_ij = A(ϕ_i, ϕ_j^(2)) = V_ij/η_j − θΔ_ij, V_ij = ⟨ϕ_i, Vϕ_j⟩, Δ_ij = δ_ij,
B_ij = ⟨ϕ_i, ϕ_j^(2)⟩ = η_i^-1 δ_ij,
respectively, and c⃗ = (c_1, …, c_m)^⊤ is a column vector in ℂ^m containing the expansion coefficients of the solution z in the {ϕ_j^(2)} basis of H̃_m^2, viz. z = ∑_j=1^m c_j ϕ_j^(2). It is important to note that, unlike the proofs of Theorem <ref> and (<ref>)–(<ref>), in (<ref>) we do not require that the ϕ_j be simultaneous eigenfunctions of V and P. This concludes the description of our Galerkin approximation of the eigenvalue problem for L_θ, and therefore for V.
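Given the matrix elements V_ij and the η_j, the generalized eigenvalue problem above can be assembled and solved with standard dense linear algebra. A minimal Python/SciPy sketch follows; the function name is illustrative, and the ordering convention E(z_j) = −Re γ_j/θ is the one derived above.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def galerkin_eigs(V, eta, theta):
    """Solve A c = gamma B c with A_ij = V_ij/eta_j - theta*delta_ij and
    B = diag(1/eta_i), where V_ij = <phi_i, V phi_j> are generator matrix
    elements in the kernel eigenbasis. Eigenpairs are returned in order
    of increasing Dirichlet energy E(z_j) = -Re(gamma_j)/theta."""
    m = len(eta)
    A = V / eta[np.newaxis, :] - theta * np.eye(m)
    B = np.diag(1.0 / eta)
    gamma, C = eig(A, B)
    order = np.argsort(-gamma.real / theta)  # increasing Dirichlet energy
    return gamma[order], C[:, order]
\end{verbatim}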
§ DATA-DRIVEN APPROXIMATION

In this section, we discuss the numerical procedures used to approximate the integral operators described in Sections <ref>, <ref>, and to implement the Galerkin method of Section <ref>, using a finite, time-ordered dataset of observations (F(x_n))_n=0^N-1. In addition, we prove Theorem <ref>. Throughout this section, we assume that Assumptions <ref>–<ref> hold. In particular, by Assumption <ref>, we can assume without loss of generality that the underlying trajectory (x_n)_n=0^N-1 starts at a point x_0 in the compact set 𝒰 (for, if x_0 were to lie in 𝒱∖𝒰, the trajectory would enter 𝒰 after finitely many steps, and its portion lying in 𝒱∖𝒰 would not affect the asymptotic behavior of our schemes as N→∞). Besides this assumption, the trajectory (x_n)_n=0^N-1 is assumed to be unknown; note that it need not lie on X.

For the purposes of the analysis that follows, it will be important to distinguish between operators that act on L^2 and C^0 spaces. Specifically, to every kernel k : M×M → ℝ satisfying Assumption <ref>, we assign a bounded operator K' : L^2(X,μ) → C^0(𝒰), acting on f ∈ L^2(X,μ) via the same integral formula as in (<ref>), but with the image K'f understood as an everywhere-defined, continuous function on 𝒰. With this definition, the operator K : L^2(X,μ) → L^2(X,μ) acting on L^2 equivalence classes can be expressed as K = ι∘K', where ι : C^0(𝒰) → L^2(X,μ) is the canonical L^2 inclusion map on C^0(𝒰), and we can also define an analog K'' : C^0(𝒰) → C^0(𝒰) acting on continuous functions via K'' = K'∘ι. It can be verified using the Arzelà-Ascoli theorem that K'' is compact.

Data-driven Hilbert spaces. Let μ_N := N^-1 ∑_n=0^N-1 δ_x_n be the sampling probability measure associated with the finite trajectory (x_n)_n=0^N-1. The compact set 𝒰 from Assumption <ref> always contains the support of μ_N. Moreover, since x_0 lies in the basin of the physical measure μ, as N→∞, μ_N converges weakly to μ, in the sense that
lim_N→∞ ∫_𝒰 f dμ_N = ∫_X f dμ, ∀ f ∈ C^0(𝒰).

Our data-driven analog of the space L^2(X,μ) will be L^2(𝒰,μ_N): the set of equivalence classes of complex-valued functions on M which are square-summable and have common values at the sampled states x_n. Note that L^2(𝒰,μ_N) ≅ ℂ^N, and therefore every element f ∈ L^2(𝒰,μ_N) can be represented in the canonical basis of ℂ^N by an N-vector f⃗ = (f(x_0), …, f(x_N-1)). In fact, L^2(𝒰,μ_N) is the image of C^0(𝒰) under the restriction map π_N : C^0(𝒰) → L^2(𝒰,μ_N), where π_N f = (f(x_0), …, f(x_N-1)). Moreover, given any f, g ∈ L^2(𝒰,μ_N), we have ⟨ f, g ⟩_L^2(𝒰,μ_N) = f⃗·g⃗/N, where · denotes the canonical inner product on ℂ^N.

Kernel integral operators. In the data-driven setting, given a continuous kernel k : M×M → ℝ, we define a kernel integral operator K'_N : L^2(𝒰,μ_N) → C^0(𝒰) by (cf. (<ref>))
K'_N f(x) = ∫_𝒰 k(x,y) f(y) dμ_N(y) = (1/N) ∑_n=0^N-1 k(x,x_n) f(x_n),
and we also set K_N : L^2(𝒰,μ_N) → L^2(𝒰,μ_N) and K''_N : C^0(𝒰) → C^0(𝒰) with K_N = π_N∘K'_N and K''_N = K'_N∘π_N. Note that K_N can be represented by an N×N matrix K with elements K_ij = k(x_i,x_j). In this representation, the function g = K_N f, f ∈ L^2(𝒰,μ_N), is represented by g⃗ = K f⃗/N.

When k = k_Q from (<ref>), one can similarly define operators K'_Q,N : L^2(𝒰,μ_N) → C^0(𝒰), K_Q,N : L^2(𝒰,μ_N) → L^2(𝒰,μ_N), and K''_Q,N : C^0(𝒰) → C^0(𝒰). This family of operators has properties analogous to those stated for K_Q in Lemma <ref>; namely, the functions ρ_Q,N = K''_Q,N 1_𝒰 and σ_Q,N = K''_Q,N(1/ρ_Q,N) are both continuous, positive, and bounded away from zero on 𝒰. Therefore, one can define a kernel p_Q,N : M×M → ℝ by
ρ_Q,N = K''_Q,N 1_𝒰, σ_Q,N = K''_Q,N(1/ρ_Q,N), p_Q,N(x,y) = k_Q(x,y) / (σ_Q,N(x) ρ_Q,N(y)).
The kernel p_Q,N has the Markov property, i.e., ∫_𝒰 p_Q,N(x,·) dμ_N = 1 for every x ∈ M. Associated to p_Q,N are the Markov operators P'_Q,N : L^2(𝒰,μ_N) → C^0(𝒰), P_Q,N : L^2(𝒰,μ_N) → L^2(𝒰,μ_N), and P''_Q,N : C^0(𝒰) → C^0(𝒰). Moreover, P_Q,N is related to the self-adjoint operator P̂_Q,N : L^2(𝒰,μ_N) → L^2(𝒰,μ_N) with kernel p̂_Q,N : M×M → ℝ,
p̂_Q,N(x,y) = k_Q(x,y) / (σ̂_Q,N(x) σ̂_Q,N(y)), σ̂_Q,N = (σ_Q,N ρ_Q,N)^1/2, σ̃_Q,N = σ_Q,N/ρ_Q,N,
via a similarity transformation analogous to (<ref>). From the kernel p̂_Q,N one can construct the operators P̂_Q,N, P̂'_Q,N, and P̂''_Q,N as above.

Data-driven basis. We will use the eigenvectors ϕ_j,Q,N of P̂_Q,N as an orthonormal basis of L^2(𝒰,μ_N), and employ the corresponding eigenvalues, 1 = λ_0,Q,N > λ_1,Q,N ≥ ⋯ ≥ λ_N-1,Q,N ≥ 0, to define the data-driven analogs
η_j,Q,N = (λ_j,Q,N^-1 − 1) / (λ_1,Q,N^-1 − 1), j ∈ J_N,
of the η_j in (<ref>), where J_N = { j : λ_j,Q,N > 0 }. The eigenvalue problem for P̂_Q,N is equivalent to a matrix eigenvalue problem for the N×N symmetric matrix P̂ = [ p̂_Q,N(x_i,x_j) ] representing P̂_Q,N. Details on the numerical solution of this problem can be found in <cit.>.
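In this matrix representation, applying K_N and computing L^2(𝒰,μ_N) inner products reduce to matrix-vector operations with 1/N weights; a short Python/NumPy sketch with illustrative function names:
\begin{verbatim}
import numpy as np

def apply_kernel_operator(K, f_vec):
    # K_N f on the samples: integrals against mu_N are empirical means,
    # so (K_N f)(x_i) = (1/N) * sum_n K[i, n] * f_vec[n].
    return K @ f_vec / len(f_vec)

def l2_inner(f_vec, g_vec):
    # <f, g>_{L^2(U, mu_N)} = conj(f) . g / N.
    return np.vdot(f_vec, g_vec) / len(f_vec)
\end{verbatim}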
Note that for kernels k_Q with exponential decay, such as the Gaussian kernels in (<ref>), P̂ can be well approximated by a sparse matrix, allowing our techniques to scale to large N. To establish the convergence of our schemes in the limit of large data, N→∞, we would like to establish a correspondence between the eigenvalues and eigenvectors of P̂_Q,N, accessible from data, and those of P̂_Q; but because these operators act on different spaces, a direct comparison of their eigenvectors is not possible. Therefore, as stated in Section <ref>, we will first establish a correspondence between the eigenvalues and eigenfunctions of P̂_Q,N (P̂_Q) and those of P̂”_Q,N (P̂”_Q), and then show that P̂”_Q,N spectrally converges to P̂”_Q. The latter problem is meaningful since both P̂”_Q,N and P̂”_Q act on C^0(𝒰).

The following correspondence between the spectra of these operators holds:
* λ_j,Q,N is a nonzero eigenvalue of P̂_Q,N iff it is a nonzero eigenvalue of P̂”_Q,N. Moreover, if ϕ_j,Q,N ∈ L^2(𝒰,μ_N) is an eigenfunction of P̂_Q,N corresponding to λ_j,Q,N, then φ_j,Q,N = λ_j,Q,N^-1 P̂'_Q,N ϕ_j,Q,N ∈ C^0(𝒰) is an eigenfunction of P̂”_Q,N corresponding to the same eigenvalue.
* λ_j,Q is a nonzero eigenvalue of P̂_Q iff it is a nonzero eigenvalue of P̂”_Q. Moreover, if ϕ_j,Q ∈ L^2(X,μ) is an eigenfunction of P̂_Q corresponding to λ_j,Q, then φ_j,Q = λ_j,Q^-1 P̂'_Q ϕ_j,Q ∈ C^0(𝒰) is an eigenfunction of P̂”_Q corresponding to the same eigenvalue.

Lemma <ref> is a direct consequence of the definitions of P̂_Q,N and P̂”_Q,N.

Next, we establish the spectral convergence of P̂”_Q,N to P̂”_Q. For that, we will need the following notion of convergence of operators.

Compact convergence. A sequence of operators A_n on a Banach space B is said to converge compactly to an operator A if A_n → A pointwise, and for every bounded sequence of vectors (f_n)_n∈ℕ, f_n ∈ B, the sequence ((A − A_n) f_n)_n∈ℕ has compact closure.

The following proposition states that the data-driven operators P̂”_Q,N converge compactly, and as a result spectrally; for a proof, see <cit.>, Proposition 11, and <cit.>, Theorem 2.4.1.

Let Assumptions <ref>–<ref> hold. Given a trajectory (x_n)_n∈ℕ starting in 𝒰, the corresponding sequence of operators P̂”_Q,N constructed from the observations F(x_0), …, F(x_N-1) converges compactly as N→∞ to P̂”_Q. As a result, the sequence P̂_Q,N converges spectrally, in the sense of Corollary <ref>, to P̂_Q. In particular, since the nonzero spectrum of a compact operator consists only of isolated eigenvalues, the convergence holds for all nonzero eigenvalues of P̂”_Q and the corresponding eigenspaces.

The spectral convergence results above follow from Proposition 13 in <cit.>. We now prove Theorem <ref>. Note that there is some similarity between our methods and papers on the spectral convergence of kernel algorithms, e.g., <cit.>, but our assumptions distinguish Theorem <ref> from previously studied cases. In particular, we do not assume an i.i.d. sequence of observed quantities, or that the sampled sequence (x_n)_n=0^N-1 lies on the support X of the invariant measure (as assumed in <cit.>). Finally, X need not have a manifold structure (as assumed in <cit.> and other manifold learning algorithms).

Proof of Theorem <ref>. The claims of the theorem follow from results analogous to Lemma <ref> and Proposition <ref> for the operators P_Q,N, P'_Q,N, P”_Q,N and P_Q, P'_Q, P”_Q.
Together, Lemma <ref> and Proposition <ref> imply that every eigenpair (λ_j,Q, ϕ_j,Q) of P̂_Q can be consistently approximated by a sequence of eigenpairs (λ_j,Q,N, ϕ_j,Q,N) of P̂_Q,N. Moreover, by Corollary <ref>, as Q→∞, (λ_j,Q, ϕ_j,Q) approximates in turn the eigenpair (λ_j, ϕ_j) of P; that is,
lim_Q→∞ lim_N→∞ λ_j,Q,N = λ_j, lim_Q→∞ lim_N→∞ λ_j,Q,N^-1 ι P̂'_Q,N ϕ_j,Q,N = ϕ_j,
where the second limit is taken with respect to the L^2(X,μ) norm. Since, as can be seen in (<ref>), the Galerkin scheme in Section <ref> can be formulated entirely in terms of the λ_j and the matrix elements ⟨ϕ_i, Vϕ_j^(2)⟩ of the generator, (<ref>) indicates in turn that we can construct a consistent data-driven Galerkin scheme if we can consistently compute approximate generator matrix elements using the data-driven eigenfunctions ϕ_j,Q,N. To that end, we employ finite-difference approximations, as described below.

Finite-difference approximation. The action Vf of the generator on a function f ∈ D(V) is defined via the limit in (<ref>). This suggests that for data sampled discretely at a sampling interval Δ t, we can approximate Vf by a finite-difference approximation <cit.>. For example, the following are first- and second-order approximation schemes for V, respectively:
V_Δ t f = (U^Δ t f − f)/Δ t, V_Δ t f = (U^Δ t f − U^-Δ t f)/(2Δ t).
In the finite-sample case, we approximate V_Δ t by a corresponding finite-difference operator V_Δ t,N : L^2(𝒰,μ_N) → L^2(𝒰,μ_N). For example, in the case of the first-order scheme in (<ref>), V_Δ t,N becomes
V_Δ t,N f(x_n) = (f(x_n+1) − f(x_n))/Δ t, n ∈ {0, …, N−2},
and V_Δ t,N f(x_N-1) = 0. To ensure that the approximations V_Δ t,N f converge to the true function Vf for a class of functions of sufficient regularity, the following smoothness conditions are sufficient:

𝒰 is a C^1+α compact manifold for some α>0, and Φ^t|_𝒰 is generated by a C^α vector field V⃗. Moreover, F|_𝒰 ∈ C^1+α(𝒰; ℝ^d), and the kernel shape function h : ℝ → ℝ is C^1+α. V_Δ t and V_Δ t,N are first-order finite-difference schemes, as in (<ref>) and (<ref>), respectively.

Under Assumption <ref>, the flow Φ^t is generated by a C^α vector field V⃗ : C^1(𝒰) → C^0(𝒰), and the generator V of the Koopman group is an extension of the latter differential operator. Moreover, we can approximate V⃗ by finite-difference schemes V⃗_Δ t : C^0(𝒰) → C^0(𝒰), defined analogously to (<ref>) with U^Δ t replaced by Φ^t. We then have:

Let Assumptions <ref>, <ref>, and <ref> hold. Then, for every i,j ∈ ℕ:
* The eigenfunctions φ_j,Q,N and φ_j,Q from Lemma <ref> lie in C^1+α(𝒰). Moreover, as Δ t → 0, V⃗_Δ t φ_j,Q = V⃗ φ_j,Q + ‖φ_j,Q‖_C^1+α(𝒰) O((Δ t)^α), where the estimate holds uniformly on 𝒰.
* lim_Δ t→0 lim_N→∞ ⟨ϕ_i,Q,N, V_Δ t,N ϕ_j,Q,N⟩_L^2(𝒰,μ_N) = ⟨ϕ_i,Q, V ϕ_j,Q⟩.

To prove Claim (i), note that under Assumption <ref>, for a finite number of delays Q, by (<ref>), p̂_Q is a C^1+α-smooth kernel. Hence, according to <cit.>, the ranges of the integral operators P̂'_Q and P̂'_Q,N, and thus φ_j,Q and φ_j,Q,N, lie in C^1+α(𝒰). Since the vector field V⃗ is C^α, the trajectories are C^1+α-smooth, and therefore V⃗φ_j,Q, the time derivative along the orbit, has a first-order Taylor expansion. The first-order finite-difference scheme then incurs the ‖φ_j,Q‖_C^1+α(𝒰) O((Δ t)^α) error. Claim (ii) is a consequence of Claim (i), in conjunction with the weak convergence of measures in (<ref>) and Lemma <ref>.

In many cases, such as flows induced on inertial manifolds of dissipative PDEs <cit.>, the C^1+α regularity in Assumption <ref> cannot be strengthened. Proposition <ref> provides the basis for numerically approximating V in these cases.
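A minimal Python/NumPy sketch of the first-order scheme and of the resulting generator matrix elements follows; the function name is illustrative, and the antisymmetrization mentioned in the comment is the optional step discussed in the data-driven Galerkin method below.
\begin{verbatim}
import numpy as np

def generator_matrix(phi, dt):
    """First-order finite-difference matrix elements
    V_ij = <phi_i, V_{dt,N} phi_j>_{L^2(mu_N)}, for a basis array phi of
    shape (N, m) sampled along the orbit. Antisymmetrizing the result,
    V <- (V - V.conj().T)/2, can improve skew-symmetry in practice."""
    N = phi.shape[0]
    dphi = np.zeros_like(phi, dtype=complex)
    dphi[:-1] = (phi[1:] - phi[:-1]) / dt  # V_{dt,N} phi; last row set to 0
    return phi.conj().T @ dphi / N         # empirical L^2(mu_N) pairing
\end{verbatim}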
If M, 𝒰, V⃗, F, and h have a higher degree of smoothness, say C^r for some r ≥ 2, then taking V_Δ t to be an r-th order finite-difference scheme would lead to an improved, O((Δ t)^r), convergence.

Data-driven Galerkin method. Using the η_j,Q,N from (<ref>), we define the data-driven normalized basis vectors ϕ̂_j^(p) = ϕ_j,Q,N / η_j,Q,N^p/2, j ∈ J_N (cf. the ϕ_j^(p) from Step 1 in Section <ref>), and the associated Galerkin approximation spaces H^p_Q,N,m = span{ϕ̂_j^(p)}_j=1^m ⊆ L^2(𝒰,μ_N), m ≤ |J_N|, where we abbreviate H^p_Q,N,|J_N| =: H^p_Q,N and H^0_Q,N =: H_Q,N. We also define the positive-semidefinite, self-adjoint operator Δ_Q,N : H_Q,N → H_Q,N, where
Δ_Q,N f = ∑_j∈J_N η_j,Q,N c_j ϕ_j,Q,N, f = ∑_j∈J_N c_j ϕ_j,Q,N.
This operator is a data-driven analog of Δ in (<ref>). With these definitions and the finite-difference approximation of V described above, we pose the following data-driven analog of the Galerkin approximation in Definition <ref>:

Find γ ∈ ℂ and z ∈ H^2_Q,N,m such that for all f ∈ H_Q,N,m, A_Δ t,Q,N(f,z) = γ⟨ f, z ⟩_L^2(𝒰,μ_N), where A_Δ t,Q,N : H_Q,N × H^2_Q,N → ℂ is the sesquilinear form defined as
A_Δ t,Q,N(f,z) = ⟨ f, V_Δ t,N z ⟩_L^2(𝒰,μ_N) − θ⟨ f, Δ_Q,N z ⟩_L^2(𝒰,μ_N).

Numerically, this is equivalent to solving a matrix generalized eigenvalue problem analogous to that in (<ref>), viz.
A c⃗ = λ B c⃗,
where A and B are m × m matrices with elements
A_ij = A_Δ t,Q,N(ϕ_i,Q,N, ϕ_j,Q,N^(2)) = V_ij/η_j,Q,N − θΔ_ij, V_ij = ⟨ϕ_i,Q,N, V_Δ t,N ϕ_j,Q,N⟩_L^2(𝒰,μ_N), Δ_ij = δ_ij,
B_ij = ⟨ϕ_i,Q,N, ϕ_j,Q,N^(2)⟩_L^2(𝒰,μ_N) = η_i,Q,N^-1 δ_ij,
respectively, and c⃗ = (c_1, …, c_m)^⊤ is a column vector in ℂ^m containing the expansion coefficients of the solution z = ∑_j=1^m c_j ϕ̂_j^(2) in the {ϕ̂_j^(2)} basis of H^2_Q,N,m. Analogously to the continuous case, we define a data-driven Dirichlet energy functional E_Q,N on H^2_Q,N, given by E_Q,N(f) = ⟨ f, Δ_Q,N f ⟩_L^2(𝒰,μ_N), and use that functional to order the computed eigenfunctions in order of increasing Dirichlet energy. Note that, unless an antisymmetrization is explicitly performed, in the data-driven setting V_ij will generally not be equal to −V_ji, and thus Re γ will not be exactly equal to −θ E_Q,N(z) (cf. Section <ref>). Nevertheless, in practice we observe that Re γ ≈ −θ E_Q,N(z), at least for the leading eigenfunctions. For any fixed m, and up to similarity transformations, the matrices A and B converge to the corresponding matrices in the variational eigenvalue problem in (<ref>) in the iterated limits N→∞, followed by Δ t→0, followed by Q→∞ (in that order). We therefore conclude that the data-driven Galerkin method in Definition <ref> is consistent (as Δ t → 0 and Q,N → ∞) with the Galerkin method in Definition <ref>, which is in turn consistent (as m → ∞) with the weak eigenvalue problem for the regularized generator L_θ in Definition <ref>.

§ RESULTS AND DISCUSSION

In this section, we apply the methods described in Sections <ref>–<ref> to two ergodic dynamical systems with mixed spectrum, constructed as products of either a mixing flow on the 3-torus or the L63 system with circle rotations.
Our objectives are to demonstrate that (i) the results of Theorem <ref> and Corollaries <ref> and <ref> hold; that is, the eigenfunctions ϕ̂_j of P_Q,N from (<ref>) approximate eigenfunctions of U^t; and (ii) the eigenvalues obtained using the Galerkin scheme in Definition <ref> are consistent with those expected theoretically.

§.§ Two systems with mixed spectrum

The first system studied below is based on a strongly mixing flow on the 3-torus introduced by Fayad <cit.>. The flow, denoted Φ_𝕋^3^t, is given by the solution of the ordinary differential equation (ODE) d(x,y,z)/dt = V⃗(x,y,z), where (x,y,z) ∈ 𝕋^3, and V⃗ is the smooth vector field
V⃗(x,y,z) = ν⃗ / φ(x,y,z), φ(x,y,z) = 1 + ∑_k=1^∞ (e^-k/k) Re[ ∑_|l|≤k e^ik(x+y)+ilz ],
parameterized by the constant frequency vector ν⃗. Hereafter, we set ν⃗ = (√2, √10, 1)^⊤. Note that the orbits under Φ_𝕋^3^t are the same as those of the ergodic, non-mixing linear flow with constant vector ν⃗. Φ_𝕋^3^t has a unique Borel, invariant, ergodic probability measure μ with density φ / ∫_M φ dLeb relative to the Lebesgue measure. Such flows are also called reparameterized flows, as ν⃗ is scaled by the function φ at each point (x,y,z) ∈ 𝕋^3. This system is strongly mixing with respect to its invariant measure; in particular, its generator has purely continuous spectrum <cit.>. To construct an associated mixed-spectrum system, we take the product Φ_𝕋^3^t × Φ_ω^t with a periodic flow Φ_ω^t on S^1, defined as
dΦ_ω^t(α)/dt = ω, ω = 1.
Thus, the state space of the product system is M = 𝕋^3 × S^1 = 𝕋^4. Note that in this example the attracting set X is smooth and coincides with the state space, M = X; in particular, all states sampled experimentally lie exactly on X. Moreover, the Koopman generator V : D(V) → L^2(X,μ) is a skew-adjoint extension of the differential operator V⃗⊕ω⃗ : C^∞(X) → L^2(X,μ), where ω⃗ : C^∞(S^1) → C^∞(S^1) is the differential operator f ↦ ω⃗(f) := ω f'. Since Φ_ω^t has a pure point spectrum consisting of integer multiples of iω, and Φ_𝕋^3^t has no nonzero eigenvalues, the discrete spectrum of the product system is { ikω, k ∈ ℤ }.

The second system that we study is based on the L63 system <cit.>. This system is known to have a chaotic attractor X_Lor ⊂ ℝ^3 with fractal dimension 2.0627160 <cit.>, supporting a physical invariant measure <cit.>, possessing a compact absorbing ball <cit.>, and exhibiting mixing dynamics <cit.>. The latter implies that the generator V of the system has only constant eigenfunctions, corresponding to the eigenvalue 0. The flow, denoted Φ^t_Lor, is generated by a smooth vector field V⃗ ∈ C^∞(ℝ^3; ℝ^3), whose components at (x,y,z) ∈ ℝ^3 are
V^(x) = σ(y−x), V^(y) = x(ρ−z) − y, V^(z) = xy − βz.
Throughout, we use the standard parameter values β = 8/3, ρ = 28, σ = 10. As in the torus case, we form the product Φ^t_Lor × Φ_ω^t with the rotation Φ_ω^t in (<ref>), leading to a mixed-spectrum system with the same discrete spectrum { ikω, k ∈ ℤ }. Note that, unlike the torus-based system, the attracting set X = X_Lor × S^1 is a strict subset of the state space M = ℝ^3 × S^1.

For each product system, we define a continuous map F : M → ℝ^3 coupling the degrees of freedom of the continuous-spectrum subsystem with the rotation.
In the case of the torus-based system, we define F(x,y,z,α) = (F_1, F_2, F_3), (x,y,z) ∈ 𝕋^3, α ∈ S^1, via the additive coupling
F_1 = sinα + sin x, F_2 = cosα + sin y, F_3 = sin(2α) + sin z.
In the case of the L63-based system, the coupling is nonlinear, with F(x,y,z,α) = (F_1, F_2, F_3), (x,y,z) ∈ ℝ^3, α ∈ S^1, and
F_1 = sin(α + x), F_2 = cos(2α + y), F_3 = cos(α + z).

§.§ Experimental results

We generated numerical trajectories x_0, x_1, …, x_N-1 of the torus- and L63-based systems described above, starting in each case from an arbitrary initial condition y ∈ M. In the torus experiments, the system is always on the attractor, so the starting state x_0 in the training data was set to y. In the L63 experiments, we let the system relax towards the attractor and set x_0 to a state sampled after a long spinup time (4000 time units); that is, we formally assume that y (and therefore x_0) lies in the basin of the physical measure associated with X. In both cases, the number of samples was N = 50,000, the integration time step was 0.01, and the number of delays was Q = 2000. Gaussian kernels k_Q from (<ref>) were used throughout. We employed one of Matlab's built-in ODE solvers to compute the trajectories, and generated time series F(x_0), F(x_1), …, F(x_N-1) by applying the observation maps in (<ref>) and (<ref>) to the respective states x_n. Portions of the observable time series from each system are displayed in Fig. <ref>. Note that the x_n were not presented to our kernel algorithm.
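For reference, the following Python/SciPy sketch reproduces the data-generation step for the L63-based experiment; the solver, tolerances, and initial state below are arbitrary illustrative choices, not necessarily those used in our Matlab experiments.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def l63_product_series(N=50000, dt=0.01, spinup=4000.0, omega=1.0):
    """Sample the L63 x S^1 product flow and apply the nonlinear
    observation map F of the L63-based example (a sketch under stated
    assumptions; tolerances and the initial state are arbitrary)."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    def v(t, u):
        x, y, z = u
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
    # Relax to the attractor, then sample N states at interval dt.
    u0 = solve_ivp(v, (0.0, spinup), [1.0, 1.0, 1.0], rtol=1e-8).y[:, -1]
    t = dt * np.arange(N)
    sol = solve_ivp(v, (0.0, t[-1]), u0, t_eval=t, rtol=1e-8)
    x, y, z = sol.y
    alpha = omega * t  # circle rotation, arbitrary initial phase 0
    F = np.stack([np.sin(alpha + x), np.cos(2 * alpha + y),
                  np.cos(alpha + z)], axis=1)
    return F
\end{verbatim}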
The matrix shown in Fig. <ref> is, to a good approximation, skew-symmetric, consistent with the fact that V is a skew-adjoint operator, and exhibits prominent 2 × 2 diagonal blocks associated with the eigenspaces of V approximated by (ϕ̂_1, ϕ̂_2), (ϕ̂_3, ϕ̂_4), …, in agreement with Corollary <ref>.

Figure <ref> shows the approximated eigenvalues γ_j of the regularized generator L_θ obtained from this basis using the Galerkin scheme in Definition <ref>, with diffusion regularization parameter θ = 10^{-4} and spectral order parameter m = 50. Each plot in Fig. <ref> shows the first 20 eigenvalues, ordered by the Dirichlet energy E(z_j) of the corresponding eigenfunctions z_j (recall that Re γ_j ≈ −θ E_Q,N(z_j)). According to Section <ref>, the imaginary parts of the γ_j should approximate the Koopman eigenfrequencies j(k)ω, where j is an integer-valued function giving the frequency of the Koopman eigenfunction with the k-th smallest Dirichlet energy. In Fig. <ref>, the γ_j are indeed equal to integer multiples of ω = 1 to a good approximation for the first eigenvalues in order of increasing Dirichlet energy. For indices k close to m, the accuracy of the eigenvalues begins to deteriorate. This is due to two facts: (i) even with a "perfect" basis {ϕ̂_j}, eigenfunctions of higher Dirichlet energy (and stronger oscillatory behavior) require increasingly higher-order Galerkin approximation spaces; (ii) at finite sample numbers N, the quality of the data-driven elements ϕ̂_j degrades at large j.

§.§ Discussion

The examples presented in Sections <ref> and <ref> are Cartesian products of weakly mixing and quasiperiodic flows, with their phase variables combined through an observation map. We begin with some observations about our kernel method applied to Cartesian products.

Cartesian products. Let (X, Φ_X^t, μ_X) and (Y, Φ_Y^t, μ_Y) be two ergodic flows on compact metric spaces with purely continuous and pure-point spectra, respectively. We are interested in the measure-preserving, mixed-spectrum dynamical system (X × Y, Φ_X^t × Φ_Y^t, μ_X × μ_Y). It is well known that the space L^2(X × Y, μ_X × μ_Y) is densely spanned by products of the form {f ⊗ g : f ∈ L^2(X, μ_X), g ∈ L^2(Y, μ_Y)}. Recall that the observation map F is the basis of all our data-driven operators.

Corollary <ref> below is a direct consequence of Proposition <ref>, and gives an "observability" condition that must be fulfilled by the observation map F in order for the methods presented here to yield non-trivial results. Let (X, Φ_X^t, μ_X) and (Y, Φ_Y^t, μ_Y) be as described above, and let F ∈ L^2(X × Y, μ_X × μ_Y) be the sum F = ∑_{n=1}^∞ f_n ⊗ g_n. Then the projection of F onto the discrete-spectrum subspace 𝒟 is F_𝒟 = ∑_{n=1}^∞ E(f_n) g_n, where E(f_n) = ∫_X f_n dμ_X. Hence, a necessary and sufficient condition for P to be non-trivial is that E(f_n) ≠ 0 for at least one n ∈ ℕ.

Kernels with a small number of delays. An implicit assumption in the approximation of the operator P in (<ref>) by the operator P_Q in (<ref>) with finitely many delays Q is that Q is large enough for the asymptotic analysis of Lemma <ref> to hold. When Q is small, d_Q is closer to a proper metric, and therefore the entries K_ij = exp(−d_Q(x_i, x_j)²/ϵ) of the kernel matrix K decay rapidly away from the diagonal i = j. Then K is close to a diagonal matrix, and the associated Markov matrix P is close to the identity. On the other hand, for large Q, d_Q becomes a pseudo-metric, and P is not necessarily close to a diagonal matrix.
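To make this point concrete, the following sketch builds the delay-coordinate distance and the Gaussian kernel matrix; the normalization of d_Q² by Q and the row-stochastic normalization used for P are our assumptions standing in for the exact definitions in (<ref>).

```python
import numpy as np
from scipy.spatial.distance import cdist

def delay_embed(series, Q):
    """Stack Q consecutive observations: row i is (F(x_i), ..., F(x_{i+Q-1}))."""
    n = len(series) - Q + 1
    return np.hstack([series[q:q + n] for q in range(Q)])

def kernel_and_markov(series, Q, eps):
    """Gaussian kernel on the delay-coordinate (pseudo-)metric d_Q and a
    row-stochastic (Markov) normalization of it."""
    X = delay_embed(series, Q)
    d2 = cdist(X, X, "sqeuclidean") / Q   # d_Q^2, averaged over delays
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)  # row-stochastic normalization
    return K, P

# For Q = 1, K is strongly diagonally dominant and P is near the identity;
# for large Q, off-diagonal entries carry substantial weight.
```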
Figure <ref> shows how the Koopman eigenvalues computed for the two examples from (<ref>) and (<ref>) cluster near 1 for Q = 1 and decay more rapidly for Q = 2000.

Systems with purely continuous spectra. An important assumption of our kernel-based method is that the dynamics has Koopman eigenvalues, i.e., the discrete-spectrum subspace 𝒟 contains non-constant functions. This underlies the ability of our regularized operator L_θ in (<ref>) to be a suitable substitute for V (Theorem <ref>). In fact, by Theorem <ref>, in the limit of infinitely many delays Q → ∞, if 𝒟 contains only constant functions, then the kernels k_Q, p_Q converge to 0 (in the L^2 sense). However, when using finitely many delays, k_Q ≠ 0, and correspondingly the kernel p_Q obtained by normalization of k_Q is not close to 0. It is not currently understood how the operator P_Q should behave for purely continuous spectrum systems (i.e., 𝒟 = {1_X}) with Q < ∞. One consequence of Theorem <ref>(ii) is that in the limit Q → ∞, the continuous-spectrum subspace 𝒟^⊥ is annihilated by the integral operator P_∞, rendering this operator ineffective for studying or reconstructing the mixing component of the dynamics. In particular, for weak-mixing systems, all but one of the eigenvalues of P_∞ should equal zero. The numerical results shown in Fig. <ref> indicate that the finite-rank, data-driven operator P_Q,N for the L63 system still has nonzero eigenvalues strictly less than 1, but these eigenvalues are clustered around a small value (λ̂_j ≈ 0.1). This behavior is in agreement with Theorem <ref>, according to which all the eigenvalues of P_Q other than 1 should converge to zero as Q → ∞. Note that the matrix representation of V (also shown in Fig. <ref>) is still skew-symmetric to a good approximation, since V is a skew-adjoint operator. Intriguingly, the matrix has a 2 × 2 block-diagonal form, despite V having no non-constant eigenfunctions. This form of the generator matrix has some aspects in common with the recent results of Brunton et al. <cit.>, who obtained a bi-diagonal matrix representation of the L63 generator in a data-driven basis from Hankel matrix analysis. In Fig. <ref>, the lack of Koopman eigenfunctions is evident from the time-series plots of the numerical eigenfunctions ϕ̂_j, which are clearly non-periodic. Moreover, a phase-space plot of ϕ̂_2 illustrates that it is a highly rough function on the Lorenz attractor.

In light of the above, the results established in this work have implications for delay-embedding techniques, as they point to a tradeoff between reconstruction of the system's state-space topology in delay-embedding space (favored by large numbers of delays) and the ability of operators for data analysis, such as P_Q, to adequately represent the mixing component of the dynamics. Nevertheless, the ability to consistently approximate the quasiperiodic dynamics through Koopman eigenfunctions is still useful, as it allows identification and efficient modeling (e.g., via (<ref>)) of observables with high predictability. At the very least, the "negative" result described above provides a reference point that may aid the design of delay-embedding methodologies aiming to reconstruct the full structure of the dynamics. One of the goals of our future work is to investigate the behavior of the techniques presented here away from the asymptotic limit Q → ∞ in the presence of a continuous spectrum.

Acknowledgements. Dimitrios Giannakis received support from ONR YIP grant N00014-16-1-2649, NSF grant DMS-1521775, and DARPA grant HR0011-16-C-0116.
Suddhasattwa Das is supported as a postdoctoral research fellow through the first of these grants. The authors are grateful to L.-S. Young for her suggestions.
Intrinsic 2D ferromagnetism, quantum anomalous Hall conductivity, and fully-spin-polarized edge states of FeBr_3 monolayer

Shi-Hao Zhang and Bang-Gui Liu ([email protected])

Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China

School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China

It is of great interest to explore intrinsic two-dimensional ferromagnetism and to seek better two-dimensional quantum anomalous Hall insulator materials. Here, we show that the FeBr_3 monolayer is an intrinsic two-dimensional ferromagnetic material whose Curie temperature is 140 K, thanks to its strong spin exchange interaction and giant uniaxial magnetic anisotropy. Our phonon spectra and mechanical analysis indicate that the FeBr_3 monolayer is dynamically and mechanically stable. Our electronic structure calculation shows that there is one Dirac cone at the K point in the Brillouin zone and that the spin-orbit coupling opens a semiconductor gap of 33.5 meV. Further tight-binding analysis reveals that the Chern number is equal to 1, that there is a quantum anomalous Hall conductivity σ_xy = e^2/h, and that the chiral edge states are fully spin-polarized when an edge is created. Furthermore, it is shown that the main results are not affected by electron correlation effects or biaxial strain. Therefore, the FeBr_3 monolayer, as a 2D material, should be useful for spintronic applications.

§ INTRODUCTION

The recent experimental discovery of intrinsic two-dimensional ferromagnetism <cit.> has drawn attention to atomic monolayers as two-dimensional materials. As the Mermin-Wagner theorem implies <cit.>, it is very difficult to realize true two-dimensional ferromagnetism at non-zero temperature. In addition to strong spin exchange interactions, a strong uniaxial magnetic anisotropy is necessary to achieve a high Curie temperature. Most importantly, key properties must be robust against strain in such two-dimensional materials in order to make heterostructures for practical devices. If the spin-orbit coupling opens a global gap and there exists a non-zero Chern number indicating nontrivial topological properties, the quantum anomalous Hall effect can be achieved in such two-dimensional ferromagnetic (or ferrimagnetic) materials. It is therefore of much interest to achieve stable intrinsic 2D ferromagnetism with high Curie temperature and high spin polarization.

The quantum anomalous Hall (QAH) effect was first theoretically predicted by introducing magnetism into a two-dimensional honeycomb model <cit.>, where the magnetism breaks the time-reversal symmetry and an external magnetic field is not necessary. Subsequently, transition-metal (TM) doped topological insulators <cit.>, graphene-based materials <cit.>, quantum wells <cit.>, and heterostructures <cit.> were theoretically predicted to realize the QAH effect. In these systems, the transition-metal atoms provide the magnetism, the spin-orbit coupling (SOC) opens a global band gap, and QAH conductivity and dissipationless chiral edge states then appear.
These features can help QAH insulators find applications in low-power-consumption electronic devices <cit.>. On the experimental side, however, the QAH effect has only been observed in Cr- or V-doped (Bi,Sb)_2Te_3 thin films at very low temperatures (<85 mK) <cit.>. It has been theoretically predicted that some transition-metal compound monolayers can host the quantum anomalous Hall effect <cit.>, but it is still highly desirable to seek better, experimentally realizable two-dimensional QAH insulator materials, especially with fully-spin-polarized edge states.

Here, we propose that the FeBr_3 monolayer can host intrinsic 2D ferromagnetism, the quantum anomalous Hall effect, and fully-spin-polarized edge states. Our investigation shows that it is dynamically and mechanically stable and that its Curie temperature is 140 K thanks to the strong perpendicular uniaxial magnetic anisotropy. There is one Dirac point at the K point, and the spin-orbit coupling creates a global gap of 33.5 meV. Our tight-binding calculation with maximally localized Wannier functions reveals that the quantum anomalous Hall conductivity is σ_xy = e^2/h and that there are fully-spin-polarized chiral edge states when an edge is made. The key properties of the system remain robust when biaxial strain is applied. More detailed results are presented in the following.

§ COMPUTATIONAL METHODS

First-principles spin-polarized calculations are done with the projector-augmented wave (PAW) method <cit.> as implemented in the Vienna Ab initio Simulation Package (VASP) <cit.>. The generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof <cit.> is used for the exchange-correlation potential. The Brillouin zone integration is carried out with a Γ-centered 15×15×1 Monkhorst-Pack grid <cit.>. The structures are fully optimized to ensure that all the Hellmann-Feynman forces on each atom are less than 0.01 eV/Å and that the total energy difference between two successive steps is smaller than 10^-6 eV. To confirm the structural stability of the monolayers, phonon spectra are calculated in terms of density functional perturbation theory as implemented in the PHONOPY program <cit.>. The spin-orbit coupling is taken into account to study the magnetocrystalline anisotropy. Band dispersion calculations with the GGA+U functional <cit.> are carried out for further confirmation. The tight-binding Hamiltonian is constructed with the help of maximally localized Wannier functions (MLWFs) <cit.> obtained from the DFT bands, as implemented in the Wannier90 code <cit.>. The surface-state spectrum of a semi-infinite system is obtained from the surface Green's function <cit.>. The calculation of the Berry curvature is carried out with a denser k mesh (121×121×1).

§ RESULTS AND DISCUSSION

§.§ Structure and stability

On the experimental side, bulk FeBr_3 assumes a layered structure like graphite: each FeBr_3 monolayer is similar to graphene, and the monolayers are bound through van der Waals interactions. As for magnetic ordering, it is ferromagnetic within each monolayer and antiferromagnetic between nearest monolayers. We study a single FeBr_3 monolayer by means of first-principles calculations. The calculational model is constructed by repeating the FeBr_3 monolayer and adding a vacuum layer between adjacent FeBr_3 monolayers. The optimized structure of the FeBr_3 monolayer is like that of the CrI_3 monolayer.
Each Fe is coordinated by six Br atoms with an Fe-Br bond length of 2.42 Å, and the Fe atomic layer is sandwiched between the two Br layers (the Br-Br plane distance is 2.75 Å), as shown in Fig. 1. The unit cell includes two Fe atoms and six Br atoms, and the equilibrium lattice constant is 6.29 Å. The Fe atoms form a honeycomb lattice with an Fe-Fe distance of 3.63 Å, and the two nearest Fe atoms share two Br atoms belonging to different Br layers. The Fe d electrons make a local magnetic moment of 1 μ_B, staying in the low-spin state, and the ground-state phase is ferromagnetic because the antiferromagnetic structures are higher in energy by at least 75 meV per formula unit.

The formation energy, defined as E_f = E_FeBr_3 − (1/4)μ_Fe − (3/4)μ_Br, is calculated to be −0.30 eV/atom. The negative value favors experimental synthesis. We also calculate the interlayer binding energy as E_b = (E_monolayer − E_bulk/N)/S, where E_monolayer and E_bulk are the total energies of the monolayer and of bulk FeBr_3 per unit cell, S is the area of the monolayer per unit cell, and N is the number of layers in bulk FeBr_3. Our calculated binding energy of the FeBr_3 monolayer is 11.3 meV/Å², smaller than the experimental value for graphene (23.3 meV/Å², vs. graphite) <cit.>. This implies that the FeBr_3 monolayer can be exfoliated from the bulk FeBr_3 material. In Fig. 1c, we also present the phonon spectra of the FeBr_3 monolayer. Clearly, there are no imaginary frequencies, which ensures the dynamical stability of the FeBr_3 monolayer.

To evaluate the mechanical stability of the FeBr_3 monolayer, we calculate the elastic moduli, obtaining C_11 = 35.9 N/m, C_22 = 36.1 N/m, C_12 = 11.6 N/m, and C_66 = 12.9 N/m. The Young's modulus, nearly isotropic in the plane, is equal to 32 N/m. These values satisfy the stability criteria for two-dimensional materials (C_11 C_22 > C_12 C_21 and C_66 > 0) <cit.>. For a square flake, the ratio between the out-of-plane deformation h induced by its own gravity and the edge length l is h/l ≈ (ρgl/Y)^{1/3}, where g is the gravitational acceleration and ρ the areal density of the two-dimensional material (2.89×10^-6 kg/m²). When l is 100 μm, the ratio is 4.4×10^-4, which is so small that the FeBr_3 monolayer can keep its stability without the support of a substrate.

§.§ Electronic structure

The spin-resolved density of states (DOS) of the FeBr_3 monolayer is presented in Fig. 2. The triangular antiprismatic crystal field splits the Fe d orbitals into a triplet (d_xy, d_x²−y², and d_z²) and a doublet (d_yz and d_xz). For the spin-up channel, the triplet DOS is peaked at −0.7 eV and the doublet DOS at 0.8 eV; for the spin-down channel, they are peaked at −0.3 and 1.2 eV, respectively. In terms of crystal field theory, the relative strength of the crystal field splitting (ΔE_cf ∼ 1.5 eV) and the spin exchange splitting (ΔE_ex ∼ 0.4 eV) leads to a low-spin state for the Fe atom. Each Fe atom has five valence d electrons, and they all occupy the triplet bands: three in the spin-up channel and two in the spin-down channel. This yields 1 μ_B per Fe atom. The energy bands without the spin-orbit coupling (SOC) are presented in Fig. 3a. The spin-up energy bands show the features of a semiconductor with a direct gap of 1.23 eV, and the valence band maximum (VBM) and conduction band minimum (CBM), both located at the M point, are well separated from the Fermi level.
As for the spin-down energy bands, it is clear that the six triplet bands (from the two Fe atoms) are located within the spin-up gap, and there is one Dirac point (derived mainly from the d_xy and d_x²−y² orbitals) located at the K point. Therefore, the FeBr_3 monolayer is a half metal, implying that it is fully spin-polarized. Our further test calculations show that this electronic structure is robust when a correlation parameter U is introduced and varied from 0.5 eV to 2 eV.

The band structure with SOC taken into account is presented in Fig. 3b. Importantly, a semiconductor gap opens at the Fermi level, and therefore the FeBr_3 monolayer is a narrow-gap semiconductor. The 65 meV gap at the Dirac point can be attributed to the large SOC effect of the Fe d orbitals in this particular octahedral environment. The energy degeneracy at the Γ point near the Fermi level is also lifted, which makes the lower-energy band at the Γ point become the new conduction band minimum. As a result, the global semiconductor gap of the FeBr_3 monolayer is 33.5 meV. We also confirm the key results with the U parameter included: when U reaches 2 eV, the FeBr_3 monolayer is still a semiconductor with a global gap of 60 meV.

§.§ Intrinsic 2D ferromagnetism

The ferromagnetic order arises from the Fe moments of 1 μ_B, and the Fe spins form a honeycomb lattice with a spacing of 3.63 Å. According to the Goodenough-Kanamori-Anderson (GKA) rules <cit.>, the superexchange interaction along a d-p-d path usually tends to be antiferromagnetic if the cation-anion-cation bond angle is 180°, but it can be ferromagnetic if the bond angle is near 90°. The FeBr_3 monolayer belongs to the latter case because the Fe-Br-Fe angle is 97°, which is close to 90°. As a result, the spin exchange interaction in the FeBr_3 monolayer is ferromagnetic, even under biaxial strain up to 5%.

If the 2D spin system were isotropic, the Mermin-Wagner theorem <cit.> implies that the strong thermal fluctuations of gapless long-wavelength modes would destroy the 2D ferromagnetism at finite temperature. In order to achieve true 2D magnetism at finite temperature, some uniaxial magnetic anisotropy is necessary. Our first-principles calculations reveal that the total energy of the FeBr_3 monolayer depends on the spin orientation. The out-of-plane direction is favorable, and the in-plane direction is higher in energy by 0.6 meV per formula unit, which is comparable to the values for CrXTe_3 (X = Si, Ge, Sn; the magnetic anisotropy energies are 0.069-0.419 meV/f.u.) <cit.>. The magnetic anisotropy energy originates from the SOC effect. Our first-principles calculation reveals that the z-component orbital moment of each Fe atom is ⟨L_z⟩ = 0.215 ħ. This large orbital moment can explain the large magnetic anisotropy.

The magnetic moment of 1 μ_B means that the Fe spin is s = 1/2, so that (S_i^z)² is always 1/4, implying no single-ion anisotropy. Because of the crystal symmetry of the FeBr_3 monolayer, the effective spin Hamiltonian can be written as

H = −∑_ij (J_ij S_i·S_j + λ_ij S^z_i S^z_j),

where the J term is the isotropic spin exchange interaction and the λ term describes the Ising exchange anisotropy. The exchange constant J_ij and the Ising anisotropy λ_ij are expected to be limited to the nearest ⟨ij⟩ and next-nearest ⟨⟨ij⟩⟩ neighbors, namely J_⟨ij⟩ = J_1, λ_⟨ij⟩ = λ_1, J_⟨⟨ij⟩⟩ = J_2, and λ_⟨⟨ij⟩⟩ = λ_2.
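Before quoting the fitted parameters, we illustrate how T_c can be estimated from this Hamiltonian in the classical-spin approximation. The Python sketch below is a minimal Metropolis Monte Carlo implementation, not the authors' code: the lattice bookkeeping, update scheme, and system size are our illustrative choices, and the exchange parameters take the values reported in the next paragraph.

```python
import numpy as np

KB = 0.08617  # Boltzmann constant in meV/K

def honeycomb_neighbors(L):
    """Periodic L x L honeycomb lattice with a two-site basis.
    Site index: ((i % L)*L + (j % L))*2 + s, sublattice s = 0 (A) or 1 (B).
    Returns nearest (A-B) and next-nearest (same-sublattice) neighbor lists."""
    idx = lambda i, j, s: ((i % L) * L + (j % L)) * 2 + s
    nn = [[] for _ in range(2 * L * L)]
    nnn = [[] for _ in range(2 * L * L)]
    for i in range(L):
        for j in range(L):
            a, b = idx(i, j, 0), idx(i, j, 1)
            for di, dj in [(0, 0), (-1, 0), (0, -1)]:  # three A-B bonds per A site
                nn[a].append(idx(i + di, j + dj, 1))
                nn[idx(i + di, j + dj, 1)].append(a)
            for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]:
                nnn[a].append(idx(i + di, j + dj, 0))
                nnn[b].append(idx(i + di, j + dj, 1))
    return nn, nnn

def site_energy(spins, i, nn, nnn, J1, J2, l1, l2, S=0.5):
    """Energy of site i in H = -sum_ij (J_ij S_i.S_j + lambda_ij Si^z Sj^z),
    with classical spins S_i = S * n_i for unit vectors n_i."""
    e = 0.0
    for j in nn[i]:
        e -= J1 * spins[i] @ spins[j] + l1 * spins[i, 2] * spins[j, 2]
    for j in nnn[i]:
        e -= J2 * spins[i] @ spins[j] + l2 * spins[i, 2] * spins[j, 2]
    return S * S * e

def sweep(spins, T, nn, nnn, params, rng):
    """One Metropolis sweep with random spin reorientations."""
    beta = 1.0 / (KB * T)
    for i in rng.permutation(len(spins)):
        old, e_old = spins[i].copy(), site_energy(spins, i, nn, nnn, *params)
        trial = rng.standard_normal(3)
        spins[i] = trial / np.linalg.norm(trial)
        dE = site_energy(spins, i, nn, nnn, *params) - e_old
        if dE > 0 and rng.random() > np.exp(-beta * dE):
            spins[i] = old  # reject the move

# T_c is read off from the temperature dependence of <|M_z|>:
L, params = 24, (48.4, -0.77, 1.70, -0.45)  # J1, J2, lambda1, lambda2 (meV)
nn, nnn = honeycomb_neighbors(L)
rng = np.random.default_rng(1)
spins = np.tile([0.0, 0.0, 1.0], (2 * L * L, 1))
for T in (50, 100, 140, 180):
    for _ in range(400):
        sweep(spins, T, nn, nnn, params, rng)
    print(T, abs(spins[:, 2].mean()))
```

In practice, larger supercells (the paper uses 100×100), longer equilibration, and averaging over many measurement sweeps are needed for a reliable T_c estimate.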
By comparing the total energies of ferromagnetic and antiferromagnetic states with the spins oriented parallel and perpendicular to the Fe plane, we obtain J_1 = 48.4 meV, J_2 = −0.77 meV, λ_1 = 1.70 meV, and λ_2 = −0.45 meV. This is a two-dimensional quantum spin model with Ising anisotropy. It is not easy to obtain its accurate phase transition temperature, but we can obtain a lower limit by performing a Monte Carlo simulation of the spin model in the classical-spin approximation. Using a 100×100 spin supercell, the Monte Carlo simulation shows that the Curie temperature T_c is about 140 K. The spin-wave gap Δ is 2.4 meV (Δ = 3λ_1 + 6λ_2) <cit.>, which guarantees the existence of 2D long-range magnetic ordering at non-zero temperature.

§.§ Quantum anomalous Hall effect

To identify the topological properties of the FeBr_3 monolayer, we calculate the Berry curvature (Ω_z) and the Chern invariant (C) through the effective tight-binding model fitted to the first-principles energy bands. The Berry curvature is defined by <cit.>

Ω_z(k) = −2 ∑_{m ≠ n} Im ⟨Ψ_nk| v_x |Ψ_mk⟩⟨Ψ_mk| v_y |Ψ_nk⟩ / (E_nk − E_mk)²,

where n and m are band indices and the summation runs over the occupied bands. The Chern number is defined as the integral of the Berry curvature Ω_z over the first Brillouin zone (BZ). Our calculation shows that the Chern number is 1. This means that the quantum anomalous Hall conductivity can be expressed as

σ_xy = e^2/h.

Therefore, the FeBr_3 monolayer is a Chern insulator and can host the quantum anomalous Hall effect.

Furthermore, we can construct a semi-infinite sheet of the FeBr_3 monolayer and investigate its edge states using the effective TB model. We present the edge states (and the bulk states) in Fig. 4a. The edge states clearly form a chiral band crossing the Fermi level, because the intrinsic ferromagnetism has broken the time-reversal symmetry. Very importantly, the chiral edge states are fully spin-polarized, which is promising for spintronics applications. We also show the k-resolved Berry curvature in Fig. 4b. Interestingly, the Berry curvature is mainly concentrated at the K and Γ points in the Brillouin zone.

§.§ Effect of biaxial strain

We investigate the robustness of the topological features by applying biaxial strain to the FeBr_3 monolayer. For a given strain η, the stress can be calculated as σ = ∂E/(2S∂η), where S is the area of the unit cell. The strain changes the SOC-induced semiconductor gap E_g, the Curie temperature T_c, and the magnetic anisotropy energy (MAE). We present the strain dependences of the stress, E_g, T_c, and MAE in Fig. 5. The semiconductor gap E_g decreases with strain, reducing to 20.7 meV at a strain of 5%. The biaxial strain of 5% corresponds to a stress of 7.6 N/m. For strains up to 5%, the Chern number, the quantum anomalous Hall effect, and the fully-spin-polarized edge states of the FeBr_3 monolayer all persist. In addition, the biaxial strain enhances the Curie temperature T_c but reduces the magnetic anisotropy energy. Interestingly, a biaxial strain of 5% effectively promotes the Curie temperature to 175 K.

§ CONCLUSION

In summary, we have shown that the FeBr_3 monolayer is an intrinsic two-dimensional ferromagnetic material whose Curie temperature is 140 K, because the distorted octahedral crystal field leads to an unquenched Fe orbital moment and thereby produces a giant uniaxial magnetic anisotropy.
Our phonon spectra and mechanical analysis indicate that the FeBr_3 monolayer is dynamically and mechanically stable. Our electronic structure calculation shows that there is one Dirac cone at the K point in the Brillouin zone and that the spin-orbit coupling opens a global gap of 33.5 meV. Further TB analysis reveals that the Chern number is equal to 1, that there is a quantum anomalous Hall conductivity σ_xy = e^2/h, and that there are fully-spin-polarized edge states when an edge is created. It has been shown that the main results are not affected by the Hubbard U parameter and remain robust when a biaxial strain (up to 5%) is applied. We believe that the FeBr_3 monolayer will be useful for spintronic applications.

This work is supported by the Natural Science Foundation of China (Grant No. 11574366), by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB07000000), and by the Department of Science and Technology of China (Grant No. 2016YFA0300701). The calculations were performed on the Milky Way #2 supercomputer system at the National Supercomputer Center in Guangzhou, Guangzhou, China.

§ REFERENCES

[1] N. Samarth, Nature 546, 216 (2017).
[2] C. Gong et al., Nature 546, 265 (2017).
[3] B. Huang et al., Nature 546, 270 (2017).
[4] N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966).
[5] F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988).
[6] R. Yu, W. Zhang, H.-J. Zhang, S.-C. Zhang, X. Dai, and Z. Fang, Science 329, 61 (2010).
[7] H. Zhang, C. Lazo, S. Blügel, S. Heinze, and Y. Mokrousov, Phys. Rev. Lett. 108, 056802 (2012).
[8] Z. Qiao, W. Ren, H. Chen, L. Bellaiche, Z. Zhang, A. H. MacDonald, and Q. Niu, Phys. Rev. Lett. 112, 116404 (2014).
[9] J. Zhang, B. Zhao, Y. Yao, and Z. Yang, Phys. Rev. B 92, 165418 (2015).
[10] S.-C. Wu, G. Shan, and B. Yan, Phys. Rev. Lett. 113, 256401 (2014).
[11] H. Zhang, J. Wang, G. Xu, Y. Xu, and S.-C. Zhang, Phys. Rev. Lett. 112, 096804 (2014).
[12] T. Cai, X. Li, F. Wang, S. Ju, J. Feng, and C.-D. Gong, Nano Lett. 15, 6434 (2015).
[13] K. F. Garrity and D. Vanderbilt, Phys. Rev. Lett. 110, 116802 (2013).
[14] H. Weng, R. Yu, X. Hu, X. Dai, and Z. Fang, Adv. Phys. 64, 227 (2015).
[15] Y. Ren, Z. Qiao, and Q. Niu, Rep. Prog. Phys. 79, 066501 (2016).
[16] C.-Z. Chang et al., Science 340, 167 (2013).
[17] X. Kou et al., Phys. Rev. Lett. 113, 137201 (2014).
[18] C.-Z. Chang et al., Nat. Mater. 14, 473 (2015).
[19] P. Zhou, C. Q. Sun, and L. Z. Sun, Nano Lett. 16, 6325 (2016).
[20] J. He, X. Li, P. Lyu, and P. Nachtigall, Nanoscale 9, 2246 (2017).
[21] C. Huang, J. Zhou, H. Wu, K. Deng, P. Jena, and E. Kan, Phys. Rev. B 95, 045113 (2017).
[22] X.-L. Sheng and B. K. Nikolić, Phys. Rev. B 95, 201402 (2017).
[23] P. Chen, J.-Y. Zou, and B.-G. Liu, Phys. Chem. Chem. Phys. 19, 13432 (2017).
[24] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
[25] G. Kresse and J. Hafner, Phys. Rev. B 47, 558 (1993).
[26] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[27] H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976).
[28] A. Togo and I. Tanaka, Scr. Mater. 108, 1 (2015).
[29] S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, Phys. Rev. B 57, 1505 (1998).
[30] N. Marzari, A. A. Mostofi, J. R. Yates, I. Souza, and D. Vanderbilt, Rev. Mod. Phys. 84, 1419 (2012).
[31] A. A. Mostofi, J. R. Yates, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun. 178, 685 (2008).
[32] M. P. L. Sancho, J. M. L. Sancho, and J. Rubio, J. Phys. F: Met. Phys. 14, 1205 (1984).
[33] M. P. L. Sancho, J. M. L. Sancho, and J. Rubio, J. Phys. F: Met. Phys. 15, 851 (1985).
[34] R. Zacharia, H. Ulbricht, and T. Hertel, Phys. Rev. B 69, 155406 (2004).
[35] R. C. Andrew, R. E. Mapasha, A. M. Ukpong, and N. Chetty, Phys. Rev. B 85, 125428 (2012).
[36] J. B. Goodenough, Phys. Rev. 100, 564 (1955).
[37] J. Kanamori, J. Appl. Phys. 31, S14 (1960).
[38] P. W. Anderson, Phys. Rev. 115, 2 (1959).
[39] H. L. Zhuang, Y. Xie, P. R. C. Kent, and P. Ganesh, Phys. Rev. B 92, 035407 (2015).
[40] J. L. Lado and J. Fernández-Rossier, 2D Mater. 4, 035002 (2017).
[41] Y. Yao and Z. Fang, Phys. Rev. Lett. 95, 156601 (2005).
[42] G. Y. Guo, Y. Yao, and Q. Niu, Phys. Rev. Lett. 94, 226601 (2005).